Dec 10 15:23:22 crc systemd[1]: Starting Kubernetes Kubelet...
Dec 10 15:23:22 crc restorecon[4695]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 10 15:23:22 crc restorecon[4695]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 10 15:23:22 crc restorecon[4695]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 10 15:23:22 crc restorecon[4695]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 10 15:23:22 crc restorecon[4695]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 10 15:23:22 crc restorecon[4695]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 15:23:22 crc restorecon[4695]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 10 15:23:22 crc restorecon[4695]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 10 15:23:22 crc restorecon[4695]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 10 15:23:22 crc restorecon[4695]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 10 15:23:22 crc restorecon[4695]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 10 15:23:22 crc restorecon[4695]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:22 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 
15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:23 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:23 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 15:23:23 crc 
restorecon[4695]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 10 15:23:23 crc restorecon[4695]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Dec 10 15:23:23 crc restorecon[4695]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 10 15:23:23 crc restorecon[4695]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 10 15:23:23 crc restorecon[4695]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 
15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: 
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 
15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc 
restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 10 15:23:23 crc restorecon[4695]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 10 15:23:23 crc restorecon[4695]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 10 15:23:23 crc restorecon[4695]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0
Dec 10 15:23:23 crc kubenswrapper[4755]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 10 15:23:23 crc kubenswrapper[4755]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Dec 10 15:23:23 crc kubenswrapper[4755]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 10 15:23:23 crc kubenswrapper[4755]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
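The restorecon pass above leaves all of these files alone because their current type, container_file_t, is an SELinux customizable type: the tool reports "not reset as customized by admin" and keeps the existing label, including the per-pod MCS category pair (e.g. s0:c7,c13). To see which target contexts account for most of the skipped paths, the journal can be summarized offline. A minimal sketch in Python, assuming the log was exported to a plain-text file — the file name node.log and the journalctl export step are illustrative, not taken from the log itself:

```python
import re
from collections import Counter

# Matches skip messages of the form seen above:
#   ... restorecon[4695]: /var/lib/kubelet/... not reset as customized by admin
#   to system_u:object_r:container_file_t:s0:c7,c13
SKIP = re.compile(
    r"restorecon\[\d+\]: (?P<path>/\S+) not reset as customized by admin "
    r"to (?P<ctx>\S+)"
)

counts = Counter()
with open("node.log") as f:  # hypothetical export, e.g. journalctl -u kubelet > node.log
    for line in f:
        # finditer, not match: a wrapped dump line can hold several entries
        for m in SKIP.finditer(line):
            counts[m.group("ctx")] += 1

# Top target contexts, i.e. the MCS pairs owning the most skipped files
for ctx, n in counts.most_common(10):
    print(f"{n:6d}  {ctx}")
```

MCS category pairs are assigned per pod, though pods can share one — the two catalog pods above (5225d0e4-… and 1d611f23-…) both carry s0:c7,c13 — so the counts group the skipped files by label rather than strictly by pod.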
Dec 10 15:23:23 crc kubenswrapper[4755]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 10 15:23:23 crc kubenswrapper[4755]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.628146 4755 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.631117 4755 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.631140 4755 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.631148 4755 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.631153 4755 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.631159 4755 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.631164 4755 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.631169 4755 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.631178 4755 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.631184 4755 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.631189 4755 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.631199 4755 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.631204 4755 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.631209 4755 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.631213 4755 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.631218 4755 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.631222 4755 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.631227 4755 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.631232 4755 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.631237 4755 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.631241 4755 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.631246 4755 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 10 15:23:23 crc 
kubenswrapper[4755]: W1210 15:23:23.631251 4755 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.631255 4755 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.631264 4755 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.631269 4755 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.631274 4755 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.631280 4755 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.631287 4755 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.631292 4755 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.631296 4755 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.631305 4755 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.631309 4755 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.631314 4755 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.631320 4755 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.631325 4755 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.631359 4755 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.631366 4755 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.631372 4755 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.631376 4755 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.631383 4755 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
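The long runs of "unrecognized feature gate" warnings here suggest that OpenShift cluster-level gate names (MachineConfigNodes, RouteAdvertisements, and so on) are being handed to the kubelet's upstream feature-gate parser, which only knows the Kubernetes gates: unknown names are warned about and skipped, while known-but-deprecated or GA gates (KMSv1, ValidatingAdmissionPolicy, CloudDualStackNodeIPs) are still applied, with a warning. A minimal Go sketch of that warn-and-skip pattern, as an illustration of the logged behavior rather than the actual k8s.io/component-base implementation:

    package main

    import "log"

    // illustrative subset of the gates this kubelet actually recognizes
    var knownGates = map[string]bool{
        "KMSv1":                     true, // deprecated, still settable
        "ValidatingAdmissionPolicy": true, // GA, still settable
        "CloudDualStackNodeIPs":     true, // GA, still settable
    }

    // setGates mirrors the behavior seen in the log: unknown names produce
    // a warning and are ignored, known names are applied.
    func setGates(requested map[string]bool) map[string]bool {
        effective := map[string]bool{}
        for name, enabled := range requested {
            if !knownGates[name] {
                log.Printf("unrecognized feature gate: %s", name)
                continue
            }
            effective[name] = enabled
        }
        return effective
    }

    func main() {
        gates := setGates(map[string]bool{
            "KMSv1":              true,
            "MachineConfigNodes": true, // OpenShift-only name: warned, skipped
        })
        log.Printf("feature gates: %v", gates)
    }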
Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.631391 4755 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.631397 4755 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.631402 4755 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.631450 4755 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.631488 4755 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.631492 4755 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.631497 4755 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.631501 4755 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.631505 4755 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.631508 4755 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.631535 4755 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.631539 4755 feature_gate.go:330] unrecognized feature gate: Example Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.631543 4755 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.631703 4755 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.631708 4755 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.631712 4755 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.631716 4755 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.631719 4755 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.631723 4755 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.631726 4755 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.631730 4755 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.631733 4755 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.631737 4755 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.631740 4755 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.631745 4755 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.631750 4755 feature_gate.go:330] unrecognized 
feature gate: MultiArchInstallGCP Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.631754 4755 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.631758 4755 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.631766 4755 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.631771 4755 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.631782 4755 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632050 4755 flags.go:64] FLAG: --address="0.0.0.0" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632067 4755 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632078 4755 flags.go:64] FLAG: --anonymous-auth="true" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632084 4755 flags.go:64] FLAG: --application-metrics-count-limit="100" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632091 4755 flags.go:64] FLAG: --authentication-token-webhook="false" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632096 4755 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632102 4755 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632108 4755 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632112 4755 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632117 4755 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632121 4755 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632126 4755 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632130 4755 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632134 4755 flags.go:64] FLAG: --cgroup-root="" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632138 4755 flags.go:64] FLAG: --cgroups-per-qos="true" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632142 4755 flags.go:64] FLAG: --client-ca-file="" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632146 4755 flags.go:64] FLAG: --cloud-config="" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632150 4755 flags.go:64] FLAG: --cloud-provider="" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632154 4755 flags.go:64] FLAG: --cluster-dns="[]" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632160 4755 flags.go:64] FLAG: --cluster-domain="" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632163 4755 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632168 4755 flags.go:64] FLAG: --config-dir="" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632172 4755 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Dec 
10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632176 4755 flags.go:64] FLAG: --container-log-max-files="5" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632181 4755 flags.go:64] FLAG: --container-log-max-size="10Mi" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632185 4755 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632192 4755 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632196 4755 flags.go:64] FLAG: --containerd-namespace="k8s.io" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632201 4755 flags.go:64] FLAG: --contention-profiling="false" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632204 4755 flags.go:64] FLAG: --cpu-cfs-quota="true" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632208 4755 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632213 4755 flags.go:64] FLAG: --cpu-manager-policy="none" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632218 4755 flags.go:64] FLAG: --cpu-manager-policy-options="" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632227 4755 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632238 4755 flags.go:64] FLAG: --enable-controller-attach-detach="true" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632243 4755 flags.go:64] FLAG: --enable-debugging-handlers="true" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632248 4755 flags.go:64] FLAG: --enable-load-reader="false" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632253 4755 flags.go:64] FLAG: --enable-server="true" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632259 4755 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632265 4755 flags.go:64] FLAG: --event-burst="100" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632270 4755 flags.go:64] FLAG: --event-qps="50" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632274 4755 flags.go:64] FLAG: --event-storage-age-limit="default=0" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632278 4755 flags.go:64] FLAG: --event-storage-event-limit="default=0" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632282 4755 flags.go:64] FLAG: --eviction-hard="" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632288 4755 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632293 4755 flags.go:64] FLAG: --eviction-minimum-reclaim="" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632297 4755 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632302 4755 flags.go:64] FLAG: --eviction-soft="" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632306 4755 flags.go:64] FLAG: --eviction-soft-grace-period="" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632311 4755 flags.go:64] FLAG: --exit-on-lock-contention="false" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632315 4755 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632320 4755 flags.go:64] FLAG: --experimental-mounter-path="" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632324 4755 flags.go:64] 
FLAG: --fail-cgroupv1="false" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632328 4755 flags.go:64] FLAG: --fail-swap-on="true" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632332 4755 flags.go:64] FLAG: --feature-gates="" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632337 4755 flags.go:64] FLAG: --file-check-frequency="20s" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632341 4755 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632345 4755 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632350 4755 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632355 4755 flags.go:64] FLAG: --healthz-port="10248" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632359 4755 flags.go:64] FLAG: --help="false" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632364 4755 flags.go:64] FLAG: --hostname-override="" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632368 4755 flags.go:64] FLAG: --housekeeping-interval="10s" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632373 4755 flags.go:64] FLAG: --http-check-frequency="20s" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632377 4755 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632381 4755 flags.go:64] FLAG: --image-credential-provider-config="" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632385 4755 flags.go:64] FLAG: --image-gc-high-threshold="85" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632389 4755 flags.go:64] FLAG: --image-gc-low-threshold="80" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632393 4755 flags.go:64] FLAG: --image-service-endpoint="" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632397 4755 flags.go:64] FLAG: --kernel-memcg-notification="false" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632401 4755 flags.go:64] FLAG: --kube-api-burst="100" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632405 4755 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632409 4755 flags.go:64] FLAG: --kube-api-qps="50" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632413 4755 flags.go:64] FLAG: --kube-reserved="" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632417 4755 flags.go:64] FLAG: --kube-reserved-cgroup="" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632421 4755 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632426 4755 flags.go:64] FLAG: --kubelet-cgroups="" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632429 4755 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632433 4755 flags.go:64] FLAG: --lock-file="" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632437 4755 flags.go:64] FLAG: --log-cadvisor-usage="false" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632441 4755 flags.go:64] FLAG: --log-flush-frequency="5s" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632445 4755 flags.go:64] FLAG: --log-json-info-buffer-size="0" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632451 4755 flags.go:64] FLAG: --log-json-split-stream="false" Dec 10 15:23:23 crc 
kubenswrapper[4755]: I1210 15:23:23.632457 4755 flags.go:64] FLAG: --log-text-info-buffer-size="0" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632461 4755 flags.go:64] FLAG: --log-text-split-stream="false" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632514 4755 flags.go:64] FLAG: --logging-format="text" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632523 4755 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632529 4755 flags.go:64] FLAG: --make-iptables-util-chains="true" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632534 4755 flags.go:64] FLAG: --manifest-url="" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632538 4755 flags.go:64] FLAG: --manifest-url-header="" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632545 4755 flags.go:64] FLAG: --max-housekeeping-interval="15s" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632550 4755 flags.go:64] FLAG: --max-open-files="1000000" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632555 4755 flags.go:64] FLAG: --max-pods="110" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632559 4755 flags.go:64] FLAG: --maximum-dead-containers="-1" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632563 4755 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632567 4755 flags.go:64] FLAG: --memory-manager-policy="None" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632571 4755 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632575 4755 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632580 4755 flags.go:64] FLAG: --node-ip="192.168.126.11" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632584 4755 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632595 4755 flags.go:64] FLAG: --node-status-max-images="50" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632599 4755 flags.go:64] FLAG: --node-status-update-frequency="10s" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632603 4755 flags.go:64] FLAG: --oom-score-adj="-999" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632608 4755 flags.go:64] FLAG: --pod-cidr="" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632611 4755 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632619 4755 flags.go:64] FLAG: --pod-manifest-path="" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632623 4755 flags.go:64] FLAG: --pod-max-pids="-1" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632627 4755 flags.go:64] FLAG: --pods-per-core="0" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632631 4755 flags.go:64] FLAG: --port="10250" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632635 4755 flags.go:64] FLAG: --protect-kernel-defaults="false" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632639 4755 flags.go:64] FLAG: --provider-id="" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632644 4755 flags.go:64] FLAG: --qos-reserved="" Dec 10 15:23:23 crc 
kubenswrapper[4755]: I1210 15:23:23.632648 4755 flags.go:64] FLAG: --read-only-port="10255" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632652 4755 flags.go:64] FLAG: --register-node="true" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632656 4755 flags.go:64] FLAG: --register-schedulable="true" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632660 4755 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632667 4755 flags.go:64] FLAG: --registry-burst="10" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632670 4755 flags.go:64] FLAG: --registry-qps="5" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632674 4755 flags.go:64] FLAG: --reserved-cpus="" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632679 4755 flags.go:64] FLAG: --reserved-memory="" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632685 4755 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632689 4755 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632693 4755 flags.go:64] FLAG: --rotate-certificates="false" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632698 4755 flags.go:64] FLAG: --rotate-server-certificates="false" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632702 4755 flags.go:64] FLAG: --runonce="false" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632706 4755 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632711 4755 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632715 4755 flags.go:64] FLAG: --seccomp-default="false" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632719 4755 flags.go:64] FLAG: --serialize-image-pulls="true" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632722 4755 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632727 4755 flags.go:64] FLAG: --storage-driver-db="cadvisor" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632731 4755 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632736 4755 flags.go:64] FLAG: --storage-driver-password="root" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632739 4755 flags.go:64] FLAG: --storage-driver-secure="false" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632743 4755 flags.go:64] FLAG: --storage-driver-table="stats" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632747 4755 flags.go:64] FLAG: --storage-driver-user="root" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632751 4755 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632755 4755 flags.go:64] FLAG: --sync-frequency="1m0s" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632759 4755 flags.go:64] FLAG: --system-cgroups="" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632763 4755 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632770 4755 flags.go:64] FLAG: --system-reserved-cgroup="" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632774 4755 flags.go:64] FLAG: --tls-cert-file="" Dec 10 15:23:23 
crc kubenswrapper[4755]: I1210 15:23:23.632778 4755 flags.go:64] FLAG: --tls-cipher-suites="[]" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632783 4755 flags.go:64] FLAG: --tls-min-version="" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632787 4755 flags.go:64] FLAG: --tls-private-key-file="" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632791 4755 flags.go:64] FLAG: --topology-manager-policy="none" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632795 4755 flags.go:64] FLAG: --topology-manager-policy-options="" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632799 4755 flags.go:64] FLAG: --topology-manager-scope="container" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632803 4755 flags.go:64] FLAG: --v="2" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632808 4755 flags.go:64] FLAG: --version="false" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632814 4755 flags.go:64] FLAG: --vmodule="" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632819 4755 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.632823 4755 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.632937 4755 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.632942 4755 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.632947 4755 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.632951 4755 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.632954 4755 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.632958 4755 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.632962 4755 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.632965 4755 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.632968 4755 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.632972 4755 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.632977 4755 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
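The FLAG: lines above are the kubelet logging every command-line flag at its parsed value (flags.go:64) before the config file is merged on top, which is why several defaults shown here are superseded later in this same log: the cgroup driver ends up systemd (taken from the CRI runtime rather than --cgroup-driver=cgroupfs), and client certificate rotation ends up on despite --rotate-certificates=false. The merged, effective configuration can also be read back from a running node via the kubelet's configz endpoint through the API server node proxy; a hedged Go sketch of that read-back, assuming client-go and the node name crc from this log:

    package main

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // kubeconfig path taken from the FLAG: dump above
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // GET /api/v1/nodes/crc/proxy/configz returns the merged KubeletConfiguration.
        raw, err := cs.CoreV1().RESTClient().Get().
            Resource("nodes").Name("crc").SubResource("proxy").Suffix("configz").
            DoRaw(context.TODO())
        if err != nil {
            panic(err)
        }
        fmt.Println(string(raw))
    }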
Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.632981 4755 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.632985 4755 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.632989 4755 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.632992 4755 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.632996 4755 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.632999 4755 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.633003 4755 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.633006 4755 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.633010 4755 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.633014 4755 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.633017 4755 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.633021 4755 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.633024 4755 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.633028 4755 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.633031 4755 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.633035 4755 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.633038 4755 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.633041 4755 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.633045 4755 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.633048 4755 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.633052 4755 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.633055 4755 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.633059 4755 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.633063 4755 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.633066 4755 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.633070 4755 feature_gate.go:330] unrecognized 
feature gate: SignatureStores Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.633073 4755 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.633078 4755 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.633081 4755 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.633084 4755 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.633088 4755 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.633091 4755 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.633095 4755 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.633098 4755 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.633102 4755 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.633107 4755 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.633111 4755 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.633115 4755 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.633119 4755 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.633123 4755 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.633129 4755 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.633134 4755 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.633138 4755 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.633142 4755 feature_gate.go:330] unrecognized feature gate: Example Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.633146 4755 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.633150 4755 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.633154 4755 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.633157 4755 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.633160 4755 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.633165 4755 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.633169 4755 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.633174 4755 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.633177 4755 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.633181 4755 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.633184 4755 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.633188 4755 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.633192 4755 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.633195 4755 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.633198 4755 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.633202 4755 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.633331 4755 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.639170 4755 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.639188 4755 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639243 4755 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639249 4755 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639253 4755 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639257 4755 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639261 4755 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639265 4755 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639268 4755 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639272 4755 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639276 4755 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639280 4755 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 10 15:23:23 crc 
kubenswrapper[4755]: W1210 15:23:23.639283 4755 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639287 4755 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639290 4755 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639294 4755 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639297 4755 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639301 4755 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639305 4755 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639308 4755 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639311 4755 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639315 4755 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639318 4755 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639322 4755 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639325 4755 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639329 4755 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639332 4755 feature_gate.go:330] unrecognized feature gate: Example Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639337 4755 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639343 4755 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639347 4755 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639351 4755 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639355 4755 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639359 4755 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639363 4755 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639367 4755 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639370 4755 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639375 4755 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639378 4755 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639382 4755 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639387 4755 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639391 4755 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639394 4755 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639398 4755 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639402 4755 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639405 4755 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639409 4755 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639414 4755 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639418 4755 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639421 4755 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639425 4755 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639429 4755 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639433 4755 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639437 4755 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639440 4755 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639444 4755 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639447 4755 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639451 4755 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639455 4755 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639460 4755 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639482 4755 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639485 4755 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639489 4755 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639493 4755 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639497 4755 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639501 4755 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639504 4755 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639508 4755 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639511 4755 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639515 4755 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639519 4755 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639522 4755 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639525 4755 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639530 4755 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.639536 4755 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false 
ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639648 4755 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639654 4755 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639658 4755 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639662 4755 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639666 4755 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639670 4755 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639674 4755 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639677 4755 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639681 4755 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639686 4755 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639689 4755 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639693 4755 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639696 4755 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639700 4755 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639703 4755 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639707 4755 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639711 4755 feature_gate.go:330] unrecognized feature gate: Example Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639714 4755 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639717 4755 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639721 4755 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639725 4755 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639728 4755 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639731 4755 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639735 4755 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639739 4755 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 
15:23:23.639742 4755 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639746 4755 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639749 4755 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639753 4755 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639756 4755 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639760 4755 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639763 4755 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639766 4755 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639770 4755 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639774 4755 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639777 4755 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639781 4755 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639785 4755 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639789 4755 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639794 4755 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639799 4755 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639803 4755 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639808 4755 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639812 4755 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639816 4755 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639820 4755 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639824 4755 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639828 4755 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639832 4755 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639836 4755 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639840 4755 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639844 4755 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639847 4755 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639851 4755 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639855 4755 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639859 4755 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639863 4755 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639867 4755 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639870 4755 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639873 4755 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639878 4755 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639883 4755 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639886 4755 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639890 4755 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639894 4755 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639897 4755 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639901 4755 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639905 4755 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639910 4755 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639913 4755 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.639918 4755 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.639933 4755 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.640051 4755 server.go:940] "Client rotation is on, will bootstrap in background" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.642690 4755 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.642751 4755 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
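The certificate lines that follow show the client-rotation path: the existing kubeconfig is still valid, the current client pair is loaded from /var/lib/kubelet/pki/kubelet-client-current.pem, and rotation is attempted immediately because the logged rotation deadline (2025-11-24) is already in the past, so the CSR POST to api-int.crc.testing:6443 fails with connection refused simply because the API server is not up yet this early in boot and will be retried. A small Go sketch for inspecting the same validity window by hand; the path mirrors the log, it assumes the certificate is the first PEM block in the file, and the jittered deadline choice is client-go behavior summarized in a comment rather than reimplemented here:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        // same path the kubelet logs when loading its client cert/key pair
        data, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-client-current.pem")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data) // assumes the certificate is the first PEM block
        if block == nil || block.Type != "CERTIFICATE" {
            panic("no certificate block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        fmt.Println("not before:", cert.NotBefore)
        fmt.Println("not after: ", cert.NotAfter)
        // client-go schedules rotation at a jittered point inside this window,
        // which is why the logged deadline (2025-11-24) precedes the logged
        // expiration (2026-02-24).
    }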
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.643228 4755 server.go:997] "Starting client certificate rotation"
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.643247 4755 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.643520 4755 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-24 15:19:31.134949177 +0000 UTC
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.643574 4755 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.647745 4755 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 10 15:23:23 crc kubenswrapper[4755]: E1210 15:23:23.648410 4755 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.18:6443: connect: connection refused" logger="UnhandledError"
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.649355 4755 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.655511 4755 log.go:25] "Validated CRI v1 runtime API"
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.669771 4755 log.go:25] "Validated CRI v1 image API"
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.671082 4755 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.673889 4755 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-12-10-15-19-11-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.673923 4755 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:41 fsType:tmpfs blockSize:0}]
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.694669 4755 manager.go:217] Machine: {Timestamp:2025-12-10 15:23:23.693332745 +0000 UTC m=+0.294216387 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:ebd59de0-c6b0-47c1-bc17-6f665dcf344d BootID:ba232303-88d5-4931-b82e-34d9a0e5c06a Filesystems:[{Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:41 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:9e:a2:c6 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:9e:a2:c6 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:ce:cc:a2 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:a8:04:2f Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:9c:40:84 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:17:7a:8d Speed:-1 Mtu:1496} {Name:eth10 MacAddress:06:32:e5:8a:f3:4b Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:ca:8a:31:a0:47:78 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.694892 4755 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.695013 4755 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.695476 4755 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.695642 4755 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.695677 4755 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.695848 4755 topology_manager.go:138] "Creating topology manager with none policy"
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.695859 4755 container_manager_linux.go:303] "Creating device plugin manager"
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.696070 4755 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.696099 4755 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.696423 4755 state_mem.go:36] "Initialized new in-memory state store"
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.696548 4755 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.697334 4755 kubelet.go:418] "Attempting to sync node with API server"
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.697355 4755 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.697377 4755 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.697390 4755 kubelet.go:324] "Adding apiserver pod source"
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.697404 4755 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.699250 4755 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.699325 4755 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.18:6443: connect: connection refused
Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.699315 4755 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.18:6443: connect: connection refused
Dec 10 15:23:23 crc kubenswrapper[4755]: E1210 15:23:23.699403 4755 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.18:6443: connect: connection refused" logger="UnhandledError"
Dec 10 15:23:23 crc kubenswrapper[4755]: E1210 15:23:23.699429 4755 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.18:6443: connect: connection refused" logger="UnhandledError"
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.699809 4755 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.700569 4755 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.701548 4755 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.701576 4755 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.701587 4755 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.701596 4755 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.701610 4755 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.701618 4755 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.701627 4755 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.701641 4755 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.701651 4755 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.701660 4755 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.701673 4755 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.701681 4755 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.701875 4755 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.702319 4755 server.go:1280] "Started kubelet"
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.702863 4755 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.703113 4755 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.18:6443: connect: connection refused
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.702866 4755 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.704124 4755 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 10 15:23:23 crc systemd[1]: Started Kubernetes Kubelet.
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.704839 4755 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.704865 4755 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.704887 4755 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 12:27:44.35552811 +0000 UTC
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.705493 4755 volume_manager.go:287] "The desired_state_of_world populator starts"
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.705521 4755 volume_manager.go:289] "Starting Kubelet Volume Manager"
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.705727 4755 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Dec 10 15:23:23 crc kubenswrapper[4755]: E1210 15:23:23.705775 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.708163 4755 factory.go:55] Registering systemd factory
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.708271 4755 factory.go:221] Registration of the systemd container factory successfully
Dec 10 15:23:23 crc kubenswrapper[4755]: E1210 15:23:23.708375 4755 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.18:6443: connect: connection refused" interval="200ms"
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.708743 4755 factory.go:153] Registering CRI-O factory
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.708769 4755 factory.go:221] Registration of the crio container factory successfully
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.708856 4755 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.708887 4755 factory.go:103] Registering Raw factory
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.708905 4755 manager.go:1196] Started watching for new ooms in manager
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.709601 4755 manager.go:319] Starting recovery of all containers
Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.711171 4755 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.18:6443: connect: connection refused
Dec 10 15:23:23 crc kubenswrapper[4755]: E1210 15:23:23.711277 4755 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.18:6443: connect: connection refused" logger="UnhandledError"
Dec 10 15:23:23 crc kubenswrapper[4755]: E1210 15:23:23.710723 4755 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.18:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187fe3f11f2e7ee7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:23:23.702288103 +0000 UTC m=+0.303171735,LastTimestamp:2025-12-10 15:23:23.702288103 +0000 UTC m=+0.303171735,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.715085 4755 server.go:460] "Adding debug handlers to kubelet server"
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.716843 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.717014 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.717088 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.717156 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.717220 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.717300 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.717371 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.717443 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.717533 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.717608 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.717685 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.717798 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.717870 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.717950 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.718014 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.718086 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.718157 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.718224 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.718291 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.718355 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.718428 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.718517 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.718608 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.718686 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.718755 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.718827 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.718895 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.718974 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.719038 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.719112 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.719181 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.719258 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.719342 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.719415 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.719501 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.719580 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.719658 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.719736 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.719832 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.719917 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.719998 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.720102 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.720185 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.720268 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.720343 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.720427 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.720546 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.720637 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.720732 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.720813 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.721573 4755 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.721686 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.721775 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.721867 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.721966 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.722055 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.722137 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.722221 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.722307 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.722399 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.722533 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.722620 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.722694 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.722776 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.722866 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.722957 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.723039 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.723106 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.723178 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.723252 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.723323 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.723398 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.723485 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.723574 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.723652 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.723732 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.723804 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.723883 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.723962 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.724045 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.724134 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.724212 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.724282 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.724353 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.724424 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.724572 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.724654 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.724719 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.724779 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.724835 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.724896 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.724956 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.725011 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.725066 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.725123 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.725180 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.725255 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.725328 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.725407 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.725524 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.725600 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.725667 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.725724 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.725779 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.725855 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.725941 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.726005 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.726068 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.726125 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.726181 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.726235 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.726294 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.726355 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.726413 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.726483 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.726543 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.726598 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.726710 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.726777 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.726848 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.726909 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.726964 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.727017 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.727087 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.727165 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.727247 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.727318 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.727725 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.727792 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.727846 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.727900 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.727956 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.728015 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.728078 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.728134 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.728188 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.728242 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.728297 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.728350 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.728409 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.728481 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49"
volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.728576 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.728633 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.728689 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.728751 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.728816 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.728872 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.728925 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.728978 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.729036 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.729096 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.729155 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.729229 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.729304 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.729385 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.729483 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.729571 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.729661 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.729750 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.729844 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.729939 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.730016 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.730092 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" 
volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.730187 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.730269 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.730353 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.730437 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.730550 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.730647 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.730721 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.730793 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.730030 4755 manager.go:324] Recovery completed Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.730860 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.731013 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.731107 4755 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.731188 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.731274 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.731357 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.731437 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.731619 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.731706 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.731797 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.731879 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.731943 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.732017 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.732091 4755 reconstruct.go:130] "Volume is marked as uncertain and 
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.732155 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.732210 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.732443 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.732762 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.732808 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.732830 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.732844 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.733000 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.733072 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.733094 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.733122 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.733136 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.733161 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.733176 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.733191 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.733218 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.733232 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.733254 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.733268 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.733283 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.733303 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.733317 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.733337 4755 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.733349 4755 reconstruct.go:97] "Volume reconstruction finished"
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.733357 4755 reconciler.go:26] "Reconciler: start to sync state"
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.742956 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.744658 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.744726 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.744741 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.745518 4755 cpu_manager.go:225] "Starting CPU manager" policy="none"
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.745608 4755 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.745684 4755 state_mem.go:36] "Initialized new in-memory state store"
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.754622 4755 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.756233 4755 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.756294 4755 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 10 15:23:23 crc kubenswrapper[4755]: I1210 15:23:23.756332 4755 kubelet.go:2335] "Starting kubelet main sync loop"
Dec 10 15:23:23 crc kubenswrapper[4755]: E1210 15:23:23.756390 4755 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 10 15:23:23 crc kubenswrapper[4755]: W1210 15:23:23.757099 4755 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.18:6443: connect: connection refused
Dec 10 15:23:23 crc kubenswrapper[4755]: E1210 15:23:23.757164 4755 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.18:6443: connect: connection refused" logger="UnhandledError"
Dec 10 15:23:23 crc kubenswrapper[4755]: E1210 15:23:23.807208 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 10 15:23:23 crc kubenswrapper[4755]: E1210 15:23:23.857342 4755 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Dec 10 15:23:23 crc kubenswrapper[4755]: E1210 15:23:23.907779 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 10 15:23:23 crc kubenswrapper[4755]: E1210 15:23:23.909441 4755 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.18:6443: connect: connection refused" interval="400ms"
Dec 10 15:23:24 crc kubenswrapper[4755]: E1210 15:23:24.008742 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 10 15:23:24 crc kubenswrapper[4755]: E1210 15:23:24.057865 4755 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Dec 10 15:23:24 crc kubenswrapper[4755]: E1210 15:23:24.109290 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 10 15:23:24 crc kubenswrapper[4755]: E1210 15:23:24.209856 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 10 15:23:24 crc kubenswrapper[4755]: E1210 15:23:24.310209 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 10 15:23:24 crc kubenswrapper[4755]: E1210 15:23:24.310787 4755 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.18:6443: connect: connection refused" interval="800ms"
Dec 10 15:23:24 crc kubenswrapper[4755]: E1210 15:23:24.410928 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 10 15:23:24 crc kubenswrapper[4755]: E1210 15:23:24.458960 4755 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
synchronization" err="container runtime status check may not have completed yet" Dec 10 15:23:24 crc kubenswrapper[4755]: E1210 15:23:24.511626 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 10 15:23:24 crc kubenswrapper[4755]: W1210 15:23:24.607722 4755 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.18:6443: connect: connection refused Dec 10 15:23:24 crc kubenswrapper[4755]: E1210 15:23:24.607844 4755 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.18:6443: connect: connection refused" logger="UnhandledError" Dec 10 15:23:24 crc kubenswrapper[4755]: I1210 15:23:24.609141 4755 policy_none.go:49] "None policy: Start" Dec 10 15:23:24 crc kubenswrapper[4755]: I1210 15:23:24.611155 4755 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 10 15:23:24 crc kubenswrapper[4755]: I1210 15:23:24.611208 4755 state_mem.go:35] "Initializing new in-memory state store" Dec 10 15:23:24 crc kubenswrapper[4755]: E1210 15:23:24.612612 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 10 15:23:24 crc kubenswrapper[4755]: I1210 15:23:24.704361 4755 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.18:6443: connect: connection refused Dec 10 15:23:24 crc kubenswrapper[4755]: I1210 15:23:24.705523 4755 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 09:35:49.992333981 +0000 UTC Dec 10 15:23:24 crc kubenswrapper[4755]: E1210 15:23:24.712959 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 10 15:23:24 crc kubenswrapper[4755]: E1210 15:23:24.813539 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 10 15:23:24 crc kubenswrapper[4755]: W1210 15:23:24.885616 4755 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.18:6443: connect: connection refused Dec 10 15:23:24 crc kubenswrapper[4755]: E1210 15:23:24.885722 4755 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.18:6443: connect: connection refused" logger="UnhandledError" Dec 10 15:23:24 crc kubenswrapper[4755]: E1210 15:23:24.914431 4755 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 10 15:23:24 crc kubenswrapper[4755]: I1210 15:23:24.944551 4755 manager.go:334] "Starting Device Plugin manager" Dec 10 15:23:24 crc kubenswrapper[4755]: I1210 15:23:24.944613 4755 manager.go:513] "Failed to read data from checkpoint" 
checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 10 15:23:24 crc kubenswrapper[4755]: I1210 15:23:24.944628 4755 server.go:79] "Starting device plugin registration server" Dec 10 15:23:24 crc kubenswrapper[4755]: I1210 15:23:24.945129 4755 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 10 15:23:24 crc kubenswrapper[4755]: I1210 15:23:24.945150 4755 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 10 15:23:24 crc kubenswrapper[4755]: I1210 15:23:24.945304 4755 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Dec 10 15:23:24 crc kubenswrapper[4755]: I1210 15:23:24.945415 4755 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Dec 10 15:23:24 crc kubenswrapper[4755]: I1210 15:23:24.945437 4755 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 10 15:23:24 crc kubenswrapper[4755]: E1210 15:23:24.950854 4755 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.045858 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.047537 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.047598 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.047613 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.047661 4755 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 10 15:23:25 crc kubenswrapper[4755]: E1210 15:23:25.048213 4755 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.18:6443: connect: connection refused" node="crc" Dec 10 15:23:25 crc kubenswrapper[4755]: W1210 15:23:25.103976 4755 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.18:6443: connect: connection refused Dec 10 15:23:25 crc kubenswrapper[4755]: E1210 15:23:25.104161 4755 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.18:6443: connect: connection refused" logger="UnhandledError" Dec 10 15:23:25 crc kubenswrapper[4755]: E1210 15:23:25.112130 4755 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.18:6443: connect: connection refused" interval="1.6s" Dec 10 15:23:25 crc kubenswrapper[4755]: E1210 15:23:25.173857 4755 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.18:6443: connect: 
connection refused" event="&Event{ObjectMeta:{crc.187fe3f11f2e7ee7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:23:23.702288103 +0000 UTC m=+0.303171735,LastTimestamp:2025-12-10 15:23:23.702288103 +0000 UTC m=+0.303171735,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:23:25 crc kubenswrapper[4755]: W1210 15:23:25.234229 4755 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.18:6443: connect: connection refused Dec 10 15:23:25 crc kubenswrapper[4755]: E1210 15:23:25.234396 4755 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.18:6443: connect: connection refused" logger="UnhandledError" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.248812 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.250075 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.250110 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.250120 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.250144 4755 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 10 15:23:25 crc kubenswrapper[4755]: E1210 15:23:25.250488 4755 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.18:6443: connect: connection refused" node="crc" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.259559 4755 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.259636 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.260646 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.260686 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.260697 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.260809 4755 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.261067 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.261103 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.261420 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.261441 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.261450 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.261583 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.261751 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.261801 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.261818 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.261845 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.261859 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.262374 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.262402 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.262413 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.262541 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.262675 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.262714 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.263254 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.263264 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.263281 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.263330 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.263301 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.263371 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.263487 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.263545 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.263571 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.263911 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.263950 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.263962 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.264098 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.264116 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.264127 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.264252 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.264294 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.264620 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.264663 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.264681 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.265272 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.265306 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.265321 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.350056 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.350111 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.350141 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.350161 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.350182 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.350202 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.350221 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.350239 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.350258 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.350277 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.350296 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.350316 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.350336 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.350358 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.350380 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 
15:23:25.451703 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.451764 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.451786 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.451804 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.451823 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.451840 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.451856 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.451871 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.451885 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.451900 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.451914 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.451902 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.451969 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.451929 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.451997 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.452012 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.452028 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.452058 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.452062 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.452085 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: 
\"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.452096 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.452084 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.452115 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.452117 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.452148 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.452153 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.452173 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.452059 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.452190 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.451953 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.589758 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.598661 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 10 15:23:25 crc kubenswrapper[4755]: W1210 15:23:25.620060 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-b235005389070c10c0c6815270a160b5a5d7e33400a08c041ef6a19516390aff WatchSource:0}: Error finding container b235005389070c10c0c6815270a160b5a5d7e33400a08c041ef6a19516390aff: Status 404 returned error can't find the container with id b235005389070c10c0c6815270a160b5a5d7e33400a08c041ef6a19516390aff Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.621513 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.641225 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.648602 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.650977 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.651999 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.652046 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.652062 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.652089 4755 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 10 15:23:25 crc kubenswrapper[4755]: E1210 15:23:25.653914 4755 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.18:6443: connect: connection refused" node="crc" Dec 10 15:23:25 crc kubenswrapper[4755]: W1210 15:23:25.659366 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-e9d2f057d595e3c4ab3dc74485555e506ba6618fd97dd9e0c3930b6defc25813 WatchSource:0}: Error finding container e9d2f057d595e3c4ab3dc74485555e506ba6618fd97dd9e0c3930b6defc25813: Status 404 returned error can't find the container with id e9d2f057d595e3c4ab3dc74485555e506ba6618fd97dd9e0c3930b6defc25813 Dec 10 15:23:25 crc kubenswrapper[4755]: W1210 15:23:25.668859 4755 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-78aabf59a793518c75307fb11bade273438ec8f78e0c9bd00325e6d484508ce3 WatchSource:0}: Error finding container 78aabf59a793518c75307fb11bade273438ec8f78e0c9bd00325e6d484508ce3: Status 404 returned error can't find the container with id 78aabf59a793518c75307fb11bade273438ec8f78e0c9bd00325e6d484508ce3 Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.704603 4755 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.18:6443: connect: connection refused Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.705872 4755 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 20:44:46.258871152 +0000 UTC Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.705953 4755 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 917h21m20.552921516s for next certificate rotation Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.763602 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"33a68f52460527fab7cde57c2cc14199f1c61d6950d5b16e0d5b78bbd1b3571d"} Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.766019 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"b235005389070c10c0c6815270a160b5a5d7e33400a08c041ef6a19516390aff"} Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.766997 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"78aabf59a793518c75307fb11bade273438ec8f78e0c9bd00325e6d484508ce3"} Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.768173 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e9d2f057d595e3c4ab3dc74485555e506ba6618fd97dd9e0c3930b6defc25813"} Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.769179 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0a40ba6ceabf9962186634a50c95aaa47e7338b39de2761bf343ce6764a703ce"} Dec 10 15:23:25 crc kubenswrapper[4755]: I1210 15:23:25.827520 4755 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Dec 10 15:23:25 crc kubenswrapper[4755]: E1210 15:23:25.828748 4755 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.18:6443: connect: connection refused" logger="UnhandledError" Dec 10 15:23:26 crc kubenswrapper[4755]: I1210 15:23:26.454208 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:23:26 crc 
kubenswrapper[4755]: I1210 15:23:26.456159 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:26 crc kubenswrapper[4755]: I1210 15:23:26.456189 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:26 crc kubenswrapper[4755]: I1210 15:23:26.456197 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:26 crc kubenswrapper[4755]: I1210 15:23:26.456249 4755 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 10 15:23:26 crc kubenswrapper[4755]: E1210 15:23:26.456695 4755 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.18:6443: connect: connection refused" node="crc" Dec 10 15:23:26 crc kubenswrapper[4755]: I1210 15:23:26.704092 4755 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.18:6443: connect: connection refused Dec 10 15:23:26 crc kubenswrapper[4755]: E1210 15:23:26.712997 4755 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.18:6443: connect: connection refused" interval="3.2s" Dec 10 15:23:26 crc kubenswrapper[4755]: I1210 15:23:26.778993 4755 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="55da309288bc2c7ff1f22da0569d18155796700c200fe3ed508f6f8179756865" exitCode=0 Dec 10 15:23:26 crc kubenswrapper[4755]: I1210 15:23:26.779045 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"55da309288bc2c7ff1f22da0569d18155796700c200fe3ed508f6f8179756865"} Dec 10 15:23:26 crc kubenswrapper[4755]: I1210 15:23:26.779156 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:23:26 crc kubenswrapper[4755]: I1210 15:23:26.780238 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:26 crc kubenswrapper[4755]: I1210 15:23:26.780261 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:26 crc kubenswrapper[4755]: I1210 15:23:26.780270 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:26 crc kubenswrapper[4755]: I1210 15:23:26.781133 4755 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="34c0f838adfa6e512d9529c4999cc194d978276c8736abccd0ef75d5b7e9b6e8" exitCode=0 Dec 10 15:23:26 crc kubenswrapper[4755]: I1210 15:23:26.781182 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"34c0f838adfa6e512d9529c4999cc194d978276c8736abccd0ef75d5b7e9b6e8"} Dec 10 15:23:26 crc kubenswrapper[4755]: I1210 15:23:26.782620 4755 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" 
containerID="54e0e014ecb31c499da20621a8b9e2ad6fdbdc2170a93636c828082b5b63b683" exitCode=0 Dec 10 15:23:26 crc kubenswrapper[4755]: I1210 15:23:26.782684 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"54e0e014ecb31c499da20621a8b9e2ad6fdbdc2170a93636c828082b5b63b683"} Dec 10 15:23:26 crc kubenswrapper[4755]: I1210 15:23:26.782714 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:23:26 crc kubenswrapper[4755]: I1210 15:23:26.783540 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:26 crc kubenswrapper[4755]: I1210 15:23:26.783587 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:26 crc kubenswrapper[4755]: I1210 15:23:26.783599 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:26 crc kubenswrapper[4755]: I1210 15:23:26.785413 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:23:26 crc kubenswrapper[4755]: I1210 15:23:26.785945 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"104d730ea666ffd2d639ba7fc8d1ac5b444586788a62656fd260cb249f82b1ae"} Dec 10 15:23:26 crc kubenswrapper[4755]: I1210 15:23:26.788441 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0"} Dec 10 15:23:26 crc kubenswrapper[4755]: I1210 15:23:26.787673 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:23:26 crc kubenswrapper[4755]: I1210 15:23:26.787570 4755 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0" exitCode=0 Dec 10 15:23:26 crc kubenswrapper[4755]: I1210 15:23:26.789151 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:26 crc kubenswrapper[4755]: I1210 15:23:26.789195 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:26 crc kubenswrapper[4755]: I1210 15:23:26.789207 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:26 crc kubenswrapper[4755]: I1210 15:23:26.789553 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:26 crc kubenswrapper[4755]: I1210 15:23:26.789578 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:26 crc kubenswrapper[4755]: I1210 15:23:26.789588 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:26 crc kubenswrapper[4755]: I1210 15:23:26.793821 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:23:26 crc 
kubenswrapper[4755]: I1210 15:23:26.795820 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:26 crc kubenswrapper[4755]: I1210 15:23:26.795854 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:26 crc kubenswrapper[4755]: I1210 15:23:26.795867 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:26 crc kubenswrapper[4755]: W1210 15:23:26.959532 4755 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.18:6443: connect: connection refused Dec 10 15:23:26 crc kubenswrapper[4755]: E1210 15:23:26.959677 4755 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.18:6443: connect: connection refused" logger="UnhandledError" Dec 10 15:23:27 crc kubenswrapper[4755]: W1210 15:23:27.215654 4755 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.18:6443: connect: connection refused Dec 10 15:23:27 crc kubenswrapper[4755]: E1210 15:23:27.215793 4755 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.18:6443: connect: connection refused" logger="UnhandledError" Dec 10 15:23:27 crc kubenswrapper[4755]: I1210 15:23:27.793966 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"08895c3ab20248c4bd70ccfae6442bdf8e76a602e605b043f0f00166624ada17"} Dec 10 15:23:27 crc kubenswrapper[4755]: I1210 15:23:27.794016 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c5b2467645116856afa1e1e37501c7b453b03da193a96d1075886b85839a86ac"} Dec 10 15:23:27 crc kubenswrapper[4755]: I1210 15:23:27.794028 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ef3871ddc16d2fc1b89a6f120390cf066424bd9ed4cca98f8dea4430c40c6c07"} Dec 10 15:23:27 crc kubenswrapper[4755]: I1210 15:23:27.795695 4755 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="219c531368bb5379f3726d94ab0afc0f33eedbf91b0a4dbbe0757c6e232f79ff" exitCode=0 Dec 10 15:23:27 crc kubenswrapper[4755]: I1210 15:23:27.795750 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"219c531368bb5379f3726d94ab0afc0f33eedbf91b0a4dbbe0757c6e232f79ff"} Dec 10 15:23:27 crc kubenswrapper[4755]: I1210 15:23:27.795806 4755 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:23:27 crc kubenswrapper[4755]: I1210 15:23:27.797245 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"7446bf39567cbd3f5a0a9a1252748e01144968fd0a67b39d8c32b326a13dec38"} Dec 10 15:23:27 crc kubenswrapper[4755]: I1210 15:23:27.797269 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:23:27 crc kubenswrapper[4755]: I1210 15:23:27.797273 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:27 crc kubenswrapper[4755]: I1210 15:23:27.797448 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:27 crc kubenswrapper[4755]: I1210 15:23:27.797497 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:27 crc kubenswrapper[4755]: I1210 15:23:27.797970 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:27 crc kubenswrapper[4755]: I1210 15:23:27.798000 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:27 crc kubenswrapper[4755]: I1210 15:23:27.798011 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:27 crc kubenswrapper[4755]: I1210 15:23:27.799928 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"865537b06aba1b617a1a16ef38c6b5be072501d1bbbd69e14eaaf2ce76c6b1da"} Dec 10 15:23:27 crc kubenswrapper[4755]: I1210 15:23:27.799954 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"728ff8c02e5b0bea5514375b60c7a66f025e3c2a7163f0753c65d914b7873531"} Dec 10 15:23:27 crc kubenswrapper[4755]: I1210 15:23:27.799969 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e68926ec899dad3c070810697fb078955e202f270724638719a7ea21d2debcb2"} Dec 10 15:23:27 crc kubenswrapper[4755]: I1210 15:23:27.800031 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:23:27 crc kubenswrapper[4755]: I1210 15:23:27.800720 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:27 crc kubenswrapper[4755]: I1210 15:23:27.800756 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:27 crc kubenswrapper[4755]: I1210 15:23:27.800767 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:27 crc kubenswrapper[4755]: I1210 15:23:27.802530 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f8bf1de0d8bdc0a20bc42ba5097d849ae0f507e5fbc18fb17b4ff3650e46ff0a"} Dec 10 15:23:27 crc kubenswrapper[4755]: I1210 15:23:27.802587 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"09dbec85547c7170bb9551e5657876814d48528e3047daf3547711a563d2b6ce"} Dec 10 15:23:27 crc kubenswrapper[4755]: I1210 15:23:27.802603 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"70d014e7227746c46b30f8f5a1f307a422d2fa0d4b98d98bfe5ba6217223489e"} Dec 10 15:23:27 crc kubenswrapper[4755]: I1210 15:23:27.802611 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:23:27 crc kubenswrapper[4755]: I1210 15:23:27.803456 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:27 crc kubenswrapper[4755]: I1210 15:23:27.803520 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:27 crc kubenswrapper[4755]: I1210 15:23:27.803534 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:28 crc kubenswrapper[4755]: I1210 15:23:28.057177 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:23:28 crc kubenswrapper[4755]: I1210 15:23:28.058422 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:28 crc kubenswrapper[4755]: I1210 15:23:28.058449 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:28 crc kubenswrapper[4755]: I1210 15:23:28.058458 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:28 crc kubenswrapper[4755]: I1210 15:23:28.058500 4755 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 10 15:23:28 crc kubenswrapper[4755]: I1210 15:23:28.808768 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"16e58653dca08c9b33671279553975bca9507ffeeefea161119cec624813f167"} Dec 10 15:23:28 crc kubenswrapper[4755]: I1210 15:23:28.808839 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"fea18ef6c319336321d97c2a55449903b837b5a47da785da3ff5308244751943"} Dec 10 15:23:28 crc kubenswrapper[4755]: I1210 15:23:28.808993 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:23:28 crc kubenswrapper[4755]: I1210 15:23:28.810164 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:28 crc kubenswrapper[4755]: I1210 15:23:28.810213 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:28 crc kubenswrapper[4755]: I1210 15:23:28.810234 4755 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:28 crc kubenswrapper[4755]: I1210 15:23:28.814877 4755 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="f9da8f3d4f85ec83c16b71ecd983ff4f48049eb8e73461a2b46c4f66d71ed06e" exitCode=0 Dec 10 15:23:28 crc kubenswrapper[4755]: I1210 15:23:28.814982 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"f9da8f3d4f85ec83c16b71ecd983ff4f48049eb8e73461a2b46c4f66d71ed06e"} Dec 10 15:23:28 crc kubenswrapper[4755]: I1210 15:23:28.815014 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:23:28 crc kubenswrapper[4755]: I1210 15:23:28.815020 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:23:28 crc kubenswrapper[4755]: I1210 15:23:28.815060 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:23:28 crc kubenswrapper[4755]: I1210 15:23:28.815115 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:23:28 crc kubenswrapper[4755]: I1210 15:23:28.815284 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 10 15:23:28 crc kubenswrapper[4755]: I1210 15:23:28.816589 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:28 crc kubenswrapper[4755]: I1210 15:23:28.816630 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:28 crc kubenswrapper[4755]: I1210 15:23:28.816590 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:28 crc kubenswrapper[4755]: I1210 15:23:28.816670 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:28 crc kubenswrapper[4755]: I1210 15:23:28.816684 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:28 crc kubenswrapper[4755]: I1210 15:23:28.816644 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:28 crc kubenswrapper[4755]: I1210 15:23:28.816717 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:28 crc kubenswrapper[4755]: I1210 15:23:28.816773 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:28 crc kubenswrapper[4755]: I1210 15:23:28.816787 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:28 crc kubenswrapper[4755]: I1210 15:23:28.817139 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:28 crc kubenswrapper[4755]: I1210 15:23:28.817166 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:28 crc kubenswrapper[4755]: I1210 15:23:28.817179 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:29 
crc kubenswrapper[4755]: I1210 15:23:29.276519 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 15:23:29 crc kubenswrapper[4755]: I1210 15:23:29.820429 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"cfbb8e350ad18b78a6bcf6cfa4eb8f2fad90f970dba08a8b1b2026af6f255e83"} Dec 10 15:23:29 crc kubenswrapper[4755]: I1210 15:23:29.820502 4755 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 10 15:23:29 crc kubenswrapper[4755]: I1210 15:23:29.820510 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"172ab3fce08c8ba4095ff4095c89364a778644752bd7bb6c178d6e3ebcface69"} Dec 10 15:23:29 crc kubenswrapper[4755]: I1210 15:23:29.820531 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d828f3f3b90a2dcb1c1908e6a686368af5b0d715b3251e4b8fcf3c8818ec75a6"} Dec 10 15:23:29 crc kubenswrapper[4755]: I1210 15:23:29.820541 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:23:29 crc kubenswrapper[4755]: I1210 15:23:29.820548 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a823333ed5cb5de3988d25e50e4b7a0f9071c76fb39c22760a4f2acd5eb455d5"} Dec 10 15:23:29 crc kubenswrapper[4755]: I1210 15:23:29.820563 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8c4b2bf0c88a16fd6b4b45a20730a92292895dc9f29ce756d347c302b25a8612"} Dec 10 15:23:29 crc kubenswrapper[4755]: I1210 15:23:29.820603 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:23:29 crc kubenswrapper[4755]: I1210 15:23:29.820613 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:23:29 crc kubenswrapper[4755]: I1210 15:23:29.820541 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:23:29 crc kubenswrapper[4755]: I1210 15:23:29.821983 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:29 crc kubenswrapper[4755]: I1210 15:23:29.822004 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:29 crc kubenswrapper[4755]: I1210 15:23:29.822013 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:29 crc kubenswrapper[4755]: I1210 15:23:29.822059 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:29 crc kubenswrapper[4755]: I1210 15:23:29.822083 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:29 crc kubenswrapper[4755]: I1210 15:23:29.822097 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:29 crc kubenswrapper[4755]: I1210 
15:23:29.822010 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:29 crc kubenswrapper[4755]: I1210 15:23:29.822139 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:29 crc kubenswrapper[4755]: I1210 15:23:29.822147 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:29 crc kubenswrapper[4755]: I1210 15:23:29.822010 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:29 crc kubenswrapper[4755]: I1210 15:23:29.822594 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:29 crc kubenswrapper[4755]: I1210 15:23:29.822608 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:30 crc kubenswrapper[4755]: I1210 15:23:30.006638 4755 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Dec 10 15:23:30 crc kubenswrapper[4755]: I1210 15:23:30.823413 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:23:30 crc kubenswrapper[4755]: I1210 15:23:30.824399 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:30 crc kubenswrapper[4755]: I1210 15:23:30.824432 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:30 crc kubenswrapper[4755]: I1210 15:23:30.824443 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:31 crc kubenswrapper[4755]: I1210 15:23:31.173997 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 15:23:31 crc kubenswrapper[4755]: I1210 15:23:31.174160 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:23:31 crc kubenswrapper[4755]: I1210 15:23:31.175533 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:31 crc kubenswrapper[4755]: I1210 15:23:31.175578 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:31 crc kubenswrapper[4755]: I1210 15:23:31.175589 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:31 crc kubenswrapper[4755]: I1210 15:23:31.973213 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Dec 10 15:23:31 crc kubenswrapper[4755]: I1210 15:23:31.973483 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:23:31 crc kubenswrapper[4755]: I1210 15:23:31.974541 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:31 crc kubenswrapper[4755]: I1210 15:23:31.974576 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:31 crc kubenswrapper[4755]: I1210 15:23:31.974586 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" 
Dec 10 15:23:32 crc kubenswrapper[4755]: I1210 15:23:32.156202 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 10 15:23:32 crc kubenswrapper[4755]: I1210 15:23:32.156610 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 10 15:23:32 crc kubenswrapper[4755]: I1210 15:23:32.158080 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 15:23:32 crc kubenswrapper[4755]: I1210 15:23:32.158123 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 15:23:32 crc kubenswrapper[4755]: I1210 15:23:32.158136 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 15:23:32 crc kubenswrapper[4755]: I1210 15:23:32.277334 4755 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 10 15:23:32 crc kubenswrapper[4755]: I1210 15:23:32.277424 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 10 15:23:32 crc kubenswrapper[4755]: I1210 15:23:32.836845 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 10 15:23:32 crc kubenswrapper[4755]: I1210 15:23:32.837093 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 10 15:23:32 crc kubenswrapper[4755]: I1210 15:23:32.838774 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 15:23:32 crc kubenswrapper[4755]: I1210 15:23:32.838810 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 15:23:32 crc kubenswrapper[4755]: I1210 15:23:32.838819 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 15:23:33 crc kubenswrapper[4755]: I1210 15:23:33.218182 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 10 15:23:33 crc kubenswrapper[4755]: I1210 15:23:33.831028 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 10 15:23:33 crc kubenswrapper[4755]: I1210 15:23:33.832167 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 15:23:33 crc kubenswrapper[4755]: I1210 15:23:33.832200 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 15:23:33 crc kubenswrapper[4755]: I1210 15:23:33.832209 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 15:23:34 crc kubenswrapper[4755]: I1210 15:23:34.382876 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 10 15:23:34 crc kubenswrapper[4755]: I1210 15:23:34.383024 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 10 15:23:34 crc kubenswrapper[4755]: I1210 15:23:34.384103 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 15:23:34 crc kubenswrapper[4755]: I1210 15:23:34.384135 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 15:23:34 crc kubenswrapper[4755]: I1210 15:23:34.384145 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 15:23:34 crc kubenswrapper[4755]: I1210 15:23:34.627595 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 10 15:23:34 crc kubenswrapper[4755]: I1210 15:23:34.632227 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 10 15:23:34 crc kubenswrapper[4755]: I1210 15:23:34.833759 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 10 15:23:34 crc kubenswrapper[4755]: I1210 15:23:34.834591 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 15:23:34 crc kubenswrapper[4755]: I1210 15:23:34.834686 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 15:23:34 crc kubenswrapper[4755]: I1210 15:23:34.834741 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 15:23:34 crc kubenswrapper[4755]: E1210 15:23:34.950993 4755 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 10 15:23:35 crc kubenswrapper[4755]: I1210 15:23:35.762933 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Dec 10 15:23:35 crc kubenswrapper[4755]: I1210 15:23:35.763609 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 10 15:23:35 crc kubenswrapper[4755]: I1210 15:23:35.764979 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 15:23:35 crc kubenswrapper[4755]: I1210 15:23:35.765093 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 15:23:35 crc kubenswrapper[4755]: I1210 15:23:35.765169 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 15:23:35 crc kubenswrapper[4755]: I1210 15:23:35.836060 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 10 15:23:35 crc kubenswrapper[4755]: I1210 15:23:35.837137 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 15:23:35 crc kubenswrapper[4755]: I1210 15:23:35.837175 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 15:23:35 crc kubenswrapper[4755]: I1210 15:23:35.837184 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 15:23:37 crc kubenswrapper[4755]: I1210 15:23:37.715720 4755 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Dec 10 15:23:37 crc kubenswrapper[4755]: I1210 15:23:37.716075 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Dec 10 15:23:37 crc kubenswrapper[4755]: I1210 15:23:37.847275 4755 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Dec 10 15:23:37 crc kubenswrapper[4755]: [+]log ok
Dec 10 15:23:37 crc kubenswrapper[4755]: [+]etcd ok
Dec 10 15:23:37 crc kubenswrapper[4755]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Dec 10 15:23:37 crc kubenswrapper[4755]: [+]poststarthook/openshift.io-api-request-count-filter ok
Dec 10 15:23:37 crc kubenswrapper[4755]: [+]poststarthook/openshift.io-startkubeinformers ok
Dec 10 15:23:37 crc kubenswrapper[4755]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Dec 10 15:23:37 crc kubenswrapper[4755]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Dec 10 15:23:37 crc kubenswrapper[4755]: [+]poststarthook/start-apiserver-admission-initializer ok
Dec 10 15:23:37 crc kubenswrapper[4755]: [+]poststarthook/generic-apiserver-start-informers ok
Dec 10 15:23:37 crc kubenswrapper[4755]: [+]poststarthook/priority-and-fairness-config-consumer ok
Dec 10 15:23:37 crc kubenswrapper[4755]: [+]poststarthook/priority-and-fairness-filter ok
Dec 10 15:23:37 crc kubenswrapper[4755]: [+]poststarthook/storage-object-count-tracker-hook ok
Dec 10 15:23:37 crc kubenswrapper[4755]: [+]poststarthook/start-apiextensions-informers ok
Dec 10 15:23:37 crc kubenswrapper[4755]: [-]poststarthook/start-apiextensions-controllers failed: reason withheld
Dec 10 15:23:37 crc kubenswrapper[4755]: [-]poststarthook/crd-informer-synced failed: reason withheld
Dec 10 15:23:37 crc kubenswrapper[4755]: [+]poststarthook/start-system-namespaces-controller ok
Dec 10 15:23:37 crc kubenswrapper[4755]: [+]poststarthook/start-cluster-authentication-info-controller ok
Dec 10 15:23:37 crc kubenswrapper[4755]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Dec 10 15:23:37 crc kubenswrapper[4755]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Dec 10 15:23:37 crc kubenswrapper[4755]: [+]poststarthook/start-legacy-token-tracking-controller ok
Dec 10 15:23:37 crc kubenswrapper[4755]: [+]poststarthook/start-service-ip-repair-controllers ok
Dec 10 15:23:37 crc kubenswrapper[4755]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
Dec 10 15:23:37 crc kubenswrapper[4755]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
Dec 10 15:23:37 crc kubenswrapper[4755]: [+]poststarthook/priority-and-fairness-config-producer ok
Dec 10 15:23:37 crc kubenswrapper[4755]: [+]poststarthook/bootstrap-controller ok
Dec 10 15:23:37 crc kubenswrapper[4755]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Dec 10 15:23:37 crc kubenswrapper[4755]: [+]poststarthook/start-kube-aggregator-informers ok
Dec 10 15:23:37 crc kubenswrapper[4755]: [+]poststarthook/apiservice-status-local-available-controller ok
Dec 10 15:23:37 crc kubenswrapper[4755]: [+]poststarthook/apiservice-status-remote-available-controller ok
Dec 10 15:23:37 crc kubenswrapper[4755]: [+]poststarthook/apiservice-registration-controller ok
Dec 10 15:23:37 crc kubenswrapper[4755]: [+]poststarthook/apiservice-wait-for-first-sync ok
Dec 10 15:23:37 crc kubenswrapper[4755]: [+]poststarthook/apiservice-discovery-controller ok
Dec 10 15:23:37 crc kubenswrapper[4755]: [+]poststarthook/kube-apiserver-autoregistration ok
Dec 10 15:23:37 crc kubenswrapper[4755]: [+]autoregister-completion ok
Dec 10 15:23:37 crc kubenswrapper[4755]: [+]poststarthook/apiservice-openapi-controller ok
Dec 10 15:23:37 crc kubenswrapper[4755]: [+]poststarthook/apiservice-openapiv3-controller ok
Dec 10 15:23:37 crc kubenswrapper[4755]: livez check failed
Dec 10 15:23:37 crc kubenswrapper[4755]: I1210 15:23:37.847367 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 10 15:23:42 crc kubenswrapper[4755]: I1210 15:23:42.157238 4755 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Dec 10 15:23:42 crc kubenswrapper[4755]: I1210 15:23:42.157318 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Dec 10 15:23:42 crc kubenswrapper[4755]: I1210 15:23:42.277940 4755 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 10 15:23:42 crc kubenswrapper[4755]: I1210 15:23:42.278024 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 10 15:23:42 crc kubenswrapper[4755]: E1210 15:23:42.712099 4755 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s"
Dec 10 15:23:42 crc kubenswrapper[4755]: I1210 15:23:42.715787 4755 trace.go:236] Trace[1776671632]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (10-Dec-2025 15:23:27.946) (total time: 14768ms):
Dec 10 15:23:42 crc kubenswrapper[4755]: Trace[1776671632]: ---"Objects listed" error: 14768ms (15:23:42.715)
Dec 10 15:23:42 crc kubenswrapper[4755]: Trace[1776671632]: [14.768825558s] [14.768825558s] END
Dec 10 15:23:42 crc kubenswrapper[4755]: I1210 15:23:42.715826 4755 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Dec 10 15:23:42 crc kubenswrapper[4755]: I1210 15:23:42.715884 4755 trace.go:236] Trace[477530308]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (10-Dec-2025 15:23:28.069) (total time: 14645ms):
Dec 10 15:23:42 crc kubenswrapper[4755]: Trace[477530308]: ---"Objects listed" error: 14645ms (15:23:42.715)
Dec 10 15:23:42 crc kubenswrapper[4755]: Trace[477530308]: [14.645889258s] [14.645889258s] END
Dec 10 15:23:42 crc kubenswrapper[4755]: I1210 15:23:42.715912 4755 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Dec 10 15:23:42 crc kubenswrapper[4755]: I1210 15:23:42.716178 4755 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Dec 10 15:23:42 crc kubenswrapper[4755]: I1210 15:23:42.716211 4755 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Dec 10 15:23:42 crc kubenswrapper[4755]: E1210 15:23:42.718578 4755 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Dec 10 15:23:42 crc kubenswrapper[4755]: I1210 15:23:42.718613 4755 trace.go:236] Trace[1619348218]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (10-Dec-2025 15:23:30.347) (total time: 12371ms):
Dec 10 15:23:42 crc kubenswrapper[4755]: Trace[1619348218]: ---"Objects listed" error: 12370ms (15:23:42.718)
Dec 10 15:23:42 crc kubenswrapper[4755]: Trace[1619348218]: [12.371113694s] [12.371113694s] END
Dec 10 15:23:42 crc kubenswrapper[4755]: I1210 15:23:42.719296 4755 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Dec 10 15:23:42 crc kubenswrapper[4755]: I1210 15:23:42.724546 4755 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Dec 10 15:23:42 crc kubenswrapper[4755]: I1210 15:23:42.844261 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 10 15:23:42 crc kubenswrapper[4755]: I1210 15:23:42.847700 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.353146 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.414996 4755 csr.go:261] certificate signing request csr-b66n6 is approved, waiting to be issued
Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.455790 4755 csr.go:257] certificate signing request csr-b66n6 is issued
Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.644909 4755 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Dec 10 15:23:43 crc kubenswrapper[4755]: W1210 15:23:43.645153 4755 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160:
Unexpected watch close - watch lasted less than a second and no items received Dec 10 15:23:43 crc kubenswrapper[4755]: W1210 15:23:43.645172 4755 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Dec 10 15:23:43 crc kubenswrapper[4755]: W1210 15:23:43.645202 4755 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Dec 10 15:23:43 crc kubenswrapper[4755]: W1210 15:23:43.645269 4755 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.709821 4755 apiserver.go:52] "Watching apiserver" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.714891 4755 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.715226 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-kube-apiserver/kube-apiserver-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf"] Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.715568 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.715621 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 15:23:43 crc kubenswrapper[4755]: E1210 15:23:43.715799 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.715837 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.715859 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 15:23:43 crc kubenswrapper[4755]: E1210 15:23:43.715936 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.716081 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.716091 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 15:23:43 crc kubenswrapper[4755]: E1210 15:23:43.716153 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.717673 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.719219 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.722972 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.723113 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.723173 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.723261 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.723374 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.723511 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.723565 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.748115 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.759823 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.774991 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e6db97-7e22-4fec-924e-20e90f463887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef3871ddc16d2fc1b89a6f120390cf066424bd9ed4cca98f8dea4430c40c6c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08895c3ab20248c4bd70ccfae6442bdf8e76a602e605b043f0f00166624ada17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b2467645116856afa1e1e37501c7b453b03da193a96d1075886b85839a86ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16e58653dca08c9b33671279553975bca9507ffeeefea161119cec624813f167\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea18ef6c319336321d97c2a55449903b837b5a47da785da3ff5308244751943\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",
\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.787053 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.797586 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.806727 4755 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.809978 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.820411 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.823092 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.823134 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.823157 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.823183 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.823204 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.823226 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.823259 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.823282 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: 
\"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.823304 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.823326 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.823349 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.823369 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.823388 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.823406 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.823425 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.823445 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.823482 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.823511 4755 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.823485 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.823536 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.823534 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.823564 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.823590 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.823610 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.823661 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.823688 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.823708 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.823729 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.823752 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.823756 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.823774 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.823877 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.823902 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.823921 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.823932 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.823945 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.823969 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.824014 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.824035 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.824056 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.824080 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.824101 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.824124 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.824209 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.824233 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: 
\"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.824256 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.824278 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.824299 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.824323 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.824342 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.824361 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.824382 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.824403 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.824425 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.824447 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.824483 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.824506 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.824528 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.824577 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.824597 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.824617 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.824636 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.824654 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.824672 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.824692 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.824713 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.824735 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.824758 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.824781 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.824802 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.824825 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.824845 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.824868 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.824889 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.824912 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.824936 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.824959 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.824980 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.825000 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.825023 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.825043 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.825064 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.825085 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.825105 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.825125 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.825145 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.825165 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.825188 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.825214 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.825236 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.825258 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.825285 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.825308 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.825332 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.825354 4755 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.825378 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.825403 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.825424 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.825445 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.825484 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.825511 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.825534 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.825557 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.825578 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.825603 4755 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.825626 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.825647 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.825668 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.825690 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.825714 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.825737 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.825760 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.825784 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.825806 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.825827 4755 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.825849 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.825872 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.825897 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.825920 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.825942 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.825968 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.824200 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.825991 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.824396 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.824548 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.824684 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.824702 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.826044 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.824829 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.824879 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.825045 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.825151 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.825215 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.825345 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.825434 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.825641 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.825851 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.825874 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.826181 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.826305 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.826016 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.826609 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.826631 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.826650 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.826667 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.826685 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.826684 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.826704 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.826721 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.826740 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.826743 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.826758 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.826738 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.826774 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.826851 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.826875 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.826888 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.826915 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.826945 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.826970 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.826976 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.827001 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.827027 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.827052 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.827078 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.827102 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.827126 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.827149 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.827176 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.827199 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.827224 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.827247 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.827273 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.827298 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.827322 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.827349 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.827375 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.827399 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.827424 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.827438 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.827447 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.827457 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.827514 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.827542 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.827584 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.827615 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.827634 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.827665 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.827683 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.827701 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.827720 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.827721 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.827739 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.827758 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.827774 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.827794 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.827832 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.827871 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.827893 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.827910 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.827915 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.827962 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.827941 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.828010 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.828024 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.828056 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.828082 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.828108 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.828132 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.828157 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.828185 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.828208 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.828233 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.828257 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod 
\"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.828279 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.828316 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.828330 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.828339 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.828388 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.828411 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.828422 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.828434 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.828461 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.828502 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.828525 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.828547 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.828568 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.828591 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.828670 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.828698 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.828722 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.828769 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.828797 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.828821 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.828845 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.828873 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.828896 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.828922 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.828945 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 
15:23:43.828970 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.828991 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.829016 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.829044 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.829069 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.829094 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.829167 4755 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.829183 4755 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.829197 4755 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.829212 4755 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.829227 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.829240 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.829254 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.829266 4755 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.829280 4755 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.829293 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.829306 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.829319 4755 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.829333 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.829346 4755 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.829359 4755 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.829372 4755 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.829385 4755 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.829397 4755 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.829409 4755 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.829421 4755 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.829433 4755 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.829446 4755 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.829458 4755 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.829489 4755 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.829505 4755 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.829518 4755 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.829531 4755 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.829545 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.829557 4755 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.829570 4755 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.829582 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.829596 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.829611 4755 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.829625 4755 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.829637 4755 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.839037 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.848839 4755 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.828593 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.850413 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.850990 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.851060 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.828687 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.828782 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.829132 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.829312 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.829574 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.829846 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.830046 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.830236 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.830239 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.830875 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.831076 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.831179 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.831185 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.831353 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.851241 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.831456 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). 
InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.851257 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.831507 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.851260 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.831894 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.832141 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.831544 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.832374 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.834092 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.851329 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.834061 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.834190 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.834229 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.834321 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.834609 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.834636 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.835014 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.835019 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.835037 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.835110 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.835321 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.835421 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.835993 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.836086 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.836176 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.836915 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.851522 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.836970 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.837002 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.837012 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.837476 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.837555 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). 
InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.838600 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.838701 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.838810 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.839800 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.840825 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.841441 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.841650 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.843277 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.847605 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.847675 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.848141 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.848154 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.848489 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.848663 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.848626 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.848821 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.848919 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.849044 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.849179 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.849385 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.849652 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.848533 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). 
InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.849676 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.849703 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.849751 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.849799 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.849829 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.849904 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.849996 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.850009 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.849986 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.850194 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.850346 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.850458 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.850451 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.850671 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.850695 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.831844 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.851345 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.851551 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: E1210 15:23:43.851637 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 15:23:44.351565399 +0000 UTC m=+20.952449041 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.851794 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.851983 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.852084 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.852360 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.852398 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.852438 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.852237 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.852460 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.852772 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.852844 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.853233 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.853262 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.853275 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.853534 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.853617 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: E1210 15:23:43.853645 4755 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.851641 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.853892 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 10 15:23:43 crc kubenswrapper[4755]: E1210 15:23:43.854176 4755 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 10 15:23:43 crc kubenswrapper[4755]: E1210 15:23:43.854182 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-10 15:23:44.354152118 +0000 UTC m=+20.955035750 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 10 15:23:43 crc kubenswrapper[4755]: E1210 15:23:43.854316 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-10 15:23:44.354295202 +0000 UTC m=+20.955178834 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.854518 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.854588 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.854363 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.855169 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.855891 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.856162 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.856233 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.857774 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.859389 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.860132 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.860679 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: E1210 15:23:43.868250 4755 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 10 15:23:43 crc kubenswrapper[4755]: E1210 15:23:43.868303 4755 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 10 15:23:43 crc kubenswrapper[4755]: E1210 15:23:43.868322 4755 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 15:23:43 crc kubenswrapper[4755]: E1210 15:23:43.868418 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2025-12-10 15:23:44.368388176 +0000 UTC m=+20.969271808 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.870109 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.870674 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.870762 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: E1210 15:23:43.871931 4755 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 10 15:23:43 crc kubenswrapper[4755]: E1210 15:23:43.872057 4755 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 10 15:23:43 crc kubenswrapper[4755]: E1210 15:23:43.872137 4755 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.872173 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: E1210 15:23:43.872352 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-10 15:23:44.372325221 +0000 UTC m=+20.973208923 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.874660 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.874838 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.876575 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.877500 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.877783 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.878848 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.879116 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.879625 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.880593 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.880612 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.880868 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). 
InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.881113 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.881129 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.881207 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.881249 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.881259 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.881334 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.881741 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.881757 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.881837 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.882292 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.881904 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.882138 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.882595 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.882676 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.882806 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). 
InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.882831 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.883045 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.883081 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.883574 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.883582 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.883847 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.883858 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.884647 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). 
InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.885403 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.885585 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.885670 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.885998 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.886092 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.886291 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.886803 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.886994 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.888647 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.898052 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.900196 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.900479 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.910505 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.911290 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.920318 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:23:43 crc kubenswrapper[4755]: E1210 15:23:43.927070 4755 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-crc\" already exists" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.931967 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.932069 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.932168 4755 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.932184 4755 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.932197 4755 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.932230 4755 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.932242 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.932254 4755 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") 
on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.932267 4755 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.932300 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.932313 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.932326 4755 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.932337 4755 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.932349 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.932381 4755 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.932393 4755 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.932405 4755 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.932416 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.932428 4755 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.932456 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.932479 4755 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" 
DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.932490 4755 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.932503 4755 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.932518 4755 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.932552 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.932564 4755 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.932576 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.932588 4755 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.932599 4755 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.932631 4755 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.932642 4755 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.932653 4755 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.932664 4755 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.932674 4755 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 
15:23:43.932704 4755 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.932718 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.932729 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.932739 4755 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.932750 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.932781 4755 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.932793 4755 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.932805 4755 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.932818 4755 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.932829 4755 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.932841 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.932874 4755 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.932885 4755 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.932896 4755 reconciler_common.go:293] "Volume 
detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.932907 4755 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.932920 4755 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.932954 4755 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.932967 4755 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.932979 4755 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.932990 4755 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.933024 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.933035 4755 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.933046 4755 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.933057 4755 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.933068 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.933103 4755 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.933114 4755 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.933125 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.933136 4755 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.933148 4755 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.933158 4755 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.933187 4755 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.933198 4755 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.933209 4755 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.933219 4755 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.933230 4755 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.933259 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.933270 4755 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.933281 4755 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.933292 4755 reconciler_common.go:293] "Volume detached 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.933305 4755 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.933316 4755 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.933348 4755 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.933358 4755 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.933369 4755 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.933380 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.933392 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.933422 4755 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.933435 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.933447 4755 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.933487 4755 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.933502 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.933513 4755 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.933523 4755 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.933534 4755 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.933546 4755 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.933578 4755 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.933593 4755 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.933605 4755 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.933618 4755 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.933649 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.933662 4755 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.933673 4755 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.933686 4755 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.933696 4755 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.933706 4755 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.933736 4755 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.933748 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.933759 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.933811 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.933824 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.933837 4755 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.933849 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.933860 4755 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.933891 4755 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.933904 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.933918 4755 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.933940 4755 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.933973 4755 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.933985 4755 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.933997 4755 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.934009 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.934020 4755 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.934052 4755 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.934064 4755 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.934076 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.934087 4755 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.934098 4755 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.934130 4755 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.934146 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.934157 4755 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.934167 4755 reconciler_common.go:293] "Volume detached for volume 
\"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.934179 4755 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.934210 4755 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.934223 4755 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.934236 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.934247 4755 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.934258 4755 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.934290 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.934305 4755 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.934317 4755 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.934329 4755 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.934341 4755 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.934373 4755 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 
15:23:43.934385 4755 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.934396 4755 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.934407 4755 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.934418 4755 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.934450 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.934494 4755 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.934508 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.934521 4755 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.934532 4755 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.934542 4755 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.934572 4755 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.934585 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.934596 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.934607 4755 
reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.934618 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.934630 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.934660 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.934674 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.934685 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.934696 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.934706 4755 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.934736 4755 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.934746 4755 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.934822 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.934909 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 10 15:23:43 crc kubenswrapper[4755]: I1210 15:23:43.950667 4755 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e6db97-7e22-4fec-924e-20e90f463887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef3871ddc16d2fc1b89a6f120390cf066424bd9ed4cca98f8dea4430c40c6c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08895c3ab20248c4bd70ccfae6442bdf8e76a602e605b043f0f00166624ada17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b2467645116856afa1e1e37501c7b453b03da193a96d1075886b85839a86ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16e58653d
ca08c9b33671279553975bca9507ffeeefea161119cec624813f167\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea18ef6c319336321d97c2a55449903b837b5a47da785da3ff5308244751943\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.031970 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.039201 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.047712 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 10 15:23:44 crc kubenswrapper[4755]: W1210 15:23:44.054451 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-380cec3fc916dc8a46d6cce101c04c60847f50811d30731427762ac4ddce4373 WatchSource:0}: Error finding container 380cec3fc916dc8a46d6cce101c04c60847f50811d30731427762ac4ddce4373: Status 404 returned error can't find the container with id 380cec3fc916dc8a46d6cce101c04c60847f50811d30731427762ac4ddce4373 Dec 10 15:23:44 crc kubenswrapper[4755]: W1210 15:23:44.058965 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-b1a93533535f32a7d54e7f12df7e1f8aca766aac5e5a0ac650f688f25b9fa58b WatchSource:0}: Error finding container b1a93533535f32a7d54e7f12df7e1f8aca766aac5e5a0ac650f688f25b9fa58b: Status 404 returned error can't find the container with id b1a93533535f32a7d54e7f12df7e1f8aca766aac5e5a0ac650f688f25b9fa58b Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.320763 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-zl2tx"] Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.321060 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-ggt8v"] Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.321265 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-zl2tx" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.321443 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.321821 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-wv8fh"] Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.322181 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-wv8fh" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.323028 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-n8c6s"] Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.323422 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.323672 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.323725 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-n8c6s" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.324285 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.324726 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.325047 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.326173 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.327555 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.327557 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.327639 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.327722 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.327859 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.327987 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.328061 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.328174 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.328300 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.337702 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e6db97-7e22-4fec-924e-20e90f463887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef3871ddc16d2fc1b89a6f120390cf066424bd9ed4cca98f8dea4430c40c6c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08895c3ab20248c4bd70ccfae6442bdf8e76a602e605b043f0f00166624ada17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b2467645116856afa1e1e37501c7b453b03da193a96d1075886b85839a86ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16e58653dca08c9b33671279553975bca9507ffeeefea161119cec624813f167\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea18ef6c319336321d97c2a55449903b837b5a47da785da3ff5308244751943\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.347367 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.385829 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.388551 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.399796 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.422569 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.437625 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.437717 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.437749 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2mdm\" (UniqueName: \"kubernetes.io/projected/b132a8b9-1c99-414d-8773-229bf36b305d-kube-api-access-m2mdm\") pod \"machine-config-daemon-ggt8v\" (UID: \"b132a8b9-1c99-414d-8773-229bf36b305d\") " pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.437772 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8b56ae78-835a-45da-bc46-5adff2bdf9fd-cnibin\") pod \"multus-additional-cni-plugins-n8c6s\" (UID: \"8b56ae78-835a-45da-bc46-5adff2bdf9fd\") " pod="openshift-multus/multus-additional-cni-plugins-n8c6s" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.437795 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/d150a22e-c59a-4376-a5c8-db4085ea0be0-hosts-file\") pod \"node-resolver-wv8fh\" (UID: \"d150a22e-c59a-4376-a5c8-db4085ea0be0\") " pod="openshift-dns/node-resolver-wv8fh" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.437817 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8b56ae78-835a-45da-bc46-5adff2bdf9fd-os-release\") pod \"multus-additional-cni-plugins-n8c6s\" (UID: \"8b56ae78-835a-45da-bc46-5adff2bdf9fd\") " pod="openshift-multus/multus-additional-cni-plugins-n8c6s" Dec 10 15:23:44 crc 
kubenswrapper[4755]: I1210 15:23:44.437837 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8b56ae78-835a-45da-bc46-5adff2bdf9fd-cni-binary-copy\") pod \"multus-additional-cni-plugins-n8c6s\" (UID: \"8b56ae78-835a-45da-bc46-5adff2bdf9fd\") " pod="openshift-multus/multus-additional-cni-plugins-n8c6s" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.437858 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/796da6d5-6ccd-4786-a03e-9a8e47a55031-multus-conf-dir\") pod \"multus-zl2tx\" (UID: \"796da6d5-6ccd-4786-a03e-9a8e47a55031\") " pod="openshift-multus/multus-zl2tx" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.437879 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/796da6d5-6ccd-4786-a03e-9a8e47a55031-etc-kubernetes\") pod \"multus-zl2tx\" (UID: \"796da6d5-6ccd-4786-a03e-9a8e47a55031\") " pod="openshift-multus/multus-zl2tx" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.437901 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8b56ae78-835a-45da-bc46-5adff2bdf9fd-system-cni-dir\") pod \"multus-additional-cni-plugins-n8c6s\" (UID: \"8b56ae78-835a-45da-bc46-5adff2bdf9fd\") " pod="openshift-multus/multus-additional-cni-plugins-n8c6s" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.437923 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/796da6d5-6ccd-4786-a03e-9a8e47a55031-os-release\") pod \"multus-zl2tx\" (UID: \"796da6d5-6ccd-4786-a03e-9a8e47a55031\") " pod="openshift-multus/multus-zl2tx" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.437941 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/796da6d5-6ccd-4786-a03e-9a8e47a55031-cni-binary-copy\") pod \"multus-zl2tx\" (UID: \"796da6d5-6ccd-4786-a03e-9a8e47a55031\") " pod="openshift-multus/multus-zl2tx" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.437961 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/796da6d5-6ccd-4786-a03e-9a8e47a55031-multus-daemon-config\") pod \"multus-zl2tx\" (UID: \"796da6d5-6ccd-4786-a03e-9a8e47a55031\") " pod="openshift-multus/multus-zl2tx" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.437982 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/796da6d5-6ccd-4786-a03e-9a8e47a55031-multus-cni-dir\") pod \"multus-zl2tx\" (UID: \"796da6d5-6ccd-4786-a03e-9a8e47a55031\") " pod="openshift-multus/multus-zl2tx" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.438004 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b132a8b9-1c99-414d-8773-229bf36b305d-proxy-tls\") pod \"machine-config-daemon-ggt8v\" (UID: \"b132a8b9-1c99-414d-8773-229bf36b305d\") " 
pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.438024 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/796da6d5-6ccd-4786-a03e-9a8e47a55031-hostroot\") pod \"multus-zl2tx\" (UID: \"796da6d5-6ccd-4786-a03e-9a8e47a55031\") " pod="openshift-multus/multus-zl2tx" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.438049 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.438070 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8b56ae78-835a-45da-bc46-5adff2bdf9fd-tuning-conf-dir\") pod \"multus-additional-cni-plugins-n8c6s\" (UID: \"8b56ae78-835a-45da-bc46-5adff2bdf9fd\") " pod="openshift-multus/multus-additional-cni-plugins-n8c6s" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.438092 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/796da6d5-6ccd-4786-a03e-9a8e47a55031-host-run-k8s-cni-cncf-io\") pod \"multus-zl2tx\" (UID: \"796da6d5-6ccd-4786-a03e-9a8e47a55031\") " pod="openshift-multus/multus-zl2tx" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.438114 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/796da6d5-6ccd-4786-a03e-9a8e47a55031-host-var-lib-kubelet\") pod \"multus-zl2tx\" (UID: \"796da6d5-6ccd-4786-a03e-9a8e47a55031\") " pod="openshift-multus/multus-zl2tx" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.438136 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzg4t\" (UniqueName: \"kubernetes.io/projected/796da6d5-6ccd-4786-a03e-9a8e47a55031-kube-api-access-wzg4t\") pod \"multus-zl2tx\" (UID: \"796da6d5-6ccd-4786-a03e-9a8e47a55031\") " pod="openshift-multus/multus-zl2tx" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.438156 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/796da6d5-6ccd-4786-a03e-9a8e47a55031-cnibin\") pod \"multus-zl2tx\" (UID: \"796da6d5-6ccd-4786-a03e-9a8e47a55031\") " pod="openshift-multus/multus-zl2tx" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.438177 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/796da6d5-6ccd-4786-a03e-9a8e47a55031-system-cni-dir\") pod \"multus-zl2tx\" (UID: \"796da6d5-6ccd-4786-a03e-9a8e47a55031\") " pod="openshift-multus/multus-zl2tx" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.438200 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/796da6d5-6ccd-4786-a03e-9a8e47a55031-host-var-lib-cni-multus\") pod \"multus-zl2tx\" 
(UID: \"796da6d5-6ccd-4786-a03e-9a8e47a55031\") " pod="openshift-multus/multus-zl2tx" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.438224 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/796da6d5-6ccd-4786-a03e-9a8e47a55031-host-run-multus-certs\") pod \"multus-zl2tx\" (UID: \"796da6d5-6ccd-4786-a03e-9a8e47a55031\") " pod="openshift-multus/multus-zl2tx" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.438243 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8b56ae78-835a-45da-bc46-5adff2bdf9fd-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-n8c6s\" (UID: \"8b56ae78-835a-45da-bc46-5adff2bdf9fd\") " pod="openshift-multus/multus-additional-cni-plugins-n8c6s" Dec 10 15:23:44 crc kubenswrapper[4755]: E1210 15:23:44.438368 4755 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 10 15:23:44 crc kubenswrapper[4755]: E1210 15:23:44.438420 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 15:23:45.438386778 +0000 UTC m=+22.039270410 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.438481 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbxwk\" (UniqueName: \"kubernetes.io/projected/8b56ae78-835a-45da-bc46-5adff2bdf9fd-kube-api-access-zbxwk\") pod \"multus-additional-cni-plugins-n8c6s\" (UID: \"8b56ae78-835a-45da-bc46-5adff2bdf9fd\") " pod="openshift-multus/multus-additional-cni-plugins-n8c6s" Dec 10 15:23:44 crc kubenswrapper[4755]: E1210 15:23:44.438502 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-10 15:23:45.43847644 +0000 UTC m=+22.039360072 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.438547 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/796da6d5-6ccd-4786-a03e-9a8e47a55031-multus-socket-dir-parent\") pod \"multus-zl2tx\" (UID: \"796da6d5-6ccd-4786-a03e-9a8e47a55031\") " pod="openshift-multus/multus-zl2tx" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.438584 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/796da6d5-6ccd-4786-a03e-9a8e47a55031-host-run-netns\") pod \"multus-zl2tx\" (UID: \"796da6d5-6ccd-4786-a03e-9a8e47a55031\") " pod="openshift-multus/multus-zl2tx" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.438610 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/b132a8b9-1c99-414d-8773-229bf36b305d-rootfs\") pod \"machine-config-daemon-ggt8v\" (UID: \"b132a8b9-1c99-414d-8773-229bf36b305d\") " pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" Dec 10 15:23:44 crc kubenswrapper[4755]: E1210 15:23:44.438581 4755 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.438680 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.438703 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b132a8b9-1c99-414d-8773-229bf36b305d-mcd-auth-proxy-config\") pod \"machine-config-daemon-ggt8v\" (UID: \"b132a8b9-1c99-414d-8773-229bf36b305d\") " pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.438745 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfw9h\" (UniqueName: \"kubernetes.io/projected/d150a22e-c59a-4376-a5c8-db4085ea0be0-kube-api-access-dfw9h\") pod \"node-resolver-wv8fh\" (UID: \"d150a22e-c59a-4376-a5c8-db4085ea0be0\") " pod="openshift-dns/node-resolver-wv8fh" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.438782 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.438819 4755 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/796da6d5-6ccd-4786-a03e-9a8e47a55031-host-var-lib-cni-bin\") pod \"multus-zl2tx\" (UID: \"796da6d5-6ccd-4786-a03e-9a8e47a55031\") " pod="openshift-multus/multus-zl2tx" Dec 10 15:23:44 crc kubenswrapper[4755]: E1210 15:23:44.438962 4755 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 10 15:23:44 crc kubenswrapper[4755]: E1210 15:23:44.438984 4755 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 10 15:23:44 crc kubenswrapper[4755]: E1210 15:23:44.438996 4755 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 15:23:44 crc kubenswrapper[4755]: E1210 15:23:44.439036 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-10 15:23:45.439025964 +0000 UTC m=+22.039909596 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 10 15:23:44 crc kubenswrapper[4755]: E1210 15:23:44.439090 4755 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 10 15:23:44 crc kubenswrapper[4755]: E1210 15:23:44.439101 4755 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 10 15:23:44 crc kubenswrapper[4755]: E1210 15:23:44.439109 4755 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 15:23:44 crc kubenswrapper[4755]: E1210 15:23:44.439142 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-10 15:23:45.439132967 +0000 UTC m=+22.040016809 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 15:23:44 crc kubenswrapper[4755]: E1210 15:23:44.439204 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-10 15:23:45.439190469 +0000 UTC m=+22.040074101 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.448773 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zl2tx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"796da6d5-6ccd-4786-a03e-9a8e47a55031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzg4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zl2tx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.457113 4755 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-12-10 15:18:43 +0000 UTC, rotation deadline is 2026-09-26 08:16:02.699396057 +0000 UTC Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.457179 4755 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6952h52m18.24222029s for next certificate rotation Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.459316 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.468851 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.489605 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.518985 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.540130 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b132a8b9-1c99-414d-8773-229bf36b305d-mcd-auth-proxy-config\") pod \"machine-config-daemon-ggt8v\" (UID: \"b132a8b9-1c99-414d-8773-229bf36b305d\") " pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.540168 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfw9h\" (UniqueName: \"kubernetes.io/projected/d150a22e-c59a-4376-a5c8-db4085ea0be0-kube-api-access-dfw9h\") pod \"node-resolver-wv8fh\" (UID: \"d150a22e-c59a-4376-a5c8-db4085ea0be0\") " pod="openshift-dns/node-resolver-wv8fh" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.540190 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/796da6d5-6ccd-4786-a03e-9a8e47a55031-host-var-lib-cni-bin\") pod \"multus-zl2tx\" (UID: \"796da6d5-6ccd-4786-a03e-9a8e47a55031\") " pod="openshift-multus/multus-zl2tx" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.540212 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2mdm\" (UniqueName: \"kubernetes.io/projected/b132a8b9-1c99-414d-8773-229bf36b305d-kube-api-access-m2mdm\") pod \"machine-config-daemon-ggt8v\" (UID: \"b132a8b9-1c99-414d-8773-229bf36b305d\") " pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.540228 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8b56ae78-835a-45da-bc46-5adff2bdf9fd-cnibin\") pod \"multus-additional-cni-plugins-n8c6s\" (UID: \"8b56ae78-835a-45da-bc46-5adff2bdf9fd\") " pod="openshift-multus/multus-additional-cni-plugins-n8c6s" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.540244 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/d150a22e-c59a-4376-a5c8-db4085ea0be0-hosts-file\") pod \"node-resolver-wv8fh\" (UID: \"d150a22e-c59a-4376-a5c8-db4085ea0be0\") " pod="openshift-dns/node-resolver-wv8fh" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.540260 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/796da6d5-6ccd-4786-a03e-9a8e47a55031-multus-conf-dir\") pod \"multus-zl2tx\" (UID: \"796da6d5-6ccd-4786-a03e-9a8e47a55031\") " pod="openshift-multus/multus-zl2tx" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.540274 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/796da6d5-6ccd-4786-a03e-9a8e47a55031-etc-kubernetes\") pod \"multus-zl2tx\" (UID: \"796da6d5-6ccd-4786-a03e-9a8e47a55031\") " pod="openshift-multus/multus-zl2tx" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.540288 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8b56ae78-835a-45da-bc46-5adff2bdf9fd-system-cni-dir\") pod \"multus-additional-cni-plugins-n8c6s\" (UID: \"8b56ae78-835a-45da-bc46-5adff2bdf9fd\") " pod="openshift-multus/multus-additional-cni-plugins-n8c6s" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.540301 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8b56ae78-835a-45da-bc46-5adff2bdf9fd-os-release\") pod \"multus-additional-cni-plugins-n8c6s\" (UID: \"8b56ae78-835a-45da-bc46-5adff2bdf9fd\") " pod="openshift-multus/multus-additional-cni-plugins-n8c6s" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.540317 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8b56ae78-835a-45da-bc46-5adff2bdf9fd-cni-binary-copy\") pod \"multus-additional-cni-plugins-n8c6s\" (UID: \"8b56ae78-835a-45da-bc46-5adff2bdf9fd\") " pod="openshift-multus/multus-additional-cni-plugins-n8c6s" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.540315 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/796da6d5-6ccd-4786-a03e-9a8e47a55031-host-var-lib-cni-bin\") pod \"multus-zl2tx\" (UID: \"796da6d5-6ccd-4786-a03e-9a8e47a55031\") " pod="openshift-multus/multus-zl2tx" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.540363 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8b56ae78-835a-45da-bc46-5adff2bdf9fd-cnibin\") pod \"multus-additional-cni-plugins-n8c6s\" (UID: \"8b56ae78-835a-45da-bc46-5adff2bdf9fd\") " pod="openshift-multus/multus-additional-cni-plugins-n8c6s" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.540404 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/d150a22e-c59a-4376-a5c8-db4085ea0be0-hosts-file\") pod \"node-resolver-wv8fh\" (UID: \"d150a22e-c59a-4376-a5c8-db4085ea0be0\") " pod="openshift-dns/node-resolver-wv8fh" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.540375 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/796da6d5-6ccd-4786-a03e-9a8e47a55031-etc-kubernetes\") pod \"multus-zl2tx\" (UID: \"796da6d5-6ccd-4786-a03e-9a8e47a55031\") " pod="openshift-multus/multus-zl2tx" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.540447 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/796da6d5-6ccd-4786-a03e-9a8e47a55031-os-release\") pod \"multus-zl2tx\" (UID: 
\"796da6d5-6ccd-4786-a03e-9a8e47a55031\") " pod="openshift-multus/multus-zl2tx"
Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.540477 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/796da6d5-6ccd-4786-a03e-9a8e47a55031-multus-conf-dir\") pod \"multus-zl2tx\" (UID: \"796da6d5-6ccd-4786-a03e-9a8e47a55031\") " pod="openshift-multus/multus-zl2tx"
Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.540507 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8b56ae78-835a-45da-bc46-5adff2bdf9fd-system-cni-dir\") pod \"multus-additional-cni-plugins-n8c6s\" (UID: \"8b56ae78-835a-45da-bc46-5adff2bdf9fd\") " pod="openshift-multus/multus-additional-cni-plugins-n8c6s"
Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.540532 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/796da6d5-6ccd-4786-a03e-9a8e47a55031-cni-binary-copy\") pod \"multus-zl2tx\" (UID: \"796da6d5-6ccd-4786-a03e-9a8e47a55031\") " pod="openshift-multus/multus-zl2tx"
Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.540549 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/796da6d5-6ccd-4786-a03e-9a8e47a55031-multus-daemon-config\") pod \"multus-zl2tx\" (UID: \"796da6d5-6ccd-4786-a03e-9a8e47a55031\") " pod="openshift-multus/multus-zl2tx"
Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.540560 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8b56ae78-835a-45da-bc46-5adff2bdf9fd-os-release\") pod \"multus-additional-cni-plugins-n8c6s\" (UID: \"8b56ae78-835a-45da-bc46-5adff2bdf9fd\") " pod="openshift-multus/multus-additional-cni-plugins-n8c6s"
Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.540565 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/796da6d5-6ccd-4786-a03e-9a8e47a55031-multus-cni-dir\") pod \"multus-zl2tx\" (UID: \"796da6d5-6ccd-4786-a03e-9a8e47a55031\") " pod="openshift-multus/multus-zl2tx"
Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.540615 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b132a8b9-1c99-414d-8773-229bf36b305d-proxy-tls\") pod \"machine-config-daemon-ggt8v\" (UID: \"b132a8b9-1c99-414d-8773-229bf36b305d\") " pod="openshift-machine-config-operator/machine-config-daemon-ggt8v"
Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.540635 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/796da6d5-6ccd-4786-a03e-9a8e47a55031-hostroot\") pod \"multus-zl2tx\" (UID: \"796da6d5-6ccd-4786-a03e-9a8e47a55031\") " pod="openshift-multus/multus-zl2tx"
Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.540668 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8b56ae78-835a-45da-bc46-5adff2bdf9fd-tuning-conf-dir\") pod \"multus-additional-cni-plugins-n8c6s\" (UID: \"8b56ae78-835a-45da-bc46-5adff2bdf9fd\") " pod="openshift-multus/multus-additional-cni-plugins-n8c6s"
Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.540675 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/796da6d5-6ccd-4786-a03e-9a8e47a55031-os-release\") pod \"multus-zl2tx\" (UID: \"796da6d5-6ccd-4786-a03e-9a8e47a55031\") " pod="openshift-multus/multus-zl2tx"
Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.540688 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/796da6d5-6ccd-4786-a03e-9a8e47a55031-host-run-k8s-cni-cncf-io\") pod \"multus-zl2tx\" (UID: \"796da6d5-6ccd-4786-a03e-9a8e47a55031\") " pod="openshift-multus/multus-zl2tx"
Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.540749 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/796da6d5-6ccd-4786-a03e-9a8e47a55031-host-var-lib-kubelet\") pod \"multus-zl2tx\" (UID: \"796da6d5-6ccd-4786-a03e-9a8e47a55031\") " pod="openshift-multus/multus-zl2tx"
Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.540768 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzg4t\" (UniqueName: \"kubernetes.io/projected/796da6d5-6ccd-4786-a03e-9a8e47a55031-kube-api-access-wzg4t\") pod \"multus-zl2tx\" (UID: \"796da6d5-6ccd-4786-a03e-9a8e47a55031\") " pod="openshift-multus/multus-zl2tx"
Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.540777 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/796da6d5-6ccd-4786-a03e-9a8e47a55031-multus-cni-dir\") pod \"multus-zl2tx\" (UID: \"796da6d5-6ccd-4786-a03e-9a8e47a55031\") " pod="openshift-multus/multus-zl2tx"
Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.540784 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/796da6d5-6ccd-4786-a03e-9a8e47a55031-cnibin\") pod \"multus-zl2tx\" (UID: \"796da6d5-6ccd-4786-a03e-9a8e47a55031\") " pod="openshift-multus/multus-zl2tx"
Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.540807 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/796da6d5-6ccd-4786-a03e-9a8e47a55031-system-cni-dir\") pod \"multus-zl2tx\" (UID: \"796da6d5-6ccd-4786-a03e-9a8e47a55031\") " pod="openshift-multus/multus-zl2tx"
Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.540820 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/796da6d5-6ccd-4786-a03e-9a8e47a55031-cnibin\") pod \"multus-zl2tx\" (UID: \"796da6d5-6ccd-4786-a03e-9a8e47a55031\") " pod="openshift-multus/multus-zl2tx"
Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.540829 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/796da6d5-6ccd-4786-a03e-9a8e47a55031-host-var-lib-cni-multus\") pod \"multus-zl2tx\" (UID: \"796da6d5-6ccd-4786-a03e-9a8e47a55031\") " pod="openshift-multus/multus-zl2tx"
Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.540847 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/796da6d5-6ccd-4786-a03e-9a8e47a55031-host-run-multus-certs\") pod \"multus-zl2tx\" (UID: \"796da6d5-6ccd-4786-a03e-9a8e47a55031\") " pod="openshift-multus/multus-zl2tx"
Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.540861 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8b56ae78-835a-45da-bc46-5adff2bdf9fd-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-n8c6s\" (UID: \"8b56ae78-835a-45da-bc46-5adff2bdf9fd\") " pod="openshift-multus/multus-additional-cni-plugins-n8c6s"
Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.540878 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbxwk\" (UniqueName: \"kubernetes.io/projected/8b56ae78-835a-45da-bc46-5adff2bdf9fd-kube-api-access-zbxwk\") pod \"multus-additional-cni-plugins-n8c6s\" (UID: \"8b56ae78-835a-45da-bc46-5adff2bdf9fd\") " pod="openshift-multus/multus-additional-cni-plugins-n8c6s"
Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.540894 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/796da6d5-6ccd-4786-a03e-9a8e47a55031-multus-socket-dir-parent\") pod \"multus-zl2tx\" (UID: \"796da6d5-6ccd-4786-a03e-9a8e47a55031\") " pod="openshift-multus/multus-zl2tx"
Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.540908 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/796da6d5-6ccd-4786-a03e-9a8e47a55031-host-run-netns\") pod \"multus-zl2tx\" (UID: \"796da6d5-6ccd-4786-a03e-9a8e47a55031\") " pod="openshift-multus/multus-zl2tx"
Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.540924 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/b132a8b9-1c99-414d-8773-229bf36b305d-rootfs\") pod \"machine-config-daemon-ggt8v\" (UID: \"b132a8b9-1c99-414d-8773-229bf36b305d\") " pod="openshift-machine-config-operator/machine-config-daemon-ggt8v"
Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.540967 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/b132a8b9-1c99-414d-8773-229bf36b305d-rootfs\") pod \"machine-config-daemon-ggt8v\" (UID: \"b132a8b9-1c99-414d-8773-229bf36b305d\") " pod="openshift-machine-config-operator/machine-config-daemon-ggt8v"
Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.540997 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/796da6d5-6ccd-4786-a03e-9a8e47a55031-system-cni-dir\") pod \"multus-zl2tx\" (UID: \"796da6d5-6ccd-4786-a03e-9a8e47a55031\") " pod="openshift-multus/multus-zl2tx"
Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.540997 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8b56ae78-835a-45da-bc46-5adff2bdf9fd-cni-binary-copy\") pod \"multus-additional-cni-plugins-n8c6s\" (UID: \"8b56ae78-835a-45da-bc46-5adff2bdf9fd\") " pod="openshift-multus/multus-additional-cni-plugins-n8c6s"
Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.541020 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/796da6d5-6ccd-4786-a03e-9a8e47a55031-host-var-lib-cni-multus\") pod \"multus-zl2tx\" (UID: \"796da6d5-6ccd-4786-a03e-9a8e47a55031\") " pod="openshift-multus/multus-zl2tx"
Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.541045 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/796da6d5-6ccd-4786-a03e-9a8e47a55031-host-run-multus-certs\") pod \"multus-zl2tx\" (UID: \"796da6d5-6ccd-4786-a03e-9a8e47a55031\") " pod="openshift-multus/multus-zl2tx"
Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.541040 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/796da6d5-6ccd-4786-a03e-9a8e47a55031-host-run-k8s-cni-cncf-io\") pod \"multus-zl2tx\" (UID: \"796da6d5-6ccd-4786-a03e-9a8e47a55031\") " pod="openshift-multus/multus-zl2tx"
Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.541076 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/796da6d5-6ccd-4786-a03e-9a8e47a55031-multus-socket-dir-parent\") pod \"multus-zl2tx\" (UID: \"796da6d5-6ccd-4786-a03e-9a8e47a55031\") " pod="openshift-multus/multus-zl2tx"
Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.541099 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/796da6d5-6ccd-4786-a03e-9a8e47a55031-host-run-netns\") pod \"multus-zl2tx\" (UID: \"796da6d5-6ccd-4786-a03e-9a8e47a55031\") " pod="openshift-multus/multus-zl2tx"
Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.541104 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/796da6d5-6ccd-4786-a03e-9a8e47a55031-host-var-lib-kubelet\") pod \"multus-zl2tx\" (UID: \"796da6d5-6ccd-4786-a03e-9a8e47a55031\") " pod="openshift-multus/multus-zl2tx"
Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.540750 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/796da6d5-6ccd-4786-a03e-9a8e47a55031-hostroot\") pod \"multus-zl2tx\" (UID: \"796da6d5-6ccd-4786-a03e-9a8e47a55031\") " pod="openshift-multus/multus-zl2tx"
Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.541096 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b132a8b9-1c99-414d-8773-229bf36b305d-mcd-auth-proxy-config\") pod \"machine-config-daemon-ggt8v\" (UID: \"b132a8b9-1c99-414d-8773-229bf36b305d\") " pod="openshift-machine-config-operator/machine-config-daemon-ggt8v"
Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.541118 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8b56ae78-835a-45da-bc46-5adff2bdf9fd-tuning-conf-dir\") pod \"multus-additional-cni-plugins-n8c6s\" (UID: \"8b56ae78-835a-45da-bc46-5adff2bdf9fd\") " pod="openshift-multus/multus-additional-cni-plugins-n8c6s"
Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.541251 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/796da6d5-6ccd-4786-a03e-9a8e47a55031-cni-binary-copy\") pod \"multus-zl2tx\" (UID: \"796da6d5-6ccd-4786-a03e-9a8e47a55031\") " pod="openshift-multus/multus-zl2tx"
Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.541272 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/796da6d5-6ccd-4786-a03e-9a8e47a55031-multus-daemon-config\") pod \"multus-zl2tx\" (UID: \"796da6d5-6ccd-4786-a03e-9a8e47a55031\") " pod="openshift-multus/multus-zl2tx"
Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.541601 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8b56ae78-835a-45da-bc46-5adff2bdf9fd-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-n8c6s\" (UID: \"8b56ae78-835a-45da-bc46-5adff2bdf9fd\") " pod="openshift-multus/multus-additional-cni-plugins-n8c6s"
Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.544062 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e6db97-7e22-4fec-924e-20e90f463887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef3871ddc16d2fc1b89a6f120390cf066424bd9ed4cca98f8dea4430c40c6c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08895c3ab20248c4bd70ccfae6442bdf8e76a602e605b043f0f00166624ada17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b2467645116856afa1e1e37501c7b453b03da193a96d1075886b85839a86ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16e58653dca08c9b33671279553975bca9507ffeeefea161119cec624813f167\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea18ef6c319336321d97c2a55449903b837b5a47da785da3ff5308244751943\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.545609 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b132a8b9-1c99-414d-8773-229bf36b305d-proxy-tls\") pod \"machine-config-daemon-ggt8v\" (UID: \"b132a8b9-1c99-414d-8773-229bf36b305d\") " pod="openshift-machine-config-operator/machine-config-daemon-ggt8v"
Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.554310 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.558074 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2mdm\" (UniqueName: \"kubernetes.io/projected/b132a8b9-1c99-414d-8773-229bf36b305d-kube-api-access-m2mdm\") pod \"machine-config-daemon-ggt8v\" (UID: \"b132a8b9-1c99-414d-8773-229bf36b305d\") " pod="openshift-machine-config-operator/machine-config-daemon-ggt8v"
Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.562153 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbxwk\" (UniqueName: \"kubernetes.io/projected/8b56ae78-835a-45da-bc46-5adff2bdf9fd-kube-api-access-zbxwk\") pod \"multus-additional-cni-plugins-n8c6s\" (UID: \"8b56ae78-835a-45da-bc46-5adff2bdf9fd\") " pod="openshift-multus/multus-additional-cni-plugins-n8c6s"
Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.568945 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzg4t\" (UniqueName: \"kubernetes.io/projected/796da6d5-6ccd-4786-a03e-9a8e47a55031-kube-api-access-wzg4t\") pod \"multus-zl2tx\" (UID: \"796da6d5-6ccd-4786-a03e-9a8e47a55031\") " pod="openshift-multus/multus-zl2tx"
Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.569242 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfw9h\" (UniqueName: \"kubernetes.io/projected/d150a22e-c59a-4376-a5c8-db4085ea0be0-kube-api-access-dfw9h\") pod \"node-resolver-wv8fh\" (UID: \"d150a22e-c59a-4376-a5c8-db4085ea0be0\") " pod="openshift-dns/node-resolver-wv8fh"
Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.571865 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.584403 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zl2tx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"796da6d5-6ccd-4786-a03e-9a8e47a55031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzg4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zl2tx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.594429 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b132a8b9-1c99-414d-8773-229bf36b305d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2mdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2mdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ggt8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.606490 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.616815 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aed6bb9-78c2-410b-9c58-b60ab22a7bd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70d014e7227746c46b30f8f5a1f307a422d2fa0d4b98d98bfe5ba6217223489e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://104d730ea666ffd2d639ba7fc8d1ac5b444586788a62656fd260cb249f82b1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09dbec85547c7170bb9551e5657876814d48528e3047daf3547711a563d2b6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8bf1de0d8bdc0a20bc42ba5097d849ae0f507e5fbc18fb17b4ff3650e46ff0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.634353 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n8c6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b56ae78-835a-45da-bc46-5adff2bdf9fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n8c6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:44Z is after 2025-08-24T17:21:41Z"
Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.635422 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-zl2tx"
Dec 10 15:23:44 crc kubenswrapper[4755]: W1210 15:23:44.647955 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod796da6d5_6ccd_4786_a03e_9a8e47a55031.slice/crio-58d9ca0bd4033738d8926038c5e42edfdb60130ca7cb98e028ce0211ff2071a7 WatchSource:0}: Error finding container 58d9ca0bd4033738d8926038c5e42edfdb60130ca7cb98e028ce0211ff2071a7: Status 404 returned error can't find the container with id 58d9ca0bd4033738d8926038c5e42edfdb60130ca7cb98e028ce0211ff2071a7
Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.654169 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:44Z is after 2025-08-24T17:21:41Z"
Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.663815 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v"
Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.665134 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wv8fh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d150a22e-c59a-4376-a5c8-db4085ea0be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfw9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wv8fh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:44Z is after 2025-08-24T17:21:41Z"
Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.681838 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-wv8fh"
Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.686587 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-n8c6s"
Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.686577 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:44Z is after 2025-08-24T17:21:41Z"
Dec 10 15:23:44 crc kubenswrapper[4755]: W1210 15:23:44.699661 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd150a22e_c59a_4376_a5c8_db4085ea0be0.slice/crio-c17e8b3555ac2aceb1d671f4603afcd20a45e33fd23318a1e7cc53c19eef8bc1 WatchSource:0}: Error finding container c17e8b3555ac2aceb1d671f4603afcd20a45e33fd23318a1e7cc53c19eef8bc1: Status 404 returned error can't find the container with id c17e8b3555ac2aceb1d671f4603afcd20a45e33fd23318a1e7cc53c19eef8bc1
Dec 10 15:23:44 crc kubenswrapper[4755]: W1210 15:23:44.702427 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b56ae78_835a_45da_bc46_5adff2bdf9fd.slice/crio-1ce605df32c75f0cf50d9ac959cf362cb7d6f4e6c4701f95ae2483e545a8fc03 WatchSource:0}: Error finding container 1ce605df32c75f0cf50d9ac959cf362cb7d6f4e6c4701f95ae2483e545a8fc03: Status 404 returned error can't find the container with id 1ce605df32c75f0cf50d9ac959cf362cb7d6f4e6c4701f95ae2483e545a8fc03
Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.732105 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6lfvk"]
Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.735492 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk"
Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.740205 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.740238 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.740352 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.740456 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.740550 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.740703 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.741866 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.750727 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:44Z is after 2025-08-24T17:21:41Z"
Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.764939 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:44Z is after 2025-08-24T17:21:41Z"
Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.775577 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wv8fh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d150a22e-c59a-4376-a5c8-db4085ea0be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfw9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wv8fh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:44Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.793455 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e6db97-7e22-4fec-924e-20e90f463887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef3871ddc16d2fc1b89a6f120390cf066424bd9ed4cca98f8dea4430c40c6c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08895c3ab20248c4bd70ccfae6442bdf8e76a602e605b043f0f00166624ada17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b2467645116856afa1e1e37501c7b453b03da193a96d1075886b85839a86ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16e58653dca08c9b33671279553975bca9507ffeeefea161119cec624813f167\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea18ef6c319336321d97c2a55449903b837b5a47da785da3ff5308244751943\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:44Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.807436 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:44Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.828371 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b1da51a-99c9-4f8e-920d-ce0973af6370\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6lfvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:44Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:44 crc kubenswrapper[4755]: 
I1210 15:23:44.840646 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:44Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.843947 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-run-openvswitch\") pod \"ovnkube-node-6lfvk\" (UID: \"4b1da51a-99c9-4f8e-920d-ce0973af6370\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.843973 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-host-cni-bin\") pod \"ovnkube-node-6lfvk\" (UID: \"4b1da51a-99c9-4f8e-920d-ce0973af6370\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.843990 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmtm2\" (UniqueName: 
\"kubernetes.io/projected/4b1da51a-99c9-4f8e-920d-ce0973af6370-kube-api-access-zmtm2\") pod \"ovnkube-node-6lfvk\" (UID: \"4b1da51a-99c9-4f8e-920d-ce0973af6370\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.844025 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-log-socket\") pod \"ovnkube-node-6lfvk\" (UID: \"4b1da51a-99c9-4f8e-920d-ce0973af6370\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.844046 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-systemd-units\") pod \"ovnkube-node-6lfvk\" (UID: \"4b1da51a-99c9-4f8e-920d-ce0973af6370\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.844063 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-node-log\") pod \"ovnkube-node-6lfvk\" (UID: \"4b1da51a-99c9-4f8e-920d-ce0973af6370\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.844078 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6lfvk\" (UID: \"4b1da51a-99c9-4f8e-920d-ce0973af6370\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.844099 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-host-run-netns\") pod \"ovnkube-node-6lfvk\" (UID: \"4b1da51a-99c9-4f8e-920d-ce0973af6370\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.844118 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-run-systemd\") pod \"ovnkube-node-6lfvk\" (UID: \"4b1da51a-99c9-4f8e-920d-ce0973af6370\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.844133 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-host-run-ovn-kubernetes\") pod \"ovnkube-node-6lfvk\" (UID: \"4b1da51a-99c9-4f8e-920d-ce0973af6370\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.844146 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-host-cni-netd\") pod \"ovnkube-node-6lfvk\" (UID: \"4b1da51a-99c9-4f8e-920d-ce0973af6370\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.844197 4755 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4b1da51a-99c9-4f8e-920d-ce0973af6370-ovnkube-config\") pod \"ovnkube-node-6lfvk\" (UID: \"4b1da51a-99c9-4f8e-920d-ce0973af6370\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.844273 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-etc-openvswitch\") pod \"ovnkube-node-6lfvk\" (UID: \"4b1da51a-99c9-4f8e-920d-ce0973af6370\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.844309 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4b1da51a-99c9-4f8e-920d-ce0973af6370-ovnkube-script-lib\") pod \"ovnkube-node-6lfvk\" (UID: \"4b1da51a-99c9-4f8e-920d-ce0973af6370\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.844343 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-host-kubelet\") pod \"ovnkube-node-6lfvk\" (UID: \"4b1da51a-99c9-4f8e-920d-ce0973af6370\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.844371 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-run-ovn\") pod \"ovnkube-node-6lfvk\" (UID: \"4b1da51a-99c9-4f8e-920d-ce0973af6370\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.844404 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4b1da51a-99c9-4f8e-920d-ce0973af6370-env-overrides\") pod \"ovnkube-node-6lfvk\" (UID: \"4b1da51a-99c9-4f8e-920d-ce0973af6370\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.844449 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-var-lib-openvswitch\") pod \"ovnkube-node-6lfvk\" (UID: \"4b1da51a-99c9-4f8e-920d-ce0973af6370\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.844555 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-host-slash\") pod \"ovnkube-node-6lfvk\" (UID: \"4b1da51a-99c9-4f8e-920d-ce0973af6370\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.844592 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4b1da51a-99c9-4f8e-920d-ce0973af6370-ovn-node-metrics-cert\") pod \"ovnkube-node-6lfvk\" (UID: \"4b1da51a-99c9-4f8e-920d-ce0973af6370\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.861074 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:44Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.875798 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"b1a93533535f32a7d54e7f12df7e1f8aca766aac5e5a0ac650f688f25b9fa58b"} Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.876522 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:44Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.876927 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"b7d089a8bddfa1f80d29011c7a6bab0a300f7dd44bdb2864f86951ebbb9ebea7"} Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.876963 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"c33ffbde7a69ee527169899eab188b348c362c6e70aa5b8518cb8d6abc7fd7b0"} Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.878194 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-n8c6s" event={"ID":"8b56ae78-835a-45da-bc46-5adff2bdf9fd","Type":"ContainerStarted","Data":"1ce605df32c75f0cf50d9ac959cf362cb7d6f4e6c4701f95ae2483e545a8fc03"} Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.881499 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-wv8fh" event={"ID":"d150a22e-c59a-4376-a5c8-db4085ea0be0","Type":"ContainerStarted","Data":"c17e8b3555ac2aceb1d671f4603afcd20a45e33fd23318a1e7cc53c19eef8bc1"} Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.883198 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zl2tx" event={"ID":"796da6d5-6ccd-4786-a03e-9a8e47a55031","Type":"ContainerStarted","Data":"de63a123c46563bd8cd07e669d192bc8b019a889b9bdb7af1b988872c8f1fc48"} Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.883222 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zl2tx" event={"ID":"796da6d5-6ccd-4786-a03e-9a8e47a55031","Type":"ContainerStarted","Data":"58d9ca0bd4033738d8926038c5e42edfdb60130ca7cb98e028ce0211ff2071a7"} Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.888320 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"01ddc4056319db1f69268dcae192c3cb9db6c25284305803ae7588e59f77c346"} Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.888367 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"aae6bf174cdc9bde18d7c959e976454e73c1e67642f0158d365b79582f63f3cc"} Dec 10 15:23:44 
crc kubenswrapper[4755]: I1210 15:23:44.888378 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"380cec3fc916dc8a46d6cce101c04c60847f50811d30731427762ac4ddce4373"} Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.891652 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" event={"ID":"b132a8b9-1c99-414d-8773-229bf36b305d","Type":"ContainerStarted","Data":"a40c4bdaa23a60a665b8f565720d79b68cac62d40246be94fc6cd314b1bb3656"} Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.891686 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" event={"ID":"b132a8b9-1c99-414d-8773-229bf36b305d","Type":"ContainerStarted","Data":"e1ba527a2c00f911375a242e113c3bcaf4ad0a4c6e52153e895c16932d4f0779"} Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.898407 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zl2tx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"796da6d5-6ccd-4786-a03e-9a8e47a55031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzg4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zl2tx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:44Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.911809 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b132a8b9-1c99-414d-8773-229bf36b305d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2mdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2mdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ggt8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:44Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.925695 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aed6bb9-78c2-410b-9c58-b60ab22a7bd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70d014e7227746c46b30f8f5a1f307a422d2fa0d4b98d98bfe5ba6217223489e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://104d730ea666ffd2d639ba7fc8d1ac5b444586788a62656fd260cb249f82b1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09dbec85547c7170bb9551e5657876814d48528e3047daf3547711a563d2b6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\
\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8bf1de0d8bdc0a20bc42ba5097d849ae0f507e5fbc18fb17b4ff3650e46ff0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:44Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.944749 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n8c6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b56ae78-835a-45da-bc46-5adff2bdf9fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n8c6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:44Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.944979 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-host-cni-bin\") pod \"ovnkube-node-6lfvk\" (UID: \"4b1da51a-99c9-4f8e-920d-ce0973af6370\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.945017 4755 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmtm2\" (UniqueName: \"kubernetes.io/projected/4b1da51a-99c9-4f8e-920d-ce0973af6370-kube-api-access-zmtm2\") pod \"ovnkube-node-6lfvk\" (UID: \"4b1da51a-99c9-4f8e-920d-ce0973af6370\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.945061 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-log-socket\") pod \"ovnkube-node-6lfvk\" (UID: \"4b1da51a-99c9-4f8e-920d-ce0973af6370\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.945081 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-systemd-units\") pod \"ovnkube-node-6lfvk\" (UID: \"4b1da51a-99c9-4f8e-920d-ce0973af6370\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.945130 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-host-run-netns\") pod \"ovnkube-node-6lfvk\" (UID: \"4b1da51a-99c9-4f8e-920d-ce0973af6370\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.945150 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-run-systemd\") pod \"ovnkube-node-6lfvk\" (UID: \"4b1da51a-99c9-4f8e-920d-ce0973af6370\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.945169 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-node-log\") pod \"ovnkube-node-6lfvk\" (UID: \"4b1da51a-99c9-4f8e-920d-ce0973af6370\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.945190 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6lfvk\" (UID: \"4b1da51a-99c9-4f8e-920d-ce0973af6370\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.945211 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-host-run-ovn-kubernetes\") pod \"ovnkube-node-6lfvk\" (UID: \"4b1da51a-99c9-4f8e-920d-ce0973af6370\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.945230 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-host-cni-netd\") pod \"ovnkube-node-6lfvk\" (UID: \"4b1da51a-99c9-4f8e-920d-ce0973af6370\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.945250 4755 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4b1da51a-99c9-4f8e-920d-ce0973af6370-ovnkube-config\") pod \"ovnkube-node-6lfvk\" (UID: \"4b1da51a-99c9-4f8e-920d-ce0973af6370\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.945280 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-etc-openvswitch\") pod \"ovnkube-node-6lfvk\" (UID: \"4b1da51a-99c9-4f8e-920d-ce0973af6370\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.945319 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4b1da51a-99c9-4f8e-920d-ce0973af6370-ovnkube-script-lib\") pod \"ovnkube-node-6lfvk\" (UID: \"4b1da51a-99c9-4f8e-920d-ce0973af6370\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.945322 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-host-cni-bin\") pod \"ovnkube-node-6lfvk\" (UID: \"4b1da51a-99c9-4f8e-920d-ce0973af6370\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.945339 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-host-kubelet\") pod \"ovnkube-node-6lfvk\" (UID: \"4b1da51a-99c9-4f8e-920d-ce0973af6370\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.945371 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-run-ovn\") pod \"ovnkube-node-6lfvk\" (UID: \"4b1da51a-99c9-4f8e-920d-ce0973af6370\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.945426 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4b1da51a-99c9-4f8e-920d-ce0973af6370-env-overrides\") pod \"ovnkube-node-6lfvk\" (UID: \"4b1da51a-99c9-4f8e-920d-ce0973af6370\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.945488 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-var-lib-openvswitch\") pod \"ovnkube-node-6lfvk\" (UID: \"4b1da51a-99c9-4f8e-920d-ce0973af6370\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.945511 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-host-slash\") pod \"ovnkube-node-6lfvk\" (UID: \"4b1da51a-99c9-4f8e-920d-ce0973af6370\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.945533 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/4b1da51a-99c9-4f8e-920d-ce0973af6370-ovn-node-metrics-cert\") pod \"ovnkube-node-6lfvk\" (UID: \"4b1da51a-99c9-4f8e-920d-ce0973af6370\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.945558 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-run-openvswitch\") pod \"ovnkube-node-6lfvk\" (UID: \"4b1da51a-99c9-4f8e-920d-ce0973af6370\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.945630 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-run-openvswitch\") pod \"ovnkube-node-6lfvk\" (UID: \"4b1da51a-99c9-4f8e-920d-ce0973af6370\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.945679 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6lfvk\" (UID: \"4b1da51a-99c9-4f8e-920d-ce0973af6370\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.945712 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-host-run-ovn-kubernetes\") pod \"ovnkube-node-6lfvk\" (UID: \"4b1da51a-99c9-4f8e-920d-ce0973af6370\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.945718 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-host-run-netns\") pod \"ovnkube-node-6lfvk\" (UID: \"4b1da51a-99c9-4f8e-920d-ce0973af6370\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.945743 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-host-cni-netd\") pod \"ovnkube-node-6lfvk\" (UID: \"4b1da51a-99c9-4f8e-920d-ce0973af6370\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.945854 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-log-socket\") pod \"ovnkube-node-6lfvk\" (UID: \"4b1da51a-99c9-4f8e-920d-ce0973af6370\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.945896 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-systemd-units\") pod \"ovnkube-node-6lfvk\" (UID: \"4b1da51a-99c9-4f8e-920d-ce0973af6370\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.945920 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-run-systemd\") pod 
\"ovnkube-node-6lfvk\" (UID: \"4b1da51a-99c9-4f8e-920d-ce0973af6370\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.945944 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-run-ovn\") pod \"ovnkube-node-6lfvk\" (UID: \"4b1da51a-99c9-4f8e-920d-ce0973af6370\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.946232 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-host-kubelet\") pod \"ovnkube-node-6lfvk\" (UID: \"4b1da51a-99c9-4f8e-920d-ce0973af6370\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.945373 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-node-log\") pod \"ovnkube-node-6lfvk\" (UID: \"4b1da51a-99c9-4f8e-920d-ce0973af6370\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.946319 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-etc-openvswitch\") pod \"ovnkube-node-6lfvk\" (UID: \"4b1da51a-99c9-4f8e-920d-ce0973af6370\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.946347 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-host-slash\") pod \"ovnkube-node-6lfvk\" (UID: \"4b1da51a-99c9-4f8e-920d-ce0973af6370\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.946370 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-var-lib-openvswitch\") pod \"ovnkube-node-6lfvk\" (UID: \"4b1da51a-99c9-4f8e-920d-ce0973af6370\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.946445 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4b1da51a-99c9-4f8e-920d-ce0973af6370-ovnkube-config\") pod \"ovnkube-node-6lfvk\" (UID: \"4b1da51a-99c9-4f8e-920d-ce0973af6370\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.946955 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4b1da51a-99c9-4f8e-920d-ce0973af6370-env-overrides\") pod \"ovnkube-node-6lfvk\" (UID: \"4b1da51a-99c9-4f8e-920d-ce0973af6370\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.947113 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4b1da51a-99c9-4f8e-920d-ce0973af6370-ovnkube-script-lib\") pod \"ovnkube-node-6lfvk\" (UID: \"4b1da51a-99c9-4f8e-920d-ce0973af6370\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.950506 4755 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4b1da51a-99c9-4f8e-920d-ce0973af6370-ovn-node-metrics-cert\") pod \"ovnkube-node-6lfvk\" (UID: \"4b1da51a-99c9-4f8e-920d-ce0973af6370\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.968949 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d089a8bddfa1f80d29011c7a6bab0a300f7dd44bdb2864f86951ebbb9ebea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:44Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.970544 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmtm2\" (UniqueName: \"kubernetes.io/projected/4b1da51a-99c9-4f8e-920d-ce0973af6370-kube-api-access-zmtm2\") pod \"ovnkube-node-6lfvk\" (UID: \"4b1da51a-99c9-4f8e-920d-ce0973af6370\") " pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" Dec 10 15:23:44 crc kubenswrapper[4755]: I1210 15:23:44.987522 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01ddc4056319db1f69268dcae192c3cb9db6c25284305803ae7588e59f77c346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae6bf174cdc9bde18d7c959e976454e73c1e67642f0158d365b79582f63f3cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:44Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.009253 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:45Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.027040 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zl2tx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"796da6d5-6ccd-4786-a03e-9a8e47a55031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzg4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zl2tx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:45Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.045328 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b132a8b9-1c99-414d-8773-229bf36b305d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2mdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2mdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ggt8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:45Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.056823 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aed6bb9-78c2-410b-9c58-b60ab22a7bd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70d014e7227746c46b30f8f5a1f307a422d2fa0d4b98d98bfe5ba6217223489e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://104d730ea666ffd2d639ba7fc8d1ac5b444586788a62656fd260cb249f82b1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09dbec85547c7170bb9551e5657876814d48528e3047daf3547711a563d2b6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\
\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8bf1de0d8bdc0a20bc42ba5097d849ae0f507e5fbc18fb17b4ff3650e46ff0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:45Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.064014 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.073723 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n8c6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b56ae78-835a-45da-bc46-5adff2bdf9fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n8c6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:45Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:45 crc kubenswrapper[4755]: W1210 15:23:45.076700 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b1da51a_99c9_4f8e_920d_ce0973af6370.slice/crio-f2515ff5ebf31c831fce05186e1650d702d16175753caf69db7cd998523f15f3 WatchSource:0}: Error finding container f2515ff5ebf31c831fce05186e1650d702d16175753caf69db7cd998523f15f3: Status 404 returned error can't find 
the container with id f2515ff5ebf31c831fce05186e1650d702d16175753caf69db7cd998523f15f3 Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.102016 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:45Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.121303 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:45Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.140960 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wv8fh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d150a22e-c59a-4376-a5c8-db4085ea0be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfw9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wv8fh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:45Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.171903 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e6db97-7e22-4fec-924e-20e90f463887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef3871ddc16d2fc1b89a6f120390cf066424bd9ed4cca98f8dea4430c40c6c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08895c3ab20248c4bd70ccfae6442bdf8e76a602e605b043f0f00166624ada17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-oper
ator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b2467645116856afa1e1e37501c7b453b03da193a96d1075886b85839a86ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16e58653dca08c9b33671279553975bca9507ffeeefea161119cec624813f167\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea18ef6c319336321d97c2a55449903b837b5a47da785da3ff5308244751943\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:45Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.190523 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:45Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.210260 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b1da51a-99c9-4f8e-920d-ce0973af6370\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6lfvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:45Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.450951 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 
10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.451053 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.451083 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 15:23:45 crc kubenswrapper[4755]: E1210 15:23:45.451107 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 15:23:47.451082303 +0000 UTC m=+24.051965935 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.451150 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.451185 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 15:23:45 crc kubenswrapper[4755]: E1210 15:23:45.451233 4755 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 10 15:23:45 crc kubenswrapper[4755]: E1210 15:23:45.451267 4755 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 10 15:23:45 crc kubenswrapper[4755]: E1210 15:23:45.451282 4755 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 10 15:23:45 crc kubenswrapper[4755]: E1210 15:23:45.451294 4755 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 15:23:45 crc kubenswrapper[4755]: E1210 15:23:45.451297 4755 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 10 15:23:45 crc kubenswrapper[4755]: E1210 15:23:45.451317 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-10 15:23:47.451300068 +0000 UTC m=+24.052183700 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 10 15:23:45 crc kubenswrapper[4755]: E1210 15:23:45.451337 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-10 15:23:47.451323519 +0000 UTC m=+24.052207151 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 15:23:45 crc kubenswrapper[4755]: E1210 15:23:45.451355 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-10 15:23:47.45134903 +0000 UTC m=+24.052232662 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 10 15:23:45 crc kubenswrapper[4755]: E1210 15:23:45.451366 4755 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 10 15:23:45 crc kubenswrapper[4755]: E1210 15:23:45.451376 4755 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 10 15:23:45 crc kubenswrapper[4755]: E1210 15:23:45.451382 4755 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 15:23:45 crc kubenswrapper[4755]: E1210 15:23:45.451412 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-10 15:23:47.451403731 +0000 UTC m=+24.052287473 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.757373 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 15:23:45 crc kubenswrapper[4755]: E1210 15:23:45.757800 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.757444 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 15:23:45 crc kubenswrapper[4755]: E1210 15:23:45.757888 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.757404 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 15:23:45 crc kubenswrapper[4755]: E1210 15:23:45.757974 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.761901 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.762756 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.763505 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.764396 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.765049 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.765858 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.766649 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.767196 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.767827 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.768307 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.768871 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.769532 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.770033 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.771277 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.772034 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.772691 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.773366 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.773799 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.774341 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.774991 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.775455 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.776001 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.776430 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.777048 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.777534 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.778233 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" 
path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.778862 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.779324 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.781026 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.781811 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.782276 4755 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.782413 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.783900 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.784574 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.785076 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.786452 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.787324 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.788056 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.788882 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.790267 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.791198 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.792049 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.792978 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.793805 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.794296 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.794879 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.795621 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.796436 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.797066 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.797672 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.798145 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.798731 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.799284 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.799798 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" 
path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.800347 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.813064 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e6db97-7e22-4fec-924e-20e90f463887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef3871ddc16d2fc1b89a6f120390cf066424bd9ed4cca98f8dea4430c40c6c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08895c3ab20248c4bd70ccfae6442bdf8e76a602e605b043f0f00166624ada17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b2467645116856afa1e1e37501c7b453b03da193a96d1075886b85839a86ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\
\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16e58653dca08c9b33671279553975bca9507ffeeefea161119cec624813f167\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea18ef6c319336321d97c2a55449903b837b5a47da785da3ff5308244751943\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:45Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.813692 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.814698 4755 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.825674 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:45Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.846928 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b1da51a-99c9-4f8e-920d-ce0973af6370\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-node-6lfvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:45Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.860168 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d089a8bddfa1f80d29011c7a6bab0a300f7dd44bdb2864f86951ebbb9ebea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:45Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.876952 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01ddc4056319db1f69268dcae192c3cb9db6c25284305803ae7588e59f77c346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae6bf174cdc9bde18d7c959e976454e73c1e67642f0158d365b79582f63f3cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:45Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.889988 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:45Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.920593 4755 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.920723 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zl2tx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"796da6d5-6ccd-4786-a03e-9a8e47a55031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzg4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zl2tx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:45Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.924730 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.924761 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.924770 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:45 crc kubenswrapper[4755]: 
I1210 15:23:45.924871 4755 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.934674 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-wv8fh" event={"ID":"d150a22e-c59a-4376-a5c8-db4085ea0be0","Type":"ContainerStarted","Data":"58d106c5d1b9525ec821d009ca556449cd8d7f0e1b9c8ec7dd969df996e73625"} Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.943933 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b132a8b9-1c99-414d-8773-229bf36b305d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2mdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2mdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-ggt8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:45Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.947793 4755 kubelet_node_status.go:115] "Node was previously registered" node="crc" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.948119 4755 kubelet_node_status.go:79] "Successfully registered node" node="crc" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.953401 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.953459 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.953493 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.953518 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.953536 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:45Z","lastTransitionTime":"2025-12-10T15:23:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.958153 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" event={"ID":"b132a8b9-1c99-414d-8773-229bf36b305d","Type":"ContainerStarted","Data":"f5a5a59e9f156fb791ec822c2d5efe3fc6ec0e84bfcb2b6f5da81396951984c0"} Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.966546 4755 generic.go:334] "Generic (PLEG): container finished" podID="4b1da51a-99c9-4f8e-920d-ce0973af6370" containerID="375efb27cf6bef06a6a0ffc8f6b2963e1b9ffbca18b3c8e7b402bc552a411f6f" exitCode=0 Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.966699 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" event={"ID":"4b1da51a-99c9-4f8e-920d-ce0973af6370","Type":"ContainerDied","Data":"375efb27cf6bef06a6a0ffc8f6b2963e1b9ffbca18b3c8e7b402bc552a411f6f"} Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.966759 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" event={"ID":"4b1da51a-99c9-4f8e-920d-ce0973af6370","Type":"ContainerStarted","Data":"f2515ff5ebf31c831fce05186e1650d702d16175753caf69db7cd998523f15f3"} Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.968943 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aed6bb9-78c2-410b-9c58-b60ab22a7bd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70d014e7227746c46b30f8f5a1f307a422d2fa0d4b98d98bfe5ba6217223489e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://104d730ea666ffd2d639ba7fc8d1ac5b444586788a62656fd260cb249f82b1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09dbec85547c7170bb9551e5657876814d48528e3047daf3547711a563d2b6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8bf1de0d8bdc0a20bc42ba5097d849ae0f507e5fbc18fb17b4ff3650e46ff0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\"
,\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:45Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.970602 4755 generic.go:334] "Generic (PLEG): container finished" podID="8b56ae78-835a-45da-bc46-5adff2bdf9fd" containerID="0b9a1485f6d0c0e53fd3505623b2788476bc00abea2f11650386eac40e449bcc" exitCode=0 Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.970680 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-n8c6s" event={"ID":"8b56ae78-835a-45da-bc46-5adff2bdf9fd","Type":"ContainerDied","Data":"0b9a1485f6d0c0e53fd3505623b2788476bc00abea2f11650386eac40e449bcc"} Dec 10 15:23:45 crc kubenswrapper[4755]: E1210 15:23:45.987397 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ba232303-88d5-4931-b82e-34d9a0e5c06a\\\",\\\"systemUUID\\\":\\\"ebd59de0-c6b0-47c1-bc17-6f665dcf344d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:45Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.993653 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.993690 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.993700 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.993716 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.993727 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:45Z","lastTransitionTime":"2025-12-10T15:23:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:23:45 crc kubenswrapper[4755]: I1210 15:23:45.995875 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n8c6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b56ae78-835a-45da-bc46-5adff2bdf9fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n8c6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:45Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:46 crc kubenswrapper[4755]: E1210 15:23:46.009796 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ba232303-88d5-4931-b82e-34d9a0e5c06a\\\",\\\"systemUUID\\\":\\\"ebd59de0-c6b0-47c1-bc17-6f665dcf344d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:46Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.013652 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.013751 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.013763 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.013778 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.013787 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:46Z","lastTransitionTime":"2025-12-10T15:23:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.014773 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:46Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.028879 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:46Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:46 crc kubenswrapper[4755]: E1210 15:23:46.028911 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:23:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:23:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:23:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:23:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ba232303-88d5-4931-b82e-34d9a0e5c06a\\\",\\\"systemUUID\\\":\\\"ebd59de0-c6b0-47c1-bc17-6f665dcf344d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:46Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.032575 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.032604 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.032614 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.032628 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.032637 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:46Z","lastTransitionTime":"2025-12-10T15:23:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.040530 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wv8fh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d150a22e-c59a-4376-a5c8-db4085ea0be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfw9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wv8fh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:46Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:46 crc kubenswrapper[4755]: 
E1210 15:23:46.044636 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:23:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:23:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:23:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:23:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider 
started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d
34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ba232303-88d5-4931-b82e-34d9a0e5c06a\\\",\\\"systemUUID\\\":\\\"ebd59de0-c6b0-47c1-bc17-6f665dcf344d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:46Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.048132 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.048168 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:46 
crc kubenswrapper[4755]: I1210 15:23:46.048177 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.048191 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.048201 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:46Z","lastTransitionTime":"2025-12-10T15:23:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:23:46 crc kubenswrapper[4755]: E1210 15:23:46.063505 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:23:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:23:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:23:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:23:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ba232303-88d5-4931-b82e-34d9a0e5c06a\\\",\\\"systemUUID\\\":\\\"ebd59de0-c6b0-47c1-bc17-6f665dcf344d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:46Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:46 crc kubenswrapper[4755]: E1210 15:23:46.063689 4755 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.068782 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fa03060-a6e0-4aad-9aa1-43b1a0d00c85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a823333ed5cb5de3988d25e50e4b7a0f9071c76fb39c22760a4f2acd5eb455d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d828f3f3b90a2dcb1c1908e6a686368af5b0d715b3251e4b8fcf3c8818ec75a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://172ab3fce08c8ba4095ff4095c89364a778644752bd7bb6c178d6e3ebcface69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfbb8e350ad18b78a6bcf6c
fa4eb8f2fad90f970dba08a8b1b2026af6f255e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4b2bf0c88a16fd6b4b45a20730a92292895dc9f29ce756d347c302b25a8612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55da309288bc2c7ff1f22da0569d18155796700c200fe3ed508f6f8179756865\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55da309288bc2c7ff1f22da0569d18155796700c200fe3ed508f6f8179756865\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219c531368bb5379f3726d94ab0afc0f33eedbf91b0a4dbbe0757c6e232f79ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://219c531368bb5379f3726d94ab0afc0f33eedbf91b0a4dbbe0757c6e232f79ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f9da8f3d4f85ec83c16b71ecd983ff4f48049eb8e73461a2b46c
4f66d71ed06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9da8f3d4f85ec83c16b71ecd983ff4f48049eb8e73461a2b46c4f66d71ed06e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:46Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.069335 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.069356 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.069370 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.069386 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.069395 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:46Z","lastTransitionTime":"2025-12-10T15:23:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.082717 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:46Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.100026 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:46Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.111009 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wv8fh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d150a22e-c59a-4376-a5c8-db4085ea0be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58d106c5d1b9525ec821d009ca556449cd8d7f0e1b9c8ec7dd969df996e73625\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfw9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wv8fh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:46Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.126111 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e6db97-7e22-4fec-924e-20e90f463887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef3871ddc16d2fc1b89a6f120390cf066424bd9ed4cca98f8dea4430c40c6c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08895c3ab20248c4bd70ccfae6442bdf8e76a602e605b043f0f00166624ada17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b2467645116856afa1e1e37501c7b453b03da193a96d1075886b85839a86ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16e58653dca08c9b33671279553975bca9507ffeeefea161119cec624813f167\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea18ef6c319336321d97c2a55449903b837b5a47da785da3ff5308244751943\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:46Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.138584 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:46Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.154841 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b1da51a-99c9-4f8e-920d-ce0973af6370\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://375efb27cf6bef06a6a0ffc8f6b2963e1b9ffbca18b3c8e7b402bc552a411f6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://375efb27cf6bef06a6a0ffc8f6b2963e1b9ffbca18b3c8e7b402bc552a411f6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6lfvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:46Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.166989 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zl2tx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"796da6d5-6ccd-4786-a03e-9a8e47a55031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de63a123c46563bd8cd07e669d192bc8b019a889b9bdb7af1b988872c8f1fc48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-ku
bernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzg4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zl2tx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:46Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.171905 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.171943 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.171952 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.171968 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.171978 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:46Z","lastTransitionTime":"2025-12-10T15:23:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.179692 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b132a8b9-1c99-414d-8773-229bf36b305d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5a5a59e9f156fb791ec822c2d5efe3fc6ec0e84bfcb2b6f5da81396951984c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2mdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40c4bdaa23a60a665b8f565720d79b68cac62d40246be94fc6cd314b1bb3656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2mdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ggt8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:46Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.192318 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d089a8bddfa1f80d29011c7a6bab0a300f7dd44bdb2864f86951ebbb9ebea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:46Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.208595 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01ddc4056319db1f69268dcae192c3cb9db6c25284305803ae7588e59f77c346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae6bf174cdc9bde18d7c959e976454e73c1e67642f0158d365b79582f63f3cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:46Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.224853 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:46Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.237731 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aed6bb9-78c2-410b-9c58-b60ab22a7bd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70d014e7227746c46b30f8f5a1f307a422d2fa0d4b98d98bfe5ba6217223489e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://104d730ea666ffd2d639ba7fc8d1ac5b444586788a62656fd260cb249f82b1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09dbec85547c7170bb9551e5657876814d48528e3047daf3547711a563d2b6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8bf1de0d8bdc0a20bc42ba5097d849ae0f507e5fbc18fb17b4ff3650e46ff0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\"
,\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:46Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.250913 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n8c6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b56ae78-835a-45da-bc46-5adff2bdf9fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b9a1485f6d0c0e53fd3505623b2788476bc00abea2f11650386eac40e449bcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9a1485f6d0c0e53fd3505623b2788476bc00abea2f11650386eac40e449bcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n8c6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:46Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:46 crc 
kubenswrapper[4755]: I1210 15:23:46.274343 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.274378 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.274388 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.274403 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.274416 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:46Z","lastTransitionTime":"2025-12-10T15:23:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.376503 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.376555 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.376568 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.376586 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.376598 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:46Z","lastTransitionTime":"2025-12-10T15:23:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.479760 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.479800 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.479811 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.479830 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.479843 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:46Z","lastTransitionTime":"2025-12-10T15:23:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.582354 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.582398 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.582406 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.582418 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.582427 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:46Z","lastTransitionTime":"2025-12-10T15:23:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.684806 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.684845 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.684853 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.684866 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.684875 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:46Z","lastTransitionTime":"2025-12-10T15:23:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.787608 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.787860 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.787874 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.787896 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.787917 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:46Z","lastTransitionTime":"2025-12-10T15:23:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.889930 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.889969 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.889982 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.889997 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.890007 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:46Z","lastTransitionTime":"2025-12-10T15:23:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.980424 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" event={"ID":"4b1da51a-99c9-4f8e-920d-ce0973af6370","Type":"ContainerStarted","Data":"e9ba47683cc23d5b531a45f0658b6a9378650400b35b5372642b0430a5ac503f"} Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.980492 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" event={"ID":"4b1da51a-99c9-4f8e-920d-ce0973af6370","Type":"ContainerStarted","Data":"3a75407e83508af9adebb09c6466a966dd791d29f690c539656f9bd3396d7031"} Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.980506 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" event={"ID":"4b1da51a-99c9-4f8e-920d-ce0973af6370","Type":"ContainerStarted","Data":"0e547993b9f2fa37bf924f909c47b62eb0cc02b659596b1cad9bbc42fdde8f9d"} Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.980517 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" event={"ID":"4b1da51a-99c9-4f8e-920d-ce0973af6370","Type":"ContainerStarted","Data":"6eb065dc6c0cc8914cb95553eb2683d894fb9a4e78ce7fac73bcce8d7f6cced9"} Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.980527 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" event={"ID":"4b1da51a-99c9-4f8e-920d-ce0973af6370","Type":"ContainerStarted","Data":"602b4e49987fa2cc6b54b822110aececbdddaf2bce8f27cce4ed906768d45791"} Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.980546 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" event={"ID":"4b1da51a-99c9-4f8e-920d-ce0973af6370","Type":"ContainerStarted","Data":"335bcab3a79f09796e97560365e1211fb30ddf288f4773c05ab353197add4365"} Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.982477 4755 generic.go:334] "Generic (PLEG): container finished" podID="8b56ae78-835a-45da-bc46-5adff2bdf9fd" containerID="8a532015d196c588ad3dd6c43d9fa3772ba2a42bd1b2231f830b60adb0addab0" exitCode=0 Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.982531 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-additional-cni-plugins-n8c6s" event={"ID":"8b56ae78-835a-45da-bc46-5adff2bdf9fd","Type":"ContainerDied","Data":"8a532015d196c588ad3dd6c43d9fa3772ba2a42bd1b2231f830b60adb0addab0"} Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.991776 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.991811 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.991823 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.991840 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:46 crc kubenswrapper[4755]: I1210 15:23:46.991851 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:46Z","lastTransitionTime":"2025-12-10T15:23:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.001861 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-qnmst"] Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.002290 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-qnmst" Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.011181 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e6db97-7e22-4fec-924e-20e90f463887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef3871ddc16d2fc1b89a6f120390cf066424bd9ed4cca98f8dea4430c40c6c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08895c3ab20248c4bd70ccfae6442bdf8e76a602e605b043f0f00166624ada17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b2467645116856afa1e1e37501c7b453b03da193a96d1075886b85839a86ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16e58653dca08c9b33671279553975bca9507ffeeefea161119cec624813f167\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea18ef6c319336321d97c2a55449903b837b5a47da785da3ff5308244751943\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:46Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.013170 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.013180 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.014134 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.014305 4755 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.033239 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:47Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.054736 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b1da51a-99c9-4f8e-920d-ce0973af6370\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://375efb27cf6bef06a6a0ffc8f6b2963e1b9ffbca18b3c8e7b402bc552a411f6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://375efb27cf6bef06a6a0ffc8f6b2963e1b9ffbca18b3c8e7b402bc552a411f6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6lfvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:47Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.066532 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/1693fcf1-bef4-4f82-8dd8-f1797b03f5e2-serviceca\") pod \"node-ca-qnmst\" (UID: \"1693fcf1-bef4-4f82-8dd8-f1797b03f5e2\") " pod="openshift-image-registry/node-ca-qnmst" Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.066657 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1693fcf1-bef4-4f82-8dd8-f1797b03f5e2-host\") pod \"node-ca-qnmst\" (UID: \"1693fcf1-bef4-4f82-8dd8-f1797b03f5e2\") " pod="openshift-image-registry/node-ca-qnmst" Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.066757 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hzdh\" (UniqueName: \"kubernetes.io/projected/1693fcf1-bef4-4f82-8dd8-f1797b03f5e2-kube-api-access-2hzdh\") pod \"node-ca-qnmst\" (UID: \"1693fcf1-bef4-4f82-8dd8-f1797b03f5e2\") " pod="openshift-image-registry/node-ca-qnmst" Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.068506 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d089a8bddfa1f80d29011c7a6bab0a300f7dd44bdb2864f86951ebbb9ebea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:47Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.082162 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01ddc4056319db1f69268dcae192c3cb9db6c25284305803ae7588e59f77c346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae6bf174cdc9bde18d7c959e976454e73c1e67642f0158d365b79582f63f3cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:47Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.096088 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.096128 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.096139 4755 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.096181 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.096193 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:47Z","lastTransitionTime":"2025-12-10T15:23:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.098106 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:47Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.113947 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zl2tx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"796da6d5-6ccd-4786-a03e-9a8e47a55031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de63a123c46563bd8cd07e669d192bc8b019a889b9bdb7af1b988872c8f1fc48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzg4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zl2tx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:47Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.125721 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b132a8b9-1c99-414d-8773-229bf36b305d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5a5a59e9f156fb791ec822c2d5efe3fc6ec0e84bfcb2b6f5da81396951984c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2mdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-
o://a40c4bdaa23a60a665b8f565720d79b68cac62d40246be94fc6cd314b1bb3656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2mdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ggt8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:47Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.142115 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aed6bb9-78c2-410b-9c58-b60ab22a7bd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70d014e7227746c46b30f8f5a1f307a422d2fa0d4b98d98bfe5ba6217223489e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://104d730ea666ffd2d639ba7fc8d1ac5b444586788a62656fd260cb249f82b1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09dbec85547c7170bb9551e5657876814d48528e3047daf3547711a563d2b6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8bf1de0d8bdc0a20bc42ba5097d849ae0f507e5fbc18fb17b4ff3650e46ff0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\"
,\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:47Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.156339 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n8c6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b56ae78-835a-45da-bc46-5adff2bdf9fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b9a1485f6d0c0e53fd3505623b2788476bc00abea2f11650386eac40e449bcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9a1485f6d0c0e53fd3505623b2788476bc00abea2f11650386eac40e449bcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a532015d196c588ad3dd6c43d9fa3772ba2a42bd1b2231f830b60adb0addab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a532015d196c588ad3dd6c43d9fa3772ba2a42bd1b2231f830b60adb0addab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-
10T15:23:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n8c6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:47Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.169538 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1693fcf1-bef4-4f82-8dd8-f1797b03f5e2-host\") pod \"node-ca-qnmst\" (UID: \"1693fcf1-bef4-4f82-8dd8-f1797b03f5e2\") " pod="openshift-image-registry/node-ca-qnmst" Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.169640 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1693fcf1-bef4-4f82-8dd8-f1797b03f5e2-host\") pod \"node-ca-qnmst\" (UID: \"1693fcf1-bef4-4f82-8dd8-f1797b03f5e2\") " pod="openshift-image-registry/node-ca-qnmst" Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.170295 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hzdh\" (UniqueName: \"kubernetes.io/projected/1693fcf1-bef4-4f82-8dd8-f1797b03f5e2-kube-api-access-2hzdh\") pod \"node-ca-qnmst\" (UID: \"1693fcf1-bef4-4f82-8dd8-f1797b03f5e2\") " pod="openshift-image-registry/node-ca-qnmst" Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.170405 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/1693fcf1-bef4-4f82-8dd8-f1797b03f5e2-serviceca\") pod \"node-ca-qnmst\" (UID: \"1693fcf1-bef4-4f82-8dd8-f1797b03f5e2\") " pod="openshift-image-registry/node-ca-qnmst" Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.171775 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/1693fcf1-bef4-4f82-8dd8-f1797b03f5e2-serviceca\") pod \"node-ca-qnmst\" (UID: \"1693fcf1-bef4-4f82-8dd8-f1797b03f5e2\") " pod="openshift-image-registry/node-ca-qnmst" Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.175370 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fa03060-a6e0-4aad-9aa1-43b1a0d00c85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a823333ed5cb5de3988d25e50e4b7a0f9071c76fb39c22760a4f2acd5eb455d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d828f3f3b90a2dcb1c1908e6a686368af5b0d715b3251e4b8fcf3c8818ec75a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://172ab3fce08c8ba4095ff4095c89364a778644752bd7bb6c178d6e3ebcface69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfbb8e350ad18b78a6bcf6cfa4eb8f2fad90f97
0dba08a8b1b2026af6f255e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4b2bf0c88a16fd6b4b45a20730a92292895dc9f29ce756d347c302b25a8612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55da309288bc2c7ff1f22da0569d18155796700c200fe3ed508f6f8179756865\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55da309288bc2c7ff1f22da0569d18155796700c200fe3ed508f6f8179756865\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219c531368bb5379f3726d94ab0afc0f33eedbf91b0a4dbbe0757c6e232f79ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://219c531368bb5379f3726d94ab0afc0f33eedbf91b0a4dbbe0757c6e232f79ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f9da8f3d4f85ec83c16b71ecd983ff4f48049eb8e73461a2b46c4f66d71ed06e\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9da8f3d4f85ec83c16b71ecd983ff4f48049eb8e73461a2b46c4f66d71ed06e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:47Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.190732 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:47Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.195105 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hzdh\" (UniqueName: \"kubernetes.io/projected/1693fcf1-bef4-4f82-8dd8-f1797b03f5e2-kube-api-access-2hzdh\") pod \"node-ca-qnmst\" (UID: \"1693fcf1-bef4-4f82-8dd8-f1797b03f5e2\") " pod="openshift-image-registry/node-ca-qnmst" Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.200088 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.200139 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.200150 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.200166 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.200176 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:47Z","lastTransitionTime":"2025-12-10T15:23:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.210060 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:47Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.221032 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wv8fh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d150a22e-c59a-4376-a5c8-db4085ea0be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58d106c5d1b9525ec821d009ca556449cd8d7f0e1b9c8ec7dd969df996e73625\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfw9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wv8fh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:47Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.232208 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wv8fh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d150a22e-c59a-4376-a5c8-db4085ea0be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58d106c5d1b9525ec821d009ca556449cd8d7f0e1b9c8ec7dd969df996e73625\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfw9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wv8fh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:47Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.246615 4755 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.254333 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fa03060-a6e0-4aad-9aa1-43b1a0d00c85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a823333ed5cb5de3988d25e50e4b7a0f9071c76fb39c22760a4f2acd5eb455d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d828f3f3b90a2dcb1c1908e6a686368af5b0d715b3251e4b8fcf3c8818ec75a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://172ab3fce08c8ba4095ff4095c89364a778644752bd7bb6c178d6e3ebcface69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfbb8e350ad18b78a6bcf6cfa4eb8f2fad90f97
0dba08a8b1b2026af6f255e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4b2bf0c88a16fd6b4b45a20730a92292895dc9f29ce756d347c302b25a8612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55da309288bc2c7ff1f22da0569d18155796700c200fe3ed508f6f8179756865\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55da309288bc2c7ff1f22da0569d18155796700c200fe3ed508f6f8179756865\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219c531368bb5379f3726d94ab0afc0f33eedbf91b0a4dbbe0757c6e232f79ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://219c531368bb5379f3726d94ab0afc0f33eedbf91b0a4dbbe0757c6e232f79ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f9da8f3d4f85ec83c16b71ecd983ff4f48049eb8e73461a2b46c4f66d71ed06e\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9da8f3d4f85ec83c16b71ecd983ff4f48049eb8e73461a2b46c4f66d71ed06e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:47Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.267094 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:47Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.279459 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:47Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.298710 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b1da51a-99c9-4f8e-920d-ce0973af6370\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://375efb27cf6bef06a6a0ffc8f6b2963e1b9ffbca18b3c8e7b402bc552a411f6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://375efb27cf6bef06a6a0ffc8f6b2963e1b9ffbca18b3c8e7b402bc552a411f6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6lfvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:47Z 
is after 2025-08-24T17:21:41Z"
Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.302859 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.302922 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.302931 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.302950 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.302960 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:47Z","lastTransitionTime":"2025-12-10T15:23:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.310685 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e6db97-7e22-4fec-924e-20e90f463887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef3871ddc16d2fc1b89a6f120390cf066424bd9ed4cca98f8dea4430c40c6c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08895c3ab20248c4bd70ccfae6442bdf8e76a602e605b043f0f00166624ada17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc47827
4c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b2467645116856afa1e1e37501c7b453b03da193a96d1075886b85839a86ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16e58653dca08c9b33671279553975bca9507ffeeefea161119cec624813f167\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea18ef6c319336321d97c2a55449903b837b5a47da785da3ff5308244751943\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\
\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:47Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.321117 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:47Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.322245 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-qnmst" Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.334591 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:47Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:47 crc kubenswrapper[4755]: W1210 15:23:47.342339 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1693fcf1_bef4_4f82_8dd8_f1797b03f5e2.slice/crio-03916562c3456627c24d79f50221a6fc4d6b03e90ae5e16571ce0065bea7c2c1 WatchSource:0}: Error finding container 03916562c3456627c24d79f50221a6fc4d6b03e90ae5e16571ce0065bea7c2c1: Status 404 returned error can't find the container with id 03916562c3456627c24d79f50221a6fc4d6b03e90ae5e16571ce0065bea7c2c1 Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.354291 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zl2tx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"796da6d5-6ccd-4786-a03e-9a8e47a55031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de63a123c46563bd8cd07e669d192bc8b019a889b9bdb7af1b988872c8f1fc48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzg4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zl2tx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:47Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.365910 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b132a8b9-1c99-414d-8773-229bf36b305d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5a5a59e9f156fb791ec822c2d5efe3fc6ec0e84bfcb2b6f5da81396951984c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2mdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40c4bdaa23a60a665b8f565720d79b68cac62d40246be94fc6cd314b1bb3656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2mdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-ggt8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:47Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.381003 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d089a8bddfa1f80d29011c7a6bab0a300f7dd44bdb2864f86951ebbb9ebea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:47Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.399309 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01ddc4056319db1f69268dcae192c3cb9db6c25284305803ae7588e59f77c346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae6bf174cdc9bde18d7c959e976454e73c1e67642f0158d365b79582f63f3cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:47Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.406969 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.407022 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.407039 4755 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.407056 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.407070 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:47Z","lastTransitionTime":"2025-12-10T15:23:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.419911 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aed6bb9-78c2-410b-9c58-b60ab22a7bd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70d014e7227746c46b30f8f5a1f307a422d2fa0d4b98d98bfe5ba6217223489e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://104d730ea666ffd2d639ba7fc8d1ac5b444586788a62656fd260cb249f82b1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resou
rces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09dbec85547c7170bb9551e5657876814d48528e3047daf3547711a563d2b6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8bf1de0d8bdc0a20bc42ba5097d849ae0f507e5fbc18fb17b4ff3650e46ff0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:47Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.435567 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n8c6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b56ae78-835a-45da-bc46-5adff2bdf9fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b9a1485f6d0c0e53fd3505623b2788476bc00abea2f11650386eac40e449bcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9a1485f6d0c0e53fd3505623b2788476bc00abea2f11650386eac40e449bcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a532015d196c588ad3dd6c43d9fa3772ba2a42bd1b2231f830b60adb0addab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a532015d196c588ad3dd6c43d9fa3772ba2a42bd1b2231f830b60adb0addab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":
{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n8c6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:47Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.458763 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qnmst" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1693fcf1-bef4-4f82-8dd8-f1797b03f5e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hzdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qnmst\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-12-10T15:23:47Z is after 2025-08-24T17:21:41Z"
Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.472891 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.472999 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.473020 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.473040 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.473065 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 10 15:23:47 crc kubenswrapper[4755]: E1210 15:23:47.473131 4755 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Dec 10 15:23:47 crc kubenswrapper[4755]: E1210 15:23:47.473172 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-10 15:23:51.473159895 +0000 UTC m=+28.074043527 (durationBeforeRetry 4s).
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Dec 10 15:23:47 crc kubenswrapper[4755]: E1210 15:23:47.473450 4755 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 10 15:23:47 crc kubenswrapper[4755]: E1210 15:23:47.473494 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-10 15:23:51.473486824 +0000 UTC m=+28.074370446 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 10 15:23:47 crc kubenswrapper[4755]: E1210 15:23:47.473547 4755 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 10 15:23:47 crc kubenswrapper[4755]: E1210 15:23:47.473559 4755 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 10 15:23:47 crc kubenswrapper[4755]: E1210 15:23:47.473568 4755 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 10 15:23:47 crc kubenswrapper[4755]: E1210 15:23:47.473767 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-10 15:23:51.473757391 +0000 UTC m=+28.074641023 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 10 15:23:47 crc kubenswrapper[4755]: E1210 15:23:47.474040 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 15:23:51.474011248 +0000 UTC m=+28.074894880 (durationBeforeRetry 4s).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:23:47 crc kubenswrapper[4755]: E1210 15:23:47.474299 4755 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 10 15:23:47 crc kubenswrapper[4755]: E1210 15:23:47.474340 4755 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 10 15:23:47 crc kubenswrapper[4755]: E1210 15:23:47.474364 4755 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 15:23:47 crc kubenswrapper[4755]: E1210 15:23:47.474424 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-10 15:23:51.474408278 +0000 UTC m=+28.075291910 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.513813 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.514714 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.514767 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.514788 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.514800 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:47Z","lastTransitionTime":"2025-12-10T15:23:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.617436 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.617516 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.617530 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.617550 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.617563 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:47Z","lastTransitionTime":"2025-12-10T15:23:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.720171 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.720215 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.720227 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.720245 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.720257 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:47Z","lastTransitionTime":"2025-12-10T15:23:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.756917 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.756989 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 15:23:47 crc kubenswrapper[4755]: E1210 15:23:47.757054 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 15:23:47 crc kubenswrapper[4755]: E1210 15:23:47.757114 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.757159 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 15:23:47 crc kubenswrapper[4755]: E1210 15:23:47.757258 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.822450 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.822515 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.822529 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.822542 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.822552 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:47Z","lastTransitionTime":"2025-12-10T15:23:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.926142 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.926187 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.926199 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.926215 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.926225 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:47Z","lastTransitionTime":"2025-12-10T15:23:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.988262 4755 generic.go:334] "Generic (PLEG): container finished" podID="8b56ae78-835a-45da-bc46-5adff2bdf9fd" containerID="417e326f541e04df27d540c14ec572bb8f1d749a9e3dc88f483241d69f7d0679" exitCode=0 Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.988587 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-n8c6s" event={"ID":"8b56ae78-835a-45da-bc46-5adff2bdf9fd","Type":"ContainerDied","Data":"417e326f541e04df27d540c14ec572bb8f1d749a9e3dc88f483241d69f7d0679"} Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.989952 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-qnmst" event={"ID":"1693fcf1-bef4-4f82-8dd8-f1797b03f5e2","Type":"ContainerStarted","Data":"f8fd6a2ab2b9557574951c0ebbccd663fe576262b0de2c3c655427c977f62d82"} Dec 10 15:23:47 crc kubenswrapper[4755]: I1210 15:23:47.989985 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-qnmst" event={"ID":"1693fcf1-bef4-4f82-8dd8-f1797b03f5e2","Type":"ContainerStarted","Data":"03916562c3456627c24d79f50221a6fc4d6b03e90ae5e16571ce0065bea7c2c1"} Dec 10 15:23:48 crc kubenswrapper[4755]: I1210 15:23:48.010254 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n8c6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b56ae78-835a-45da-bc46-5adff2bdf9fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b9a1485f6d0c0e53fd3505623b2788476bc00abea2f11650386eac40e449bcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9a1485f6d0c0e53fd3505623b2788476bc00abea2f11650386eac40e449bcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a532015d196c588ad3dd6c43d9fa3772ba2a42bd1b2231f830b60adb0addab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a532015d196c588ad3dd6c43d9fa3772ba2a42bd1b2231f830b60adb0addab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://417e326f541e04df27d540c14ec572bb8f1d749a9e3dc88f483241d69f7d0679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://417e326f541e04df27d540c14ec572bb8f1d749a9e3dc88f483241d69f7d0679\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n8c6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:48Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:48 crc kubenswrapper[4755]: I1210 15:23:48.021606 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qnmst" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1693fcf1-bef4-4f82-8dd8-f1797b03f5e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hzdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qnmst\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:48Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:48 crc kubenswrapper[4755]: I1210 15:23:48.027905 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:48 crc kubenswrapper[4755]: I1210 15:23:48.027932 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:48 crc kubenswrapper[4755]: I1210 15:23:48.027941 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:48 crc kubenswrapper[4755]: I1210 15:23:48.027956 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:48 crc kubenswrapper[4755]: I1210 15:23:48.027966 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:48Z","lastTransitionTime":"2025-12-10T15:23:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:23:48 crc kubenswrapper[4755]: I1210 15:23:48.040568 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aed6bb9-78c2-410b-9c58-b60ab22a7bd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70d014e7227746c46b30f8f5a1f307a422d2fa0d4b98d98bfe5ba6217223489e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://104d730ea666ffd2d639ba7fc8d1ac5b444586788a62656fd260cb249f82b1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09dbec85547c7170bb9551e5657876814d48528e3047daf3547711a563d2b6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8bf1de0d8bdc0a20bc42ba5097d849ae0f507e5fbc18fb17b4ff3650e46ff0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:48Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:48 crc kubenswrapper[4755]: I1210 15:23:48.052422 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:48Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:48 crc kubenswrapper[4755]: I1210 15:23:48.064369 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:48Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:48 crc kubenswrapper[4755]: I1210 15:23:48.074073 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wv8fh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d150a22e-c59a-4376-a5c8-db4085ea0be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58d106c5d1b9525ec821d009ca556449cd8d7f0e1b9c8ec7dd969df996e73625\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfw9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wv8fh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:48Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:48 crc kubenswrapper[4755]: I1210 15:23:48.094515 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fa03060-a6e0-4aad-9aa1-43b1a0d00c85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a823333ed5cb5de3988d25e50e4b7a0f9071c76fb39c22760a4f2acd5eb455d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d828f3f3b90a2dcb1c1908e6a686368af5b0d715b3251e4b8fcf3c8818ec75a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://172ab3fce08c8ba4095ff4095c89364a778644752bd7bb6c178d6e3ebcface69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfbb8e350ad18b78a6bcf6cfa4eb8f2fad90f970dba08a8b1b2026af6f255e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4b2bf0c88a16fd6b4b45a20730a92292895dc9f29ce756d347c302b25a8612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55da309288bc2c7ff1f22da0569d18155796700c200fe3ed508f6f8179756865\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55da309288bc2c7ff1f22da0569d18155796700c200fe3ed508f6f8179756865\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219c531368bb5379f3726d94ab0afc0f33eedbf91b0a4dbbe0757c6e232f79ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://219c531368bb5379f3726d94ab0afc0f33eedbf91b0a4dbbe0757c6e232f79ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f9da8f3d4f85ec83c16b71ecd983ff4f48049eb8e73461a2b46c4f66d71ed06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9da8f3d4f85ec83c16b71ecd983ff4f48049eb8e73461a2b46c4f66d71ed06e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:48Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:48 crc kubenswrapper[4755]: I1210 15:23:48.108087 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e6db97-7e22-4fec-924e-20e90f463887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef3871ddc16d2fc1b89a6f120390cf066424bd9ed4cca98f8dea4430c40c6c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08895c3ab20248c4bd70ccfae6442bdf8e76a602e605b043f0f00166624ada17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b2467645116856afa1e1e37501c7b453b03da193a96d1075886b85839a86ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16e58653dca08c9b33671279553975bca9507ffeeefea161119cec624813f167\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea18ef6c319336321d97c2a55449903b837b5a47da785da3ff5308244751943\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:48Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:48 crc kubenswrapper[4755]: I1210 15:23:48.121229 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:48Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:48 crc kubenswrapper[4755]: I1210 15:23:48.130782 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:48 crc kubenswrapper[4755]: I1210 15:23:48.130830 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:48 crc kubenswrapper[4755]: I1210 15:23:48.130842 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:48 crc kubenswrapper[4755]: I1210 15:23:48.130861 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:48 crc kubenswrapper[4755]: I1210 15:23:48.130871 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:48Z","lastTransitionTime":"2025-12-10T15:23:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:23:48 crc kubenswrapper[4755]: I1210 15:23:48.138438 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b1da51a-99c9-4f8e-920d-ce0973af6370\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://375efb27cf6bef06a6a0ffc8f6b2963e1b9ffbca18b3c8e7b4
02bc552a411f6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://375efb27cf6bef06a6a0ffc8f6b2963e1b9ffbca18b3c8e7b402bc552a411f6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6lfvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:48Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:48 crc kubenswrapper[4755]: I1210 15:23:48.152177 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d089a8bddfa1f80d29011c7a6bab0a300f7dd44bdb2864f86951ebbb9ebea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:48Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:48 crc kubenswrapper[4755]: I1210 15:23:48.165095 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01ddc4056319db1f69268dcae192c3cb9db6c25284305803ae7588e59f77c346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae6bf174cdc9bde18d7c959e976454e73c1e67642f0158d365b79582f63f3cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:48Z is after 
2025-08-24T17:21:41Z" Dec 10 15:23:48 crc kubenswrapper[4755]: I1210 15:23:48.177050 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:48Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:48 crc kubenswrapper[4755]: I1210 15:23:48.192436 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zl2tx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"796da6d5-6ccd-4786-a03e-9a8e47a55031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de63a123c46563bd8cd07e669d192bc8b019a889b9bdb7af1b988872c8f1fc48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzg4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zl2tx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:48Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:48 crc kubenswrapper[4755]: I1210 15:23:48.206919 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b132a8b9-1c99-414d-8773-229bf36b305d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5a5a59e9f156fb791ec822c2d5efe3fc6ec0e84bfcb2b6f5da81396951984c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2mdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40c4bdaa23a60a665b8f565720d79b68cac62d40246be94fc6cd314b1bb3656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2mdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-ggt8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:48Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:48 crc kubenswrapper[4755]: I1210 15:23:48.229852 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fa03060-a6e0-4aad-9aa1-43b1a0d00c85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a823333ed5cb5de3988d25e50e4b7a0f9071c76fb39c22760a4f2acd5eb455d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d828f3f3b90a2dcb1c1908e6a686368af5b0d715b3251e4b8fcf3c8818ec75a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://172ab3fce08c8ba4095ff4095c89364a778644752bd7bb6c178d6e3ebcface69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfbb8e350ad18b78a6bcf6cfa4eb8f2fad90f970dba08a8b1b2026af6f255e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4b2bf0c88a16fd6b4b45a20730a92292895dc9f29ce756d347c302b25a8612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55da309288bc2c7ff1f22da0569d18155796700c200fe3ed508f6f8179756865\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55da309288bc2c7ff1f22da0569d18155796700c200fe3ed508f6f8179756865\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219c531368bb5379f3726d94ab0afc0f33eedbf91b0a4dbbe0757c6e232f79ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441e
cd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://219c531368bb5379f3726d94ab0afc0f33eedbf91b0a4dbbe0757c6e232f79ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f9da8f3d4f85ec83c16b71ecd983ff4f48049eb8e73461a2b46c4f66d71ed06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9da8f3d4f85ec83c16b71ecd983ff4f48049eb8e73461a2b46c4f66d71ed06e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:48Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:48 crc kubenswrapper[4755]: I1210 15:23:48.233061 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:48 crc kubenswrapper[4755]: I1210 15:23:48.233086 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:48 crc kubenswrapper[4755]: I1210 15:23:48.233094 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:48 crc kubenswrapper[4755]: I1210 15:23:48.233108 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:48 crc kubenswrapper[4755]: I1210 15:23:48.233118 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:48Z","lastTransitionTime":"2025-12-10T15:23:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:23:48 crc kubenswrapper[4755]: I1210 15:23:48.242833 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:48Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:48 crc kubenswrapper[4755]: I1210 15:23:48.255914 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:48Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:48 crc kubenswrapper[4755]: I1210 15:23:48.267054 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wv8fh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d150a22e-c59a-4376-a5c8-db4085ea0be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58d106c5d1b9525ec821d009ca556449cd8d7f0e1b9c8ec7dd969df996e73625\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfw9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wv8fh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:48Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:48 crc kubenswrapper[4755]: I1210 15:23:48.281859 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e6db97-7e22-4fec-924e-20e90f463887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef3871ddc16d2fc1b89a6f120390cf066424bd9ed4cca98f8dea4430c40c6c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08895c3ab20248c4bd70ccfae6442bdf8e76a602e605b043f0f00166624ada17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b2467645116856afa1e1e37501c7b453b03da193a96d1075886b85839a86ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16e58653dca08c9b33671279553975bca9507ffeeefea161119cec624813f167\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea18ef6c319336321d97c2a55449903b837b5a47da785da3ff5308244751943\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:48Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:48 crc kubenswrapper[4755]: I1210 15:23:48.302563 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:48Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:48 crc kubenswrapper[4755]: I1210 15:23:48.334640 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:48 crc kubenswrapper[4755]: I1210 15:23:48.334670 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:48 crc kubenswrapper[4755]: I1210 15:23:48.334679 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:48 crc kubenswrapper[4755]: I1210 15:23:48.334692 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:48 crc kubenswrapper[4755]: I1210 15:23:48.334701 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:48Z","lastTransitionTime":"2025-12-10T15:23:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:23:48 crc kubenswrapper[4755]: I1210 15:23:48.350166 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b1da51a-99c9-4f8e-920d-ce0973af6370\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://375efb27cf6bef06a6a0ffc8f6b2963e1b9ffbca18b3c8e7b4
02bc552a411f6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://375efb27cf6bef06a6a0ffc8f6b2963e1b9ffbca18b3c8e7b402bc552a411f6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6lfvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:48Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:48 crc kubenswrapper[4755]: I1210 15:23:48.380352 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b132a8b9-1c99-414d-8773-229bf36b305d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5a5a59e9f156fb791ec822c2d5efe3fc6ec0e84bfcb2b6f5da81396951984c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\
\":\\\"kube-api-access-m2mdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40c4bdaa23a60a665b8f565720d79b68cac62d40246be94fc6cd314b1bb3656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2mdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ggt8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:48Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:48 crc kubenswrapper[4755]: I1210 15:23:48.420944 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d089a8bddfa1f80d29011c7a6bab0a300f7dd44bdb2864f86951ebbb9ebea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:48Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:48 crc kubenswrapper[4755]: I1210 15:23:48.436447 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:48 crc kubenswrapper[4755]: I1210 15:23:48.436502 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:48 crc kubenswrapper[4755]: I1210 15:23:48.436511 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:48 crc kubenswrapper[4755]: I1210 15:23:48.436526 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:48 crc kubenswrapper[4755]: I1210 15:23:48.436537 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:48Z","lastTransitionTime":"2025-12-10T15:23:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:23:48 crc kubenswrapper[4755]: I1210 15:23:48.459822 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01ddc4056319db1f69268dcae192c3cb9db6c25284305803ae7588e59f77c346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae6bf174cdc9bde18d7c959e976454e73c1e67642f0158d365b79582f63f3cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:48Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:48 crc kubenswrapper[4755]: I1210 15:23:48.502039 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:48Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:48 crc kubenswrapper[4755]: I1210 15:23:48.538343 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:48 crc kubenswrapper[4755]: I1210 15:23:48.538384 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:48 crc kubenswrapper[4755]: I1210 15:23:48.538398 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:48 crc kubenswrapper[4755]: I1210 15:23:48.538415 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:48 crc kubenswrapper[4755]: I1210 15:23:48.538424 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:48Z","lastTransitionTime":"2025-12-10T15:23:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:23:48 crc kubenswrapper[4755]: I1210 15:23:48.540387 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zl2tx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"796da6d5-6ccd-4786-a03e-9a8e47a55031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de63a123c46563bd8cd07e669d192bc8b019a889b9bdb7af1b988872c8f1fc48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzg4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zl2tx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:48Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:48 crc kubenswrapper[4755]: I1210 15:23:48.581500 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aed6bb9-78c2-410b-9c58-b60ab22a7bd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70d014e7227746c46b30f8f5a1f307a422d2fa0d4b98d98bfe5ba6217223489e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://104d730ea666ffd2d639ba7fc8d1ac5b444586788a62656fd260cb249f82b1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09dbec85547c7170bb9551e5657876814d48528e3047daf3547711a563d2b6
ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8bf1de0d8bdc0a20bc42ba5097d849ae0f507e5fbc18fb17b4ff3650e46ff0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:48Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:48 crc kubenswrapper[4755]: I1210 15:23:48.620576 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n8c6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b56ae78-835a-45da-bc46-5adff2bdf9fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b9a1485f6d0c0e53fd3505623b2788476bc00abea2f11650386eac40e449bcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9a1485f6d0c0e53fd3505623b2788476bc00abea2f11650386eac40e449bcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a532015d196c588ad3dd6c43d9fa3772ba2a42bd1b2231f830b60adb0addab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a532015d196c588ad3dd6c43d9fa3772ba2a42bd1b2231f830b60adb0addab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://417e326f541e04df27d540c14ec572bb8f1d749a9e3dc88f483241d69f7d0679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://417e326f541e04df27d540c14ec572bb8f1d749a9e3dc88f483241d69f7d0679\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",
\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n8c6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:48Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:48 crc kubenswrapper[4755]: I1210 15:23:48.639955 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:48 crc kubenswrapper[4755]: I1210 15:23:48.639985 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:48 crc kubenswrapper[4755]: I1210 15:23:48.639994 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:48 crc kubenswrapper[4755]: I1210 15:23:48.640009 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:48 crc kubenswrapper[4755]: I1210 15:23:48.640021 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:48Z","lastTransitionTime":"2025-12-10T15:23:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:23:48 crc kubenswrapper[4755]: I1210 15:23:48.657825 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qnmst" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1693fcf1-bef4-4f82-8dd8-f1797b03f5e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8fd6a2ab2b9557574951c0ebbccd663fe576262b0de2c3c655427c977f62d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hzdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qnmst\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:48Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:48 crc kubenswrapper[4755]: I1210 15:23:48.742153 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:48 crc kubenswrapper[4755]: I1210 15:23:48.742192 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:48 crc kubenswrapper[4755]: I1210 15:23:48.742201 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:48 crc kubenswrapper[4755]: I1210 15:23:48.742218 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:48 crc kubenswrapper[4755]: I1210 15:23:48.742230 4755 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:48Z","lastTransitionTime":"2025-12-10T15:23:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:23:48 crc kubenswrapper[4755]: I1210 15:23:48.844293 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:48 crc kubenswrapper[4755]: I1210 15:23:48.844345 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:48 crc kubenswrapper[4755]: I1210 15:23:48.844355 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:48 crc kubenswrapper[4755]: I1210 15:23:48.844373 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:48 crc kubenswrapper[4755]: I1210 15:23:48.844385 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:48Z","lastTransitionTime":"2025-12-10T15:23:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:23:48 crc kubenswrapper[4755]: I1210 15:23:48.946163 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:48 crc kubenswrapper[4755]: I1210 15:23:48.946201 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:48 crc kubenswrapper[4755]: I1210 15:23:48.946209 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:48 crc kubenswrapper[4755]: I1210 15:23:48.946222 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:48 crc kubenswrapper[4755]: I1210 15:23:48.946233 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:48Z","lastTransitionTime":"2025-12-10T15:23:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:23:48 crc kubenswrapper[4755]: I1210 15:23:48.996044 4755 generic.go:334] "Generic (PLEG): container finished" podID="8b56ae78-835a-45da-bc46-5adff2bdf9fd" containerID="42867f7a1f6aceaf708ad650b7634bdafb3f850872c540d424916440014e7941" exitCode=0 Dec 10 15:23:48 crc kubenswrapper[4755]: I1210 15:23:48.996091 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-n8c6s" event={"ID":"8b56ae78-835a-45da-bc46-5adff2bdf9fd","Type":"ContainerDied","Data":"42867f7a1f6aceaf708ad650b7634bdafb3f850872c540d424916440014e7941"} Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.014441 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aed6bb9-78c2-410b-9c58-b60ab22a7bd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70d014e7227746c46b30f8f5a1f307a422d2fa0d4b98d98bfe5ba6217223489e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://104d730ea666ffd2d639ba7fc8d1ac5b444586788a62656fd260cb249f82b1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/stati
c-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09dbec85547c7170bb9551e5657876814d48528e3047daf3547711a563d2b6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8bf1de0d8bdc0a20bc42ba5097d849ae0f507e5fbc18fb17b4ff3650e46ff0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:49Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.035907 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n8c6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b56ae78-835a-45da-bc46-5adff2bdf9fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b9a1485f6d0c0e53fd3505623b2788476bc00abea2f11650386eac40e449bcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9a1485f6d0c0e53fd3505623b2788476bc00abea2f11650386eac40e449bcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a532015d196c588ad3dd6c43d9fa3772ba2a42bd1b2231f830b60adb0addab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a532015d196c588ad3dd6c43d9fa3772ba2a42bd1b2231f830b60adb0addab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://417e326f541e04df27d540c14ec572bb8f1d749a9e3dc88f483241d69f7d0679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://417e326f541e04df27d540c14ec572bb8f1d749a9e3dc88f483241d69f7d0679\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42867f7a1f6aceaf708ad650b7634bdafb3f850872c540d424916440014e7941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42867f7a1f6aceaf708ad650b7634bdafb3f850872c540d424916440014e7941\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"w
aiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n8c6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:49Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.048431 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.048483 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.048497 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.048514 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.048525 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:49Z","lastTransitionTime":"2025-12-10T15:23:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.050722 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qnmst" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1693fcf1-bef4-4f82-8dd8-f1797b03f5e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8fd6a2ab2b9557574951c0ebbccd663fe576262b0de2c3c655427c977f62d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hzdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qnmst\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:49Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.073678 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fa03060-a6e0-4aad-9aa1-43b1a0d00c85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a823333ed5cb5de3988d25e50e4b7a0f9071c76fb39c22760a4f2acd5eb455d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d828f3f3b90a2dcb1c1908e6a686368af5b0d715b3251e4b8fcf3c8818ec75a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://172ab3fce08c8ba4095ff4095c89364a778644752bd7bb6c178d6e3ebcface69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfbb8e350ad18b78a6bcf6cfa4eb8f2fad90f97
0dba08a8b1b2026af6f255e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4b2bf0c88a16fd6b4b45a20730a92292895dc9f29ce756d347c302b25a8612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55da309288bc2c7ff1f22da0569d18155796700c200fe3ed508f6f8179756865\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55da309288bc2c7ff1f22da0569d18155796700c200fe3ed508f6f8179756865\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219c531368bb5379f3726d94ab0afc0f33eedbf91b0a4dbbe0757c6e232f79ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://219c531368bb5379f3726d94ab0afc0f33eedbf91b0a4dbbe0757c6e232f79ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f9da8f3d4f85ec83c16b71ecd983ff4f48049eb8e73461a2b46c4f66d71ed06e\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9da8f3d4f85ec83c16b71ecd983ff4f48049eb8e73461a2b46c4f66d71ed06e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:49Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.091437 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:49Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.103977 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:49Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.115011 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wv8fh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d150a22e-c59a-4376-a5c8-db4085ea0be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58d106c5d1b9525ec821d009ca556449cd8d7f0e1b9c8ec7dd969df996e73625\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfw9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wv8fh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:49Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.130601 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e6db97-7e22-4fec-924e-20e90f463887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef3871ddc16d2fc1b89a6f120390cf066424bd9ed4cca98f8dea4430c40c6c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08895c3ab20248c4bd70ccfae6442bdf8e76a602e605b043f0f00166624ada17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b2467645116856afa1e1e37501c7b453b03da193a96d1075886b85839a86ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-1
2-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16e58653dca08c9b33671279553975bca9507ffeeefea161119cec624813f167\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea18ef6c319336321d97c2a55449903b837b5a47da785da3ff5308244751943\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:49Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.142428 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:49Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.150519 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.150555 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.150566 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.150580 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.150591 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:49Z","lastTransitionTime":"2025-12-10T15:23:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.161347 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b1da51a-99c9-4f8e-920d-ce0973af6370\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://375efb27cf6bef06a6a0ffc8f6b2963e1b9ffbca18b3c8e7b4
02bc552a411f6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://375efb27cf6bef06a6a0ffc8f6b2963e1b9ffbca18b3c8e7b402bc552a411f6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6lfvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:49Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.172891 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d089a8bddfa1f80d29011c7a6bab0a300f7dd44bdb2864f86951ebbb9ebea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:49Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.186233 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01ddc4056319db1f69268dcae192c3cb9db6c25284305803ae7588e59f77c346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae6bf174cdc9bde18d7c959e976454e73c1e67642f0158d365b79582f63f3cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:49Z is after 
2025-08-24T17:21:41Z" Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.212996 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:49Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.237452 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zl2tx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"796da6d5-6ccd-4786-a03e-9a8e47a55031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de63a123c46563bd8cd07e669d192bc8b019a889b9bdb7af1b988872c8f1fc48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzg4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zl2tx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:49Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.252724 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.252778 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.252789 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.252808 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.252821 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:49Z","lastTransitionTime":"2025-12-10T15:23:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.270516 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b132a8b9-1c99-414d-8773-229bf36b305d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5a5a59e9f156fb791ec822c2d5efe3fc6ec0e84bfcb2b6f5da81396951984c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2mdm\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40c4bdaa23a60a665b8f565720d79b68cac62d40246be94fc6cd314b1bb3656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2mdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ggt8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:49Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.280111 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.283771 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.310306 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fa03060-a6e0-4aad-9aa1-43b1a0d00c85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a823333ed5cb5de3988d25e50e4b7a0f9071c76fb39c22760a4f2acd5eb455d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d828f3f3b90a2dcb1c1908e6a686368af5b0d715b3251e4b8fcf3c8818ec75a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://172ab3fce08c8ba4095ff4095c89364a778644752bd7bb6c178d6e3ebcface69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfbb8e350ad18b78a6bcf6cfa4eb8f2fad90f97
0dba08a8b1b2026af6f255e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4b2bf0c88a16fd6b4b45a20730a92292895dc9f29ce756d347c302b25a8612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55da309288bc2c7ff1f22da0569d18155796700c200fe3ed508f6f8179756865\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55da309288bc2c7ff1f22da0569d18155796700c200fe3ed508f6f8179756865\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219c531368bb5379f3726d94ab0afc0f33eedbf91b0a4dbbe0757c6e232f79ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://219c531368bb5379f3726d94ab0afc0f33eedbf91b0a4dbbe0757c6e232f79ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f9da8f3d4f85ec83c16b71ecd983ff4f48049eb8e73461a2b46c4f66d71ed06e\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9da8f3d4f85ec83c16b71ecd983ff4f48049eb8e73461a2b46c4f66d71ed06e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:49Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.339591 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:49Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.354619 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.354651 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.354659 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.354671 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.354679 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:49Z","lastTransitionTime":"2025-12-10T15:23:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.380151 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:49Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.422035 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wv8fh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d150a22e-c59a-4376-a5c8-db4085ea0be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58d106c5d1b9525ec821d009ca556449cd8d7f0e1b9c8ec7dd969df996e73625\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfw9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wv8fh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:49Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.452068 4755 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.456611 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.456647 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.456658 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.456673 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.456685 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:49Z","lastTransitionTime":"2025-12-10T15:23:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.482139 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e6db97-7e22-4fec-924e-20e90f463887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef3871ddc16d2fc1b89a6f120390cf066424bd9ed4cca98f8dea4430c40c6c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":
\\\"cri-o://08895c3ab20248c4bd70ccfae6442bdf8e76a602e605b043f0f00166624ada17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b2467645116856afa1e1e37501c7b453b03da193a96d1075886b85839a86ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16e58653dca08c9b33671279553975bca9507ffeeefea161119cec624813f167\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea18ef6c319336321d97c2a55449903b837b5a47da785da3ff5308244751943\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597
126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:49Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.520936 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:49Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.559290 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.559326 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.559339 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.559354 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.559369 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:49Z","lastTransitionTime":"2025-12-10T15:23:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.568241 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b1da51a-99c9-4f8e-920d-ce0973af6370\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://375efb27cf6bef06a6a0ffc8f6b2963e1b9ffbca18b3c8e7b4
02bc552a411f6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://375efb27cf6bef06a6a0ffc8f6b2963e1b9ffbca18b3c8e7b402bc552a411f6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6lfvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:49Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.599416 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b132a8b9-1c99-414d-8773-229bf36b305d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5a5a59e9f156fb791ec822c2d5efe3fc6ec0e84bfcb2b6f5da81396951984c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\
\":\\\"kube-api-access-m2mdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40c4bdaa23a60a665b8f565720d79b68cac62d40246be94fc6cd314b1bb3656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2mdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ggt8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:49Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.641410 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d089a8bddfa1f80d29011c7a6bab0a300f7dd44bdb2864f86951ebbb9ebea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:49Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.661281 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.661310 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.661318 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.661331 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.661340 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:49Z","lastTransitionTime":"2025-12-10T15:23:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.680960 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01ddc4056319db1f69268dcae192c3cb9db6c25284305803ae7588e59f77c346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae6bf174cdc9bde18d7c959e976454e73c1e67642f0158d365b79582f63f3cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:49Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.720053 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:49Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.756908 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.756979 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 15:23:49 crc kubenswrapper[4755]: E1210 15:23:49.757060 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.757162 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 15:23:49 crc kubenswrapper[4755]: E1210 15:23:49.757327 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 15:23:49 crc kubenswrapper[4755]: E1210 15:23:49.757493 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.762823 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.762860 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.762871 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.762891 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.762906 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:49Z","lastTransitionTime":"2025-12-10T15:23:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.763834 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zl2tx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"796da6d5-6ccd-4786-a03e-9a8e47a55031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de63a123c46563bd8cd07e669d192bc8b019a889b9bdb7af1b988872c8f1fc48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzg4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zl2tx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:49Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.804123 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aed6bb9-78c2-410b-9c58-b60ab22a7bd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70d014e7227746c46b30f8f5a1f307a422d2fa0d4b98d98bfe5ba6217223489e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://104d730ea666ffd2d639ba7fc8d1ac5b444586788a62656fd260cb249f82b1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09dbec85547c7170bb9551e5657876814d48528e3047daf3547711a563d2b6c
e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8bf1de0d8bdc0a20bc42ba5097d849ae0f507e5fbc18fb17b4ff3650e46ff0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:49Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.841229 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n8c6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b56ae78-835a-45da-bc46-5adff2bdf9fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b9a1485f6d0c0e53fd3505623b2788476bc00abea2f11650386eac40e449bcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9a1485f6d0c0e53fd3505623b2788476bc00abea2f11650386eac40e449bcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a532015d196c588ad3dd6c43d9fa3772ba2a42bd1b2231f830b60adb0addab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a532015d196c588ad3dd6c43d9fa3772ba2a42bd1b2231f830b60adb0addab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://417e326f541e04df27d540c14ec572bb8f1d749a9e3dc88f483241d69f7d0679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://417e326f541e04df27d540c14ec572bb8f1d749a9e3dc88f483241d69f7d0679\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42867f7a1f6aceaf708ad650b7634bdafb3f850872c540d424916440014e7941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42867f7a1f6aceaf708ad650b7634bdafb3f850872c540d424916440014e7941\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-releas
e\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n8c6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:49Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.865032 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.865071 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.865081 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.865096 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.865106 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:49Z","lastTransitionTime":"2025-12-10T15:23:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.878889 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qnmst" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1693fcf1-bef4-4f82-8dd8-f1797b03f5e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8fd6a2ab2b9557574951c0ebbccd663fe576262b0de2c3c655427c977f62d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hzdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qnmst\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:49Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.918544 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qnmst" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1693fcf1-bef4-4f82-8dd8-f1797b03f5e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8fd6a2ab2b9557574951c0ebbccd663fe576262b0de2c3c655427c977f62d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hzdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qnmst\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:49Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.961045 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aed6bb9-78c2-410b-9c58-b60ab22a7bd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70d014e7227746c46b30f8f5a1f307a422d2fa0d4b98d98bfe5ba6217223489e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://104d730ea666ffd2d639ba7fc8d1ac5b444586788a62656fd260cb249f82b1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09dbec85547c7170bb9551e5657876814d48528e3047daf3547711a563d2b6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8bf1de0d8bdc0a20bc42ba5097d849ae0f507e5fbc18fb17b4ff3650e46ff0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:49Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.967752 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.967795 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.967805 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.967820 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:49 crc kubenswrapper[4755]: I1210 15:23:49.967829 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:49Z","lastTransitionTime":"2025-12-10T15:23:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:23:50 crc kubenswrapper[4755]: I1210 15:23:50.004504 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n8c6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b56ae78-835a-45da-bc46-5adff2bdf9fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b9a1485f6d0c0e53fd3505623b2788476bc00abea2f11650386eac40e449bcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9a1485f6d0c0e53fd3505623b2788476bc00abea2f11650386eac40e449bcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a532015d196c588ad3dd6c43d9fa3772ba2a42bd1b2231f830b60adb0addab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a532015d196c588ad3dd6c43d9fa3772ba2a42bd1b2231f830b60adb0addab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://417e326f541e04df27d540c14ec572bb8f1d749a9e3dc88f483241d69f7d0679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://417e326f541e04df27d540c14ec572bb8f1d749a9e3dc88f483241d69f7d0679\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42867f7a1f6aceaf708ad650b7634bdafb3f850872c540d424916440014e7941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42867f7a1f6aceaf708ad650b7634bdafb3f850872c540d424916440014e7941\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:
23:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n8c6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:50Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:50 crc kubenswrapper[4755]: I1210 15:23:50.005715 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" event={"ID":"4b1da51a-99c9-4f8e-920d-ce0973af6370","Type":"ContainerStarted","Data":"59bf59d1b7fbc365a916fbbceca7ae30b7ebc754b34f2f7a34c2e21e1e1d2166"} Dec 10 15:23:50 crc kubenswrapper[4755]: I1210 15:23:50.008158 4755 generic.go:334] "Generic (PLEG): container finished" podID="8b56ae78-835a-45da-bc46-5adff2bdf9fd" containerID="6f2d22bfc26958a3aad99de39f5c831d36c1ba4f563851baafc73735e80d5c91" exitCode=0 Dec 10 15:23:50 crc kubenswrapper[4755]: I1210 15:23:50.008386 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-n8c6s" 
event={"ID":"8b56ae78-835a-45da-bc46-5adff2bdf9fd","Type":"ContainerDied","Data":"6f2d22bfc26958a3aad99de39f5c831d36c1ba4f563851baafc73735e80d5c91"} Dec 10 15:23:50 crc kubenswrapper[4755]: I1210 15:23:50.041054 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:50Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:50 crc kubenswrapper[4755]: I1210 15:23:50.071579 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:50 crc kubenswrapper[4755]: I1210 15:23:50.071609 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:50 crc kubenswrapper[4755]: I1210 15:23:50.071617 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:50 crc kubenswrapper[4755]: I1210 15:23:50.071630 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:50 crc kubenswrapper[4755]: I1210 15:23:50.071640 4755 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:50Z","lastTransitionTime":"2025-12-10T15:23:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:23:50 crc kubenswrapper[4755]: I1210 15:23:50.081754 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wv8fh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d150a22e-c59a-4376-a5c8-db4085ea0be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58d106c5d1b9525ec821d009ca556449cd8d7f0e1b9c8ec7dd969df996e73625\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfw9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wv8fh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:50Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:50 crc kubenswrapper[4755]: I1210 15:23:50.126311 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fa03060-a6e0-4aad-9aa1-43b1a0d00c85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a823333ed5cb5de3988d25e50e4b7a0f9071c76fb39c22760a4f2acd5eb455d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d828f3f3b90a2dcb1c1908e6a686368af5b0d715b3251e4b8fcf3c8818ec75a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://172ab3fce08c8ba4095ff4095c89364a778644752bd7bb6c178d6e3ebcface69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfbb8e350ad18b78a6bcf6cfa4eb8f2fad90f97
0dba08a8b1b2026af6f255e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4b2bf0c88a16fd6b4b45a20730a92292895dc9f29ce756d347c302b25a8612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55da309288bc2c7ff1f22da0569d18155796700c200fe3ed508f6f8179756865\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55da309288bc2c7ff1f22da0569d18155796700c200fe3ed508f6f8179756865\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219c531368bb5379f3726d94ab0afc0f33eedbf91b0a4dbbe0757c6e232f79ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://219c531368bb5379f3726d94ab0afc0f33eedbf91b0a4dbbe0757c6e232f79ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f9da8f3d4f85ec83c16b71ecd983ff4f48049eb8e73461a2b46c4f66d71ed06e\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9da8f3d4f85ec83c16b71ecd983ff4f48049eb8e73461a2b46c4f66d71ed06e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:50Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:50 crc kubenswrapper[4755]: I1210 15:23:50.160931 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:50Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:50 crc kubenswrapper[4755]: I1210 15:23:50.173969 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:50 crc kubenswrapper[4755]: I1210 15:23:50.173993 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:50 crc kubenswrapper[4755]: I1210 15:23:50.174002 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:50 crc kubenswrapper[4755]: I1210 15:23:50.174015 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:50 crc kubenswrapper[4755]: I1210 15:23:50.174025 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:50Z","lastTransitionTime":"2025-12-10T15:23:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:23:50 crc kubenswrapper[4755]: I1210 15:23:50.199573 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:50Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:50 crc kubenswrapper[4755]: I1210 15:23:50.245987 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b1da51a-99c9-4f8e-920d-ce0973af6370\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts
\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host
-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://375efb27cf6bef06a6a0ffc8f6b2963e1b9ffbca18b3c8e7b402bc552a411f6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://375efb27cf6bef06a6a0ffc8f6b2963e1b9ffbca18b3c8e7b402bc552a411f6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168
.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6lfvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:50Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:50 crc kubenswrapper[4755]: I1210 15:23:50.276694 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:50 crc kubenswrapper[4755]: I1210 15:23:50.276739 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:50 crc kubenswrapper[4755]: I1210 15:23:50.276751 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:50 crc kubenswrapper[4755]: I1210 15:23:50.276769 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:50 crc kubenswrapper[4755]: I1210 15:23:50.276782 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:50Z","lastTransitionTime":"2025-12-10T15:23:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:23:50 crc kubenswrapper[4755]: I1210 15:23:50.283105 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e6db97-7e22-4fec-924e-20e90f463887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef3871ddc16d2fc1b89a6f120390cf066424bd9ed4cca98f8dea4430c40c6c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08895c3ab20248c4bd70ccfae6442bdf8e76a602e605b043f0f00166624ada17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b2467645116856afa1e1e37501c7b453b03da193a96d1075886b85839a86ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16e58653dca08c9b33671279553975bca9507ffeeefea161119cec624813f167\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea18ef6c319336321d97c2a55449903b837b5a47da785da3ff5308244751943\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:50Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:50 crc kubenswrapper[4755]: I1210 15:23:50.320640 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01ddc4056319db1f69268dcae192c3cb9db6c25284305803ae7588e59f77c346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae6bf174cdc9bde18d7c959e976454e73c1e67642f0158d365b79582f63f3cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:50Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:50 crc kubenswrapper[4755]: I1210 15:23:50.359858 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:50Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:50 crc kubenswrapper[4755]: I1210 15:23:50.379294 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:50 crc kubenswrapper[4755]: I1210 15:23:50.379332 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:50 crc kubenswrapper[4755]: I1210 15:23:50.379343 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:50 crc kubenswrapper[4755]: I1210 15:23:50.379359 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:50 crc kubenswrapper[4755]: I1210 15:23:50.379370 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:50Z","lastTransitionTime":"2025-12-10T15:23:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:23:50 crc kubenswrapper[4755]: I1210 15:23:50.399860 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zl2tx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"796da6d5-6ccd-4786-a03e-9a8e47a55031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de63a123c46563bd8cd07e669d192bc8b019a889b9bdb7af1b988872c8f1fc48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzg4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zl2tx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:50Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:50 crc kubenswrapper[4755]: I1210 15:23:50.441312 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b132a8b9-1c99-414d-8773-229bf36b305d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5a5a59e9f156fb791ec822c2d5efe3fc6ec0e84bfcb2b6f5da81396951984c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2mdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40c4bdaa23a60a665b8f565720d79b68cac62d40246be94fc6cd314b1bb3656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2mdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ggt8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:50Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:50 crc kubenswrapper[4755]: I1210 15:23:50.480198 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d089a8bddfa1f80d29011c7a6bab0a300f7dd44bdb2864f86951ebbb9ebea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:50Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:50 crc kubenswrapper[4755]: I1210 15:23:50.481245 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:50 crc kubenswrapper[4755]: I1210 15:23:50.481272 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:50 crc kubenswrapper[4755]: I1210 15:23:50.481282 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:50 crc kubenswrapper[4755]: I1210 15:23:50.481296 4755 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeNotReady" Dec 10 15:23:50 crc kubenswrapper[4755]: I1210 15:23:50.481305 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:50Z","lastTransitionTime":"2025-12-10T15:23:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:23:50 crc kubenswrapper[4755]: I1210 15:23:50.522786 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e6db97-7e22-4fec-924e-20e90f463887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef3871ddc16d2fc1b89a6f120390cf066424bd9ed4cca98f8dea4430c40c6c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08895c3ab20248c4bd70ccfae6442bdf8e76a602e605b043f0f00166624ada17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b2467645116856afa1e1e37501c7b453b03da193a96d1075886b85839a86ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"
imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16e58653dca08c9b33671279553975bca9507ffeeefea161119cec624813f167\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea18ef6c319336321d97c2a55449903b837b5a47da785da3ff5308244751943\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2025-12-10T15:23:50Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:50 crc kubenswrapper[4755]: I1210 15:23:50.563435 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:50Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:50 crc kubenswrapper[4755]: I1210 15:23:50.583322 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:50 crc kubenswrapper[4755]: I1210 15:23:50.583371 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:50 crc kubenswrapper[4755]: I1210 15:23:50.583381 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:50 crc kubenswrapper[4755]: I1210 15:23:50.583397 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:50 crc kubenswrapper[4755]: I1210 15:23:50.583407 4755 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:50Z","lastTransitionTime":"2025-12-10T15:23:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:23:50 crc kubenswrapper[4755]: I1210 15:23:50.606276 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b1da51a-99c9-4f8e-920d-ce0973af6370\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://375efb27cf6bef06a6a0ffc8f6b2963e1b9ffbca18b3c8e7b402bc552a411f6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://375efb27cf6bef06a6a0ffc8f6b2963e1b9ffbca18b3c8e7b402bc552a411f6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6lfvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:50Z 
is after 2025-08-24T17:21:41Z" Dec 10 15:23:50 crc kubenswrapper[4755]: I1210 15:23:50.641607 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zl2tx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"796da6d5-6ccd-4786-a03e-9a8e47a55031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de63a123c46563bd8cd07e669d192bc8b019a889b9bdb7af1b988872c8f1fc48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzg4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\
",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zl2tx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:50Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:50 crc kubenswrapper[4755]: I1210 15:23:50.679859 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b132a8b9-1c99-414d-8773-229bf36b305d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5a5a59e9f156fb791ec822c2d5efe3fc6ec0e84bfcb2b6f5da81396951984c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2mdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40c4bdaa23a60a665b8f565720d79b68cac62d40246be94fc6cd314b1bb3656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2mdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ggt8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:50Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:50 crc kubenswrapper[4755]: I1210 15:23:50.685347 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:50 crc kubenswrapper[4755]: I1210 15:23:50.685392 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:50 crc kubenswrapper[4755]: I1210 15:23:50.685406 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:50 crc kubenswrapper[4755]: I1210 15:23:50.685425 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:50 crc kubenswrapper[4755]: I1210 15:23:50.685438 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:50Z","lastTransitionTime":"2025-12-10T15:23:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:23:50 crc kubenswrapper[4755]: I1210 15:23:50.723044 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d089a8bddfa1f80d29011c7a6bab0a300f7dd44bdb2864f86951ebbb9ebea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:50Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:50 crc kubenswrapper[4755]: I1210 15:23:50.769286 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01ddc4056319db1f69268dcae192c3cb9db6c25284305803ae7588e59f77c346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae6bf174cdc9bde18d7c959e976454e73c1e67642f0158d365b79582f63f3cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:50Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:50 crc kubenswrapper[4755]: I1210 15:23:50.788428 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:50 crc kubenswrapper[4755]: I1210 15:23:50.788485 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:50 crc kubenswrapper[4755]: I1210 15:23:50.788495 4755 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 10 15:23:50 crc kubenswrapper[4755]: I1210 15:23:50.788512 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:50 crc kubenswrapper[4755]: I1210 15:23:50.788530 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:50Z","lastTransitionTime":"2025-12-10T15:23:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:23:50 crc kubenswrapper[4755]: I1210 15:23:50.800366 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:50Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:50 crc kubenswrapper[4755]: I1210 15:23:50.840501 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aed6bb9-78c2-410b-9c58-b60ab22a7bd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70d014e7227746c46b30f8f5a1f307a422d2fa0d4b98d98bfe5ba6217223489e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://104d730ea666ffd2d639ba7fc8d1ac5b444586788a62656fd260cb249f82b1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09dbec85547c7170bb9551e5657876814d48528e3047daf3547711a563d2b6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8bf1de0d8bdc0a20bc42ba5097d849ae0f507e5fbc18fb17b4ff3650e46ff0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:50Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:50 crc kubenswrapper[4755]: I1210 15:23:50.886907 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n8c6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b56ae78-835a-45da-bc46-5adff2bdf9fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b9a1485f6d0c0e53fd3505623b2788476bc00abea2f11650386eac40e449bcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9a1485f6d0c0e53fd3505623b2788476bc00abea2f11650386eac40e449bcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a532015d196c588ad3dd6c43d9fa3772ba2a42bd1b2231f830b60adb0addab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a532015d196c588ad3dd6c43d9fa3772ba2a42bd1b2231f830b60adb0addab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\
\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://417e326f541e04df27d540c14ec572bb8f1d749a9e3dc88f483241d69f7d0679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://417e326f541e04df27d540c14ec572bb8f1d749a9e3dc88f483241d69f7d0679\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42867f7a1f6aceaf708ad650b7634bdafb3f850872c540d424916440014e7941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42867f7a1f6aceaf708ad650b7634bdafb3f850872c540d424916440014e7941\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f2d22bfc26958a3aad99de39f5c831d36c1ba4f563851baafc73735e80d5c91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f2d22bfc26958a3aad99de39f5c831d36c1ba4f563851baafc73735e80d5c91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n8c6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:50Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:50 crc kubenswrapper[4755]: I1210 15:23:50.891405 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:50 crc kubenswrapper[4755]: I1210 15:23:50.891447 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:50 crc kubenswrapper[4755]: I1210 15:23:50.891459 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:50 crc kubenswrapper[4755]: I1210 15:23:50.891503 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:50 crc kubenswrapper[4755]: I1210 15:23:50.891517 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:50Z","lastTransitionTime":"2025-12-10T15:23:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:23:50 crc kubenswrapper[4755]: I1210 15:23:50.922436 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qnmst" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1693fcf1-bef4-4f82-8dd8-f1797b03f5e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8fd6a2ab2b9557574951c0ebbccd663fe576262b0de2c3c655427c977f62d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hzdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qnmst\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:50Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:50 crc kubenswrapper[4755]: I1210 15:23:50.966720 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fa03060-a6e0-4aad-9aa1-43b1a0d00c85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a823333ed5cb5de3988d25e50e4b7a0f9071c76fb39c22760a4f2acd5eb455d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d828f3f3b90a2dcb1c1908e6a686368af5b0d715b3251e4b8fcf3c8818ec75a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://172ab3fce08c8ba4095ff4095c89364a778644752bd7bb6c178d6e3ebcface69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfbb8e350ad18b78a6bcf6cfa4eb8f2fad90f97
0dba08a8b1b2026af6f255e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4b2bf0c88a16fd6b4b45a20730a92292895dc9f29ce756d347c302b25a8612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55da309288bc2c7ff1f22da0569d18155796700c200fe3ed508f6f8179756865\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55da309288bc2c7ff1f22da0569d18155796700c200fe3ed508f6f8179756865\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219c531368bb5379f3726d94ab0afc0f33eedbf91b0a4dbbe0757c6e232f79ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://219c531368bb5379f3726d94ab0afc0f33eedbf91b0a4dbbe0757c6e232f79ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f9da8f3d4f85ec83c16b71ecd983ff4f48049eb8e73461a2b46c4f66d71ed06e\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9da8f3d4f85ec83c16b71ecd983ff4f48049eb8e73461a2b46c4f66d71ed06e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:50Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:50 crc kubenswrapper[4755]: I1210 15:23:50.994546 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:50 crc kubenswrapper[4755]: I1210 15:23:50.994596 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:50 crc kubenswrapper[4755]: I1210 15:23:50.994605 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:50 crc kubenswrapper[4755]: I1210 15:23:50.994624 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:50 crc kubenswrapper[4755]: I1210 15:23:50.994634 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:50Z","lastTransitionTime":"2025-12-10T15:23:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:23:50 crc kubenswrapper[4755]: I1210 15:23:50.999943 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:50Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:51 crc kubenswrapper[4755]: I1210 15:23:51.016424 4755 generic.go:334] "Generic (PLEG): container finished" podID="8b56ae78-835a-45da-bc46-5adff2bdf9fd" containerID="548d326fc5c14c16176d4fff3446d93c17322c11d2a34e9e42376b665de3848d" exitCode=0 Dec 10 15:23:51 crc kubenswrapper[4755]: I1210 15:23:51.016545 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-n8c6s" event={"ID":"8b56ae78-835a-45da-bc46-5adff2bdf9fd","Type":"ContainerDied","Data":"548d326fc5c14c16176d4fff3446d93c17322c11d2a34e9e42376b665de3848d"} Dec 10 15:23:51 crc kubenswrapper[4755]: I1210 15:23:51.039860 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:51Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:51 crc kubenswrapper[4755]: I1210 15:23:51.079594 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wv8fh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d150a22e-c59a-4376-a5c8-db4085ea0be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58d106c5d1b9525ec821d009ca556449cd8d7f0e1b9c8ec7dd969df996e73625\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfw9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wv8fh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:51Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:51 crc kubenswrapper[4755]: I1210 15:23:51.097630 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:51 crc kubenswrapper[4755]: I1210 15:23:51.097738 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:51 crc kubenswrapper[4755]: I1210 15:23:51.097757 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:51 crc kubenswrapper[4755]: I1210 15:23:51.097774 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:51 crc kubenswrapper[4755]: I1210 15:23:51.097786 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:51Z","lastTransitionTime":"2025-12-10T15:23:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:23:51 crc kubenswrapper[4755]: I1210 15:23:51.127136 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fa03060-a6e0-4aad-9aa1-43b1a0d00c85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a823333ed5cb5de3988d25e50e4b7a0f9071c76fb39c22760a4f2acd5eb455d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d828f3f3b90a2dcb1c1908e6a686368af5b0d715b3251e4b8fcf3c8818ec75a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://172ab3fce08c8ba4095ff4095c89364a778644752bd7bb6c178d6e3ebcface69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-1
0T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfbb8e350ad18b78a6bcf6cfa4eb8f2fad90f970dba08a8b1b2026af6f255e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4b2bf0c88a16fd6b4b45a20730a92292895dc9f29ce756d347c302b25a8612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55da309288bc2c7ff1f22da0569d18155796700c200fe3ed508f6f8179756865\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55da309288bc2c7ff1f22da0569d18155796700c200fe3ed508f6f8179756865\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219c531368bb5379f3726d94ab0afc0f33eedbf91b0a4dbbe0757c6e232f79ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://219c531368bb5379f3726d94ab0afc0f33eedbf91b0
a4dbbe0757c6e232f79ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f9da8f3d4f85ec83c16b71ecd983ff4f48049eb8e73461a2b46c4f66d71ed06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9da8f3d4f85ec83c16b71ecd983ff4f48049eb8e73461a2b46c4f66d71ed06e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:51Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:51 crc kubenswrapper[4755]: I1210 15:23:51.160121 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:51Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:51 crc kubenswrapper[4755]: I1210 15:23:51.199786 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:51Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:51 crc kubenswrapper[4755]: I1210 15:23:51.200711 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:51 crc kubenswrapper[4755]: I1210 15:23:51.200740 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:51 crc kubenswrapper[4755]: I1210 15:23:51.200749 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:51 crc kubenswrapper[4755]: I1210 15:23:51.200765 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:51 crc kubenswrapper[4755]: I1210 15:23:51.200779 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:51Z","lastTransitionTime":"2025-12-10T15:23:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:23:51 crc kubenswrapper[4755]: I1210 15:23:51.238960 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wv8fh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d150a22e-c59a-4376-a5c8-db4085ea0be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58d106c5d1b9525ec821d009ca556449cd8d7f0e1b9c8ec7dd969df996e73625\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfw9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wv8fh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:51Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:51 crc kubenswrapper[4755]: I1210 15:23:51.282405 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e6db97-7e22-4fec-924e-20e90f463887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef3871ddc16d2fc1b89a6f120390cf066424bd9ed4cca98f8dea4430c40c6c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08895c3ab20248c4bd70ccfae6442bdf8e76a602e605b043f0f00166624ada17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b2467645116856afa1e1e37501c7b453b03da193a96d1075886b85839a86ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16e58653dca08c9b33671279553975bca9507ffeeefea161119cec624813f167\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea18ef6c319336321d97c2a55449903b837b5a47da785da3ff5308244751943\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:51Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:51 crc kubenswrapper[4755]: I1210 15:23:51.303273 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:51 crc kubenswrapper[4755]: I1210 15:23:51.303308 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:51 crc kubenswrapper[4755]: I1210 15:23:51.303317 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:51 crc kubenswrapper[4755]: I1210 15:23:51.303333 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 
15:23:51 crc kubenswrapper[4755]: I1210 15:23:51.303342 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:51Z","lastTransitionTime":"2025-12-10T15:23:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:23:51 crc kubenswrapper[4755]: I1210 15:23:51.320131 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:51Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:51 crc kubenswrapper[4755]: I1210 15:23:51.365168 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b1da51a-99c9-4f8e-920d-ce0973af6370\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://375efb27cf6bef06a6a0ffc8f6b2963e1b9ffbca18b3c8e7b402bc552a411f6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://375efb27cf6bef06a6a0ffc8f6b2963e1b9ffbca18b3c8e7b402bc552a411f6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6lfvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:51Z 
is after 2025-08-24T17:21:41Z" Dec 10 15:23:51 crc kubenswrapper[4755]: I1210 15:23:51.399674 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d089a8bddfa1f80d29011c7a6bab0a300f7dd44bdb2864f86951ebbb9ebea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:51Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:51 crc kubenswrapper[4755]: I1210 15:23:51.405454 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:51 crc kubenswrapper[4755]: I1210 15:23:51.405539 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:51 crc kubenswrapper[4755]: I1210 15:23:51.405550 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:51 crc kubenswrapper[4755]: I1210 15:23:51.405573 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:51 crc kubenswrapper[4755]: I1210 15:23:51.405586 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:51Z","lastTransitionTime":"2025-12-10T15:23:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:23:51 crc kubenswrapper[4755]: I1210 15:23:51.439766 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01ddc4056319db1f69268dcae192c3cb9db6c25284305803ae7588e59f77c346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae6bf174cdc9bde18d7c959e976454e73c1e67642f0158d365b79582f63f3cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:51Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:51 crc kubenswrapper[4755]: I1210 15:23:51.481143 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:51Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:51 crc kubenswrapper[4755]: I1210 15:23:51.507893 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:51 crc kubenswrapper[4755]: I1210 15:23:51.507932 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:51 crc kubenswrapper[4755]: I1210 15:23:51.507943 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:51 crc kubenswrapper[4755]: I1210 15:23:51.507958 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:51 crc kubenswrapper[4755]: I1210 15:23:51.507968 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:51Z","lastTransitionTime":"2025-12-10T15:23:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:23:51 crc kubenswrapper[4755]: I1210 15:23:51.514749 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 15:23:51 crc kubenswrapper[4755]: I1210 15:23:51.514893 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 15:23:51 crc kubenswrapper[4755]: I1210 15:23:51.514925 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 15:23:51 crc kubenswrapper[4755]: I1210 15:23:51.514958 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 15:23:51 crc kubenswrapper[4755]: I1210 15:23:51.514993 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 15:23:51 crc kubenswrapper[4755]: E1210 15:23:51.515093 4755 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 10 15:23:51 crc kubenswrapper[4755]: E1210 15:23:51.515148 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-10 15:23:59.515130873 +0000 UTC m=+36.116014505 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 10 15:23:51 crc kubenswrapper[4755]: E1210 15:23:51.515310 4755 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 10 15:23:51 crc kubenswrapper[4755]: E1210 15:23:51.515361 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-10 15:23:59.515346779 +0000 UTC m=+36.116230411 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 10 15:23:51 crc kubenswrapper[4755]: E1210 15:23:51.515387 4755 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 10 15:23:51 crc kubenswrapper[4755]: E1210 15:23:51.515414 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 15:23:59.51538817 +0000 UTC m=+36.116271802 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:23:51 crc kubenswrapper[4755]: E1210 15:23:51.515424 4755 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 10 15:23:51 crc kubenswrapper[4755]: E1210 15:23:51.515426 4755 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 10 15:23:51 crc kubenswrapper[4755]: E1210 15:23:51.515509 4755 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 10 15:23:51 crc kubenswrapper[4755]: E1210 15:23:51.515519 4755 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 15:23:51 crc kubenswrapper[4755]: E1210 15:23:51.515530 4755 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 15:23:51 crc kubenswrapper[4755]: E1210 15:23:51.515557 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-10 15:23:59.515546844 +0000 UTC m=+36.116430556 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 15:23:51 crc kubenswrapper[4755]: E1210 15:23:51.515580 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-10 15:23:59.515571074 +0000 UTC m=+36.116454836 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 15:23:51 crc kubenswrapper[4755]: I1210 15:23:51.520736 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zl2tx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"796da6d5-6ccd-4786-a03e-9a8e47a55031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de63a123c46563bd8cd07e669d192bc8b019a889b9bdb7af1b988872c8f1fc48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"na
me\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzg4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zl2tx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:51Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:51 crc kubenswrapper[4755]: I1210 15:23:51.558640 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b132a8b9-1c99-414d-8773-229bf36b305d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5a5a59e9f156fb791ec822c2d5efe3fc6ec0e84bfcb2b6f5da81396951984c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2mdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40c4bdaa23a60a665b8f565720d79b68cac62d40246be94fc6cd314b1bb3656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\
\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2mdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ggt8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:51Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:51 crc kubenswrapper[4755]: I1210 15:23:51.604097 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aed6bb9-78c2-410b-9c58-b60ab22a7bd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70d014e7227746c46b30f8f5a1f307a422d2fa0d4b98d98bfe5ba6217223489e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://104d730ea666ffd2d639ba7fc8d1ac5b444586788a62656fd260cb249f82b1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09dbec85547c7170bb9551e5657876814d48528e3047daf3547711a563d2b6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8bf1de0d8bdc0a20bc42ba5097d849ae0f507e5fbc18fb17b4ff3650e46ff0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:51Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:51 crc kubenswrapper[4755]: I1210 15:23:51.610998 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:51 crc kubenswrapper[4755]: I1210 15:23:51.611095 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:51 crc kubenswrapper[4755]: I1210 15:23:51.611108 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:51 crc kubenswrapper[4755]: I1210 15:23:51.611133 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:51 crc kubenswrapper[4755]: I1210 15:23:51.611149 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:51Z","lastTransitionTime":"2025-12-10T15:23:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:23:51 crc kubenswrapper[4755]: I1210 15:23:51.642202 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n8c6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b56ae78-835a-45da-bc46-5adff2bdf9fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b9a1485f6d0c0e53fd3505623b2788476bc00abea2f11650386eac40e449bcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9a1485f6d0c0e53fd3505623b2788476bc00abea2f11650386eac40e449bcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"ku
be-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a532015d196c588ad3dd6c43d9fa3772ba2a42bd1b2231f830b60adb0addab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a532015d196c588ad3dd6c43d9fa3772ba2a42bd1b2231f830b60adb0addab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://417e326f541e04df27d540c14ec572bb8f1d749a9e3dc88f483241d69f7d0679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://417e326f541e04df27d540c14ec572bb8f1d749a9e3dc88f483241d69f7d0679\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42867f7a1f6aceaf708ad650b7634bdafb3f850872c540d424916440014e7941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42867f7a1f6aceaf708ad650b7634bdafb3f850872c540d424916440014e7941\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:48Z\
\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f2d22bfc26958a3aad99de39f5c831d36c1ba4f563851baafc73735e80d5c91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f2d22bfc26958a3aad99de39f5c831d36c1ba4f563851baafc73735e80d5c91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://548d326fc5c14c16176d4fff3446d93c17322c11d2a34e9e42376b665de3848d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://548d326fc5c14c16176d4fff3446d93c17322c11d2a34e9e42376b665de3848d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n8c6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:51Z is 
after 2025-08-24T17:21:41Z" Dec 10 15:23:51 crc kubenswrapper[4755]: I1210 15:23:51.678273 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qnmst" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1693fcf1-bef4-4f82-8dd8-f1797b03f5e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8fd6a2ab2b9557574951c0ebbccd663fe576262b0de2c3c655427c977f62d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hzdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qnmst\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:51Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:51 crc kubenswrapper[4755]: I1210 15:23:51.713566 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:51 crc kubenswrapper[4755]: I1210 15:23:51.713611 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:51 crc kubenswrapper[4755]: I1210 15:23:51.713624 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:51 crc kubenswrapper[4755]: I1210 15:23:51.713640 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:51 crc kubenswrapper[4755]: I1210 15:23:51.713652 4755 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:51Z","lastTransitionTime":"2025-12-10T15:23:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:23:51 crc kubenswrapper[4755]: I1210 15:23:51.757604 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 15:23:51 crc kubenswrapper[4755]: I1210 15:23:51.757750 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 15:23:51 crc kubenswrapper[4755]: E1210 15:23:51.757892 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 15:23:51 crc kubenswrapper[4755]: E1210 15:23:51.757747 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 15:23:51 crc kubenswrapper[4755]: I1210 15:23:51.757633 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 15:23:51 crc kubenswrapper[4755]: E1210 15:23:51.757999 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 15:23:51 crc kubenswrapper[4755]: I1210 15:23:51.815559 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:51 crc kubenswrapper[4755]: I1210 15:23:51.815596 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:51 crc kubenswrapper[4755]: I1210 15:23:51.815609 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:51 crc kubenswrapper[4755]: I1210 15:23:51.815625 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:51 crc kubenswrapper[4755]: I1210 15:23:51.815636 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:51Z","lastTransitionTime":"2025-12-10T15:23:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:23:51 crc kubenswrapper[4755]: I1210 15:23:51.918539 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:51 crc kubenswrapper[4755]: I1210 15:23:51.918581 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:51 crc kubenswrapper[4755]: I1210 15:23:51.918590 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:51 crc kubenswrapper[4755]: I1210 15:23:51.918605 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:51 crc kubenswrapper[4755]: I1210 15:23:51.918614 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:51Z","lastTransitionTime":"2025-12-10T15:23:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.019819 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.019859 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.019870 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.019884 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.019895 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:52Z","lastTransitionTime":"2025-12-10T15:23:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.023791 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" event={"ID":"4b1da51a-99c9-4f8e-920d-ce0973af6370","Type":"ContainerStarted","Data":"d89ebb4c7085b2d1d38908446db4cfa84ccbb4e5cf150249d1b2b508631e72d6"} Dec 10 15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.024103 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" Dec 10 15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.028184 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-n8c6s" event={"ID":"8b56ae78-835a-45da-bc46-5adff2bdf9fd","Type":"ContainerStarted","Data":"e8957db8ccef7b3c449920471d345aa81ce9a7ab9be36b2350ec428aebec7ac3"} Dec 10 15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.050150 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" Dec 10 15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.052071 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fa03060-a6e0-4aad-9aa1-43b1a0d00c85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a823333ed5cb5de3988d25e50e4b7a0f9071c76fb39c22760a4f2acd5eb455d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d828f3f3b90a2dcb1c1908e6a686368af5b0d715b3251e4b8fcf3c8818ec75a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://172ab3fce08c8ba4095ff4095c89364a778644752bd7bb6c178d6e3ebcface69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfbb8e350ad18b78a6bcf6cfa4eb8f2fad90f97
0dba08a8b1b2026af6f255e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4b2bf0c88a16fd6b4b45a20730a92292895dc9f29ce756d347c302b25a8612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55da309288bc2c7ff1f22da0569d18155796700c200fe3ed508f6f8179756865\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55da309288bc2c7ff1f22da0569d18155796700c200fe3ed508f6f8179756865\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219c531368bb5379f3726d94ab0afc0f33eedbf91b0a4dbbe0757c6e232f79ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://219c531368bb5379f3726d94ab0afc0f33eedbf91b0a4dbbe0757c6e232f79ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f9da8f3d4f85ec83c16b71ecd983ff4f48049eb8e73461a2b46c4f66d71ed06e\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9da8f3d4f85ec83c16b71ecd983ff4f48049eb8e73461a2b46c4f66d71ed06e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:52Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.065221 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:52Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.079108 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:52Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.089852 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wv8fh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d150a22e-c59a-4376-a5c8-db4085ea0be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58d106c5d1b9525ec821d009ca556449cd8d7f0e1b9c8ec7dd969df996e73625\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfw9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wv8fh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:52Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.104129 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e6db97-7e22-4fec-924e-20e90f463887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef3871ddc16d2fc1b89a6f120390cf066424bd9ed4cca98f8dea4430c40c6c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08895c3ab20248c4bd70ccfae6442bdf8e76a602e605b043f0f00166624ada17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b2467645116856afa1e1e37501c7b453b03da193a96d1075886b85839a86ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-1
2-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16e58653dca08c9b33671279553975bca9507ffeeefea161119cec624813f167\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea18ef6c319336321d97c2a55449903b837b5a47da785da3ff5308244751943\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:52Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.116362 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:52Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.121717 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.121759 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.121770 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.121789 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.121802 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:52Z","lastTransitionTime":"2025-12-10T15:23:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.135248 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b1da51a-99c9-4f8e-920d-ce0973af6370\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6eb065dc6c0cc8914cb95553eb2683d894fb9a4e78ce7fac73bcce8d7f6cced9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e547993b9f2fa37bf924f909c47b62eb0cc02b659596b1cad9bbc42fdde8f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}
,{\\\"containerID\\\":\\\"cri-o://e9ba47683cc23d5b531a45f0658b6a9378650400b35b5372642b0430a5ac503f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a75407e83508af9adebb09c6466a966dd791d29f690c539656f9bd3396d7031\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://602b4e49987fa2cc6b54b822110aececbdddaf2bce8f27cce4ed906768d45791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://335bcab3a79f09796e97560365e1211fb30ddf288f4773c05ab353197add4365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"i
mageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d89ebb4c7085b2d1d38908446db4cfa84ccbb4e5cf150249d1b2b508631e72d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59bf59d1b7fbc365a916fbbceca7ae30b7ebc754b34f2f7a34c2e21e1e1d2166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://375efb27cf6bef06a6a0ffc8f6b2963e1b9ffbca18b3c8e7b402bc552a411f6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://375efb27cf6bef06a6a0ffc8f6b2963e1b9ffbca18b3c8e7b402bc552a411f6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6lfvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:52Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.147830 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d089a8bddfa1f80d29011c7a6bab0a300f7dd44bdb2864f86951ebbb9ebea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:52Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.168930 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01ddc4056319db1f69268dcae192c3cb9db6c25284305803ae7588e59f77c346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae6bf174cdc9bde18d7c959e976454e73c1e67642f0158d365b79582f63f3cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:52Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.179492 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:52Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.193679 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zl2tx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"796da6d5-6ccd-4786-a03e-9a8e47a55031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de63a123c46563bd8cd07e669d192bc8b019a889b9bdb7af1b988872c8f1fc48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzg4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zl2tx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:52Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.204236 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b132a8b9-1c99-414d-8773-229bf36b305d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5a5a59e9f156fb791ec822c2d5efe3fc6ec0e84bfcb2b6f5da81396951984c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2mdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40c4bdaa23a60a665b8f565720d79b68cac62d40246be94fc6cd314b1bb3656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2mdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-ggt8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:52Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.214767 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aed6bb9-78c2-410b-9c58-b60ab22a7bd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70d014e7227746c46b30f8f5a1f307a422d2fa0d4b98d98bfe5ba6217223489e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://104d730ea666ffd2d639ba7fc8d1ac5b444586788a62656fd260cb249f82b1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09dbec85547c7170bb9551e5657876814d48528e3047daf3547711a563d2b6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8bf1de0d8bdc0a20bc42ba5097d849ae0f507e5fbc18fb17b4ff3650e46ff0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:52Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.223591 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.223638 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.223648 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.223665 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.223680 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:52Z","lastTransitionTime":"2025-12-10T15:23:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.242019 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n8c6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b56ae78-835a-45da-bc46-5adff2bdf9fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b9a1485f6d0c0e53fd3505623b2788476bc00abea2f11650386eac40e449bcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9a1485f6d0c0e53fd3505623b2788476bc00abea2f11650386eac40e449bcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a532015d196c588ad
3dd6c43d9fa3772ba2a42bd1b2231f830b60adb0addab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a532015d196c588ad3dd6c43d9fa3772ba2a42bd1b2231f830b60adb0addab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://417e326f541e04df27d540c14ec572bb8f1d749a9e3dc88f483241d69f7d0679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://417e326f541e04df27d540c14ec572bb8f1d749a9e3dc88f483241d69f7d0679\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42867f7a1f6aceaf708ad650b7634bdafb3f850872c540d424916440014e7941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42867f7a1f6aceaf708ad650b7634bdafb3f850872c540d424916440014e7941\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\
\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f2d22bfc26958a3aad99de39f5c831d36c1ba4f563851baafc73735e80d5c91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f2d22bfc26958a3aad99de39f5c831d36c1ba4f563851baafc73735e80d5c91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://548d326fc5c14c16176d4fff3446d93c17322c11d2a34e9e42376b665de3848d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://548d326fc5c14c16176d4fff3446d93c17322c11d2a34e9e42376b665de3848d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n8c6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:52Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.281178 4755 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-image-registry/node-ca-qnmst" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1693fcf1-bef4-4f82-8dd8-f1797b03f5e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8fd6a2ab2b9557574951c0ebbccd663fe576262b0de2c3c655427c977f62d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hzdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qnmst\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:52Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.323809 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d089a8bddfa1f80d29011c7a6bab0a300f7dd44bdb2864f86951ebbb9ebea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:52Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.325497 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.325552 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.325561 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.325577 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.325589 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:52Z","lastTransitionTime":"2025-12-10T15:23:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.361360 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01ddc4056319db1f69268dcae192c3cb9db6c25284305803ae7588e59f77c346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae6bf174cdc9bde18d7c959e976454e73c1e67642f0158d365b79582f63f3cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:52Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.403594 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:52Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.427733 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.427818 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.427843 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.427871 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.427888 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:52Z","lastTransitionTime":"2025-12-10T15:23:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.442586 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zl2tx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"796da6d5-6ccd-4786-a03e-9a8e47a55031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de63a123c46563bd8cd07e669d192bc8b019a889b9bdb7af1b988872c8f1fc48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzg4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zl2tx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:52Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.479361 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b132a8b9-1c99-414d-8773-229bf36b305d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5a5a59e9f156fb791ec822c2d5efe3fc6ec0e84bfcb2b6f5da81396951984c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2mdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40c4bdaa23a60a665b8f565720d79b68cac62d40246be94fc6cd314b1bb3656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2mdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ggt8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:52Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.520100 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aed6bb9-78c2-410b-9c58-b60ab22a7bd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70d014e7227746c46b30f8f5a1f307a422d2fa0d4b98d98bfe5ba6217223489e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://104d730ea666ffd2d639ba7fc8d1ac5b444586788a62656fd260cb249f82b1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09dbec85547c7170bb9551e5657876814d48528e3047daf3547711a563d2b6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0
bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8bf1de0d8bdc0a20bc42ba5097d849ae0f507e5fbc18fb17b4ff3650e46ff0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:52Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.530254 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.530309 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.530321 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.530340 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.530355 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:52Z","lastTransitionTime":"2025-12-10T15:23:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.561378 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n8c6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b56ae78-835a-45da-bc46-5adff2bdf9fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8957db8ccef7b3c449920471d345aa81ce9a7ab9be36b2350ec428aebec7ac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b9a1485f6d0c0e53fd3505623b2788476bc00abea2f11650386eac40e449bcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9a1485f6d0c0e53fd3505623b2788476bc00abea2f11650386eac40e449bcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a532015d196c588ad3dd6c43d9fa3772ba2a42bd1b2231f830b60adb0addab0\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a532015d196c588ad3dd6c43d9fa3772ba2a42bd1b2231f830b60adb0addab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://417e326f541e04df27d540c14ec572bb8f1d749a9e3dc88f483241d69f7d0679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://417e326f541e04df27d540c14ec572bb8f1d749a9e3dc88f483241d69f7d0679\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42867f7a1f6aceaf708ad650b7634bdafb3f850872c540d424916440014e7941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42867f7a1f6aceaf708ad650b7634bdafb3f850872c540d424916440014e7941\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f2d22bfc26958a3aad99de39f5c831d36c1ba4f563851baafc73735e80d5c91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f2d22bfc26958a3aad99de39f5c831d36c1ba4f563851baafc73735e80d5c91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://548d326fc5c14c16176d4fff3446d93c17322c11d2a34e9e42376b665de3848d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://548d326fc5c14c16176d4fff3446d93c17322c11d2a34e9e42376b665de3848d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n8c6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:52Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.600383 4755 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-qnmst" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1693fcf1-bef4-4f82-8dd8-f1797b03f5e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8fd6a2ab2b9557574951c0ebbccd663fe576262b0de2c3c655427c977f62d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hzdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qnmst\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:52Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.632059 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.632100 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.632111 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.632128 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.632147 4755 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:52Z","lastTransitionTime":"2025-12-10T15:23:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.647532 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fa03060-a6e0-4aad-9aa1-43b1a0d00c85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a823333ed5cb5de3988d25e50e4b7a0f9071c76fb39c22760a4f2acd5eb455d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d828f3f3b90a2dcb1c1908e6a686368af5b0d715b3251e4b8fcf3c8818ec75a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://172ab3fce08c8ba4095ff4095c89364a778644752bd7bb6c178d6e3ebcface69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfbb8e350ad18b78a6bcf6cfa4eb8f2fad90f970dba08a8b1b2026af6f255e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4b2bf0c88a16fd6b4b45a20730a92292895dc9f29ce756d347c302b25a8612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55da309288bc2c7ff1f22da0569d18155796700c200fe3ed508f6f8179756865\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55da309288bc2c7ff1f22da0569d18155796700c200fe3ed508f6f8179756865\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219c531368bb5379f3726d94ab0afc0f33eedbf91b0a4dbbe0757c6e232f79ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd
6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://219c531368bb5379f3726d94ab0afc0f33eedbf91b0a4dbbe0757c6e232f79ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f9da8f3d4f85ec83c16b71ecd983ff4f48049eb8e73461a2b46c4f66d71ed06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9da8f3d4f85ec83c16b71ecd983ff4f48049eb8e73461a2b46c4f66d71ed06e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:52Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.687276 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:52Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.724816 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:52Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.734680 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.734724 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.734735 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.734753 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.734764 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:52Z","lastTransitionTime":"2025-12-10T15:23:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.758224 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wv8fh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d150a22e-c59a-4376-a5c8-db4085ea0be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58d106c5d1b9525ec821d009ca556449cd8d7f0e1b9c8ec7dd969df996e73625\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfw9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wv8fh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:52Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.801230 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e6db97-7e22-4fec-924e-20e90f463887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef3871ddc16d2fc1b89a6f120390cf066424bd9ed4cca98f8dea4430c40c6c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08895c3ab20248c4bd70ccfae6442bdf8e76a602e605b043f0f00166624ada17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b2467645116856afa1e1e37501c7b453b03da193a96d1075886b85839a86ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16e58653dca08c9b33671279553975bca9507ffeeefea161119cec624813f167\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea18ef6c319336321d97c2a55449903b837b5a47da785da3ff5308244751943\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:52Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.836743 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.836783 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.836796 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.836812 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 
15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.836825 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:52Z","lastTransitionTime":"2025-12-10T15:23:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.841260 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:52Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.887312 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b1da51a-99c9-4f8e-920d-ce0973af6370\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6eb065dc6c0cc8914cb95553eb2683d894fb9a4e78ce7fac73bcce8d7f6cced9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e547993b9f2fa37bf924f909c47b62eb0cc02b659596b1cad9bbc42fdde8f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ba47683cc23d5b531a45f0658b6a9378650400b35b5372642b0430a5ac503f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a75407e83508af9adebb09c6466a966dd791d29f690c539656f9bd3396d7031\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://602b4e49987fa2cc6b54b822110aececbdddaf2bce8f27cce4ed906768d45791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://335bcab3a79f09796e97560365e1211fb30ddf288f4773c05ab353197add4365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d89ebb4c7085b2d1d38908446db4cfa84ccbb4e5
cf150249d1b2b508631e72d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59bf59d1b7fbc365a916fbbceca7ae30b7ebc754b34f2f7a34c2e21e1e1d2166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://375efb27cf6bef06a6a0ffc8f6b2963e1b9ffbca18b3c8e7b402bc552a411f6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://375efb27cf6bef06a6a0ffc8f6b2963e1b9ffbca18b3c8e7b402bc552a411f6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6lfvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:52Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.939557 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.939599 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.939610 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.939628 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:52 crc kubenswrapper[4755]: I1210 15:23:52.939640 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:52Z","lastTransitionTime":"2025-12-10T15:23:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:23:53 crc kubenswrapper[4755]: I1210 15:23:53.032456 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"c371b199c3643a68ea5eba935eb76a1b7e8a4027c9292a1324116ccfc14742ad"} Dec 10 15:23:53 crc kubenswrapper[4755]: I1210 15:23:53.033147 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" Dec 10 15:23:53 crc kubenswrapper[4755]: I1210 15:23:53.033180 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" Dec 10 15:23:53 crc kubenswrapper[4755]: I1210 15:23:53.042137 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:53 crc kubenswrapper[4755]: I1210 15:23:53.042170 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:53 crc kubenswrapper[4755]: I1210 15:23:53.042179 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:53 crc kubenswrapper[4755]: I1210 15:23:53.042194 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:53 crc kubenswrapper[4755]: I1210 15:23:53.042205 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:53Z","lastTransitionTime":"2025-12-10T15:23:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:23:53 crc kubenswrapper[4755]: I1210 15:23:53.053228 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fa03060-a6e0-4aad-9aa1-43b1a0d00c85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a823333ed5cb5de3988d25e50e4b7a0f9071c76fb39c22760a4f2acd5eb455d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d828f3f3b90a2dcb1c1908e6a686368af5b0d715b3251e4b8fcf3c8818ec75a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://172ab3fce08c8ba4095ff4095c89364a778644752bd7bb6c178d6e3ebcface69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfbb8e350ad18b78a6bcf6cfa4eb8f2fad90f970dba08a8b1b2026af6f255e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4b2bf0c88a16fd6b4b45a20730a92292895dc9f29ce756d347c302b25a8612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55da309288bc2c7ff1f22da0569d18155796700c200fe3ed508f6f8179756865\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55da309288bc2c7ff1f22da0569d18155796700c200fe3ed508f6f8179756865\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219c531368bb5379f3726d94ab0afc0f33eedbf91b0a4dbbe0757c6e232f79ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://219c531368bb5379f3726d94ab0afc0f33eedbf91b0a4dbbe0757c6e232f79ff\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f9da8f3d4f85ec83c16b71ecd983ff4f48049eb8e73461a2b46c4f66d71ed06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9da8f3d4f85ec83c16b71ecd983ff4f48049eb8e73461a2b46c4f66d71ed06e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:53Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:53 crc kubenswrapper[4755]: I1210 15:23:53.055126 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" Dec 10 15:23:53 crc kubenswrapper[4755]: I1210 15:23:53.065772 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:53Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:53 crc kubenswrapper[4755]: I1210 15:23:53.077662 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c371b199c3643a68ea5eba935eb76a1b7e8a4027c9292a1324116ccfc14742ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:53Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:53 crc kubenswrapper[4755]: I1210 15:23:53.088149 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wv8fh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d150a22e-c59a-4376-a5c8-db4085ea0be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58d106c5d1b9525ec821d009ca556449cd8d7f0e1b9c8ec7dd969df996e73625\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfw9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wv8fh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:53Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:53 crc kubenswrapper[4755]: I1210 15:23:53.102539 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e6db97-7e22-4fec-924e-20e90f463887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef3871ddc16d2fc1b89a6f120390cf066424bd9ed4cca98f8dea4430c40c6c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08895c3ab20248c4bd70ccfae6442bdf8e76a602e605b043f0f00166624ada17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b2467645116856afa1e1e37501c7b453b03da193a96d1075886b85839a86ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16e58653dca08c9b33671279553975bca9507ffeeefea161119cec624813f167\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea18ef6c319336321d97c2a55449903b837b5a47da785da3ff5308244751943\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:53Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:53 crc kubenswrapper[4755]: I1210 15:23:53.121160 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:53Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:53 crc kubenswrapper[4755]: I1210 15:23:53.145238 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:53 crc kubenswrapper[4755]: I1210 15:23:53.146426 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:53 crc kubenswrapper[4755]: I1210 15:23:53.146544 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:53 crc kubenswrapper[4755]: I1210 15:23:53.146565 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:53 crc kubenswrapper[4755]: I1210 15:23:53.146587 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:53Z","lastTransitionTime":"2025-12-10T15:23:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:23:53 crc kubenswrapper[4755]: I1210 15:23:53.165077 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b1da51a-99c9-4f8e-920d-ce0973af6370\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6eb065dc6c0cc8914cb95553eb2683d894fb9a4e78ce7fac73bcce8d7f6cced9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e547993b9f2fa37bf924f909c47b62eb0cc02b659596b1cad9bbc42fdde8f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"cont
ainerID\\\":\\\"cri-o://e9ba47683cc23d5b531a45f0658b6a9378650400b35b5372642b0430a5ac503f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a75407e83508af9adebb09c6466a966dd791d29f690c539656f9bd3396d7031\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://602b4e49987fa2cc6b54b822110aececbdddaf2bce8f27cce4ed906768d45791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://335bcab3a79f09796e97560365e1211fb30ddf288f4773c05ab353197add4365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d89ebb4c7085b2d1d38908446db4cfa84ccbb4e5cf150249d1b2b508631e72d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkub
e-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59bf59d1b7fbc365a916fbbceca7ae30b7ebc754b34f2f7a34c2e21e1e1d2166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://375efb27cf6bef06a6a0ffc8f6b2963e1b9ffbca18b3c8e7b402bc552a411f6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://375efb27cf6bef06a6a0ffc8f6b2963e1b9ffbca18b3c8e7b402bc552a411f6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6lfvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:53Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:53 crc kubenswrapper[4755]: I1210 15:23:53.201973 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zl2tx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"796da6d5-6ccd-4786-a03e-9a8e47a55031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de63a123c46563bd8cd07e669d192bc8b019a889b9bdb7af1b988872c8f1fc48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzg4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zl2tx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:53Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:53 crc kubenswrapper[4755]: I1210 15:23:53.238000 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b132a8b9-1c99-414d-8773-229bf36b305d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5a5a59e9f156fb791ec822c2d5efe3fc6ec0e84bfcb2b6f5da81396951984c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2mdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40c4bdaa23a60a665b8f565720d79b68cac62d40246be94fc6cd314b1bb3656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2mdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-ggt8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:53Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:53 crc kubenswrapper[4755]: I1210 15:23:53.248457 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:53 crc kubenswrapper[4755]: I1210 15:23:53.248509 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:53 crc kubenswrapper[4755]: I1210 15:23:53.248520 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:53 crc kubenswrapper[4755]: I1210 15:23:53.248536 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:53 crc kubenswrapper[4755]: I1210 15:23:53.248548 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:53Z","lastTransitionTime":"2025-12-10T15:23:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:23:53 crc kubenswrapper[4755]: I1210 15:23:53.281444 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d089a8bddfa1f80d29011c7a6bab0a300f7dd44bdb2864f86951ebbb9ebea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:53Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:53 crc kubenswrapper[4755]: I1210 15:23:53.338896 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01ddc4056319db1f69268dcae192c3cb9db6c25284305803ae7588e59f77c346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae6bf174cdc9bde18d7c959e976454e73c1e67642f0158d365b79582f63f3cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:53Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:53 crc kubenswrapper[4755]: I1210 15:23:53.351369 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:53 crc kubenswrapper[4755]: I1210 15:23:53.351419 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:53 crc kubenswrapper[4755]: I1210 15:23:53.351431 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:53 crc kubenswrapper[4755]: I1210 15:23:53.351451 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:53 crc kubenswrapper[4755]: I1210 15:23:53.351480 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:53Z","lastTransitionTime":"2025-12-10T15:23:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:23:53 crc kubenswrapper[4755]: I1210 15:23:53.362132 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:53Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:53 crc kubenswrapper[4755]: I1210 15:23:53.404975 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aed6bb9-78c2-410b-9c58-b60ab22a7bd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70d014e7227746c46b30f8f5a1f307a422d2fa0d4b98d98bfe5ba6217223489e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://104d730ea666ffd2d639ba7fc8d1ac5b444586788a62656fd260cb249f82b1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09dbec85547c7170bb9551e5657876814d48528e3047daf3547711a563d2b6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8bf1de0d8bdc0a20bc42ba5097d849ae0f507e5fbc18fb17b4ff3650e46ff0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:53Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:53 crc kubenswrapper[4755]: I1210 15:23:53.443894 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n8c6s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b56ae78-835a-45da-bc46-5adff2bdf9fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8957db8ccef7b3c449920471d345aa81ce9a7ab9be36b2350ec428aebec7ac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b9a1485f6d0c0e53fd3505623b2788476bc00abea2f11650386eac40e449bcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9a1485f6d0c0e53fd3505623b2788476bc00abea2f11650386eac40e449bcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a532015d196c588ad3dd6c43d9fa3772ba2a42bd1b2231f830b60adb0addab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a532015d196c588ad3dd6c43d9fa3772ba2a42bd1b2231f830b60adb0addab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://417e326f541e04df27d540c14ec572bb8f1d749a9e3dc88f483241d69f7d0679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://417e326f541e04df27d540c14ec572bb8f1d749a9e3dc88f483241d69f7d0679\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42867f7a1f6aceaf708ad650b7634bdafb3f850872c540d424916440014e7941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42867f7a1f6aceaf708ad650b7634bdafb3f850872c540d424916440014e7941\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f2d22bfc26958a3aad99de39f5c831d36c1ba4f563851baafc73735e80d5c91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f2d22bfc26958a3aad99de39f5c831d36c1ba4f563851baafc73735e80d5c91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://548d326fc5c14c16176d4fff3446d93c17322c11d2a34e9e42376b665de3848d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://548d326fc5c14c16176d4fff3446d93c17322c11d2a34e9e42376b665de3848d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n8c6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:53Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:53 crc kubenswrapper[4755]: I1210 15:23:53.454398 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:53 crc kubenswrapper[4755]: I1210 15:23:53.454490 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:53 crc 
kubenswrapper[4755]: I1210 15:23:53.454507 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:53 crc kubenswrapper[4755]: I1210 15:23:53.454526 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:53 crc kubenswrapper[4755]: I1210 15:23:53.454537 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:53Z","lastTransitionTime":"2025-12-10T15:23:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:23:53 crc kubenswrapper[4755]: I1210 15:23:53.478778 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qnmst" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1693fcf1-bef4-4f82-8dd8-f1797b03f5e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8fd6a2ab2b9557574951c0ebbccd663fe576262b0de2c3c655427c977f62d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hzdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qnmst\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:53Z is after 2025-08-24T17:21:41Z" Dec 
10 15:23:53 crc kubenswrapper[4755]: I1210 15:23:53.534312 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b1da51a-99c9-4f8e-920d-ce0973af6370\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6eb065dc6c0cc8914cb95553eb2683d894fb9a4e78ce7fac73bcce8d7f6cced9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e547993b9f2fa37bf924f909c47b62eb0cc02b659596b1cad9bbc42fdde8f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ba47683cc23d5b531a45f0658
b6a9378650400b35b5372642b0430a5ac503f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a75407e83508af9adebb09c6466a966dd791d29f690c539656f9bd3396d7031\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://602b4e49987fa2cc6b54b822110aececbdddaf2bce8f27cce4ed906768d45791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://335bcab3a79f09796e97560365e1211fb30ddf288f4773c05ab353197add4365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d89ebb4c7085b2d1d38908446db4cfa84ccbb4e5cf150249d1b2b508631e72d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"na
me\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59bf59d1b7fbc365a916fbbceca7ae30b7ebc754b34f2f7a34c2e21e1e1d2166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://375efb27cf6bef06a6a0ffc8f6b2963e1b9ffbca18b3c8e7b402bc552a411f6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://375efb27cf6bef06a6a0ffc8f6b2963e1b9ffbca18b3c8e7b402bc552a411f6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6lfvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:53Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:53 crc kubenswrapper[4755]: I1210 15:23:53.556835 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:53 crc kubenswrapper[4755]: I1210 15:23:53.556925 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:53 crc kubenswrapper[4755]: I1210 15:23:53.556937 4755 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:53 crc kubenswrapper[4755]: I1210 15:23:53.556951 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:53 crc kubenswrapper[4755]: I1210 15:23:53.556962 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:53Z","lastTransitionTime":"2025-12-10T15:23:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:23:53 crc kubenswrapper[4755]: I1210 15:23:53.564194 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e6db97-7e22-4fec-924e-20e90f463887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef3871ddc16d2fc1b89a6f120390cf066424bd9ed4cca98f8dea4430c40c6c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08895c3ab20248c4bd70ccfae6442bdf8e76a602e605b043f0f00166624ada17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b2467645116856afa1e1e37501c7b453b03da1
93a96d1075886b85839a86ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16e58653dca08c9b33671279553975bca9507ffeeefea161119cec624813f167\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea18ef6c319336321d97c2a55449903b837b5a47da785da3ff5308244751943\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:53Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:53 crc kubenswrapper[4755]: I1210 15:23:53.604991 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:53Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:53 crc kubenswrapper[4755]: I1210 15:23:53.641395 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:53Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:53 crc kubenswrapper[4755]: I1210 15:23:53.658701 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:53 crc kubenswrapper[4755]: I1210 15:23:53.658734 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:53 crc kubenswrapper[4755]: I1210 15:23:53.658751 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:53 crc kubenswrapper[4755]: I1210 15:23:53.658768 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:53 crc kubenswrapper[4755]: I1210 15:23:53.658777 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:53Z","lastTransitionTime":"2025-12-10T15:23:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:23:53 crc kubenswrapper[4755]: I1210 15:23:53.681076 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zl2tx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"796da6d5-6ccd-4786-a03e-9a8e47a55031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de63a123c46563bd8cd07e669d192bc8b019a889b9bdb7af1b988872c8f1fc48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzg4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zl2tx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:53Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:53 crc kubenswrapper[4755]: I1210 15:23:53.719117 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b132a8b9-1c99-414d-8773-229bf36b305d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5a5a59e9f156fb791ec822c2d5efe3fc6ec0e84bfcb2b6f5da81396951984c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2mdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40c4bdaa23a60a665b8f565720d79b68cac62d40246be94fc6cd314b1bb3656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2mdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ggt8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:53Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:53 crc kubenswrapper[4755]: I1210 15:23:53.757132 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 15:23:53 crc kubenswrapper[4755]: E1210 15:23:53.757329 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 15:23:53 crc kubenswrapper[4755]: I1210 15:23:53.757669 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 15:23:53 crc kubenswrapper[4755]: E1210 15:23:53.757806 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 15:23:53 crc kubenswrapper[4755]: I1210 15:23:53.757870 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 15:23:53 crc kubenswrapper[4755]: E1210 15:23:53.757926 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 15:23:53 crc kubenswrapper[4755]: I1210 15:23:53.760584 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:53 crc kubenswrapper[4755]: I1210 15:23:53.760613 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:53 crc kubenswrapper[4755]: I1210 15:23:53.760625 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:53 crc kubenswrapper[4755]: I1210 15:23:53.760639 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:53 crc kubenswrapper[4755]: I1210 15:23:53.760649 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:53Z","lastTransitionTime":"2025-12-10T15:23:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:23:53 crc kubenswrapper[4755]: I1210 15:23:53.762353 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d089a8bddfa1f80d29011c7a6bab0a300f7dd44bdb2864f86951ebbb9ebea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2025-12-10T15:23:53Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:53 crc kubenswrapper[4755]: I1210 15:23:53.801411 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01ddc4056319db1f69268dcae192c3cb9db6c25284305803ae7588e59f77c346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae6bf174cdc9bde18d7c959e976454e73c1e67642f0158d365b79582f63f3cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:53Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:53 crc kubenswrapper[4755]: I1210 15:23:53.847776 4755 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aed6bb9-78c2-410b-9c58-b60ab22a7bd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70d014e7227746c46b30f8f5a1f307a422d2fa0d4b98d98bfe5ba6217223489e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://104d730ea666ffd2d639ba7fc8d1ac5b444586788a62656fd260cb249f82b1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09dbec85547c7170bb9551e5657876814d48528e3047daf3547711a563d2b6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8bf1de0d8bdc0a20bc42ba5097d849ae0f507e5fbc18fb17b
4ff3650e46ff0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:53Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:53 crc kubenswrapper[4755]: I1210 15:23:53.863185 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:53 crc kubenswrapper[4755]: I1210 15:23:53.863221 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:53 crc kubenswrapper[4755]: I1210 15:23:53.863230 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:53 crc kubenswrapper[4755]: I1210 15:23:53.863245 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:53 crc kubenswrapper[4755]: I1210 15:23:53.863254 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:53Z","lastTransitionTime":"2025-12-10T15:23:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:23:53 crc kubenswrapper[4755]: I1210 15:23:53.884704 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n8c6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b56ae78-835a-45da-bc46-5adff2bdf9fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8957db8ccef7b3c449920471d345aa81ce9a7ab9be36b2350ec428aebec7ac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b9a1485f6d0c0e53fd3505623b2788476bc00abea2f11650386eac40e449bcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9a1485f6d0c0e53fd3505623b2788476bc00abea2f11650386eac40e449bcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a532015d196c588ad3dd6c43d9fa3772ba2a42bd1b2231f830b60adb0addab0\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a532015d196c588ad3dd6c43d9fa3772ba2a42bd1b2231f830b60adb0addab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://417e326f541e04df27d540c14ec572bb8f1d749a9e3dc88f483241d69f7d0679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://417e326f541e04df27d540c14ec572bb8f1d749a9e3dc88f483241d69f7d0679\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42867f7a1f6aceaf708ad650b7634bdafb3f850872c540d424916440014e7941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42867f7a1f6aceaf708ad650b7634bdafb3f850872c540d424916440014e7941\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f2d22bfc26958a3aad99de39f5c831d36c1ba4f563851baafc73735e80d5c91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f2d22bfc26958a3aad99de39f5c831d36c1ba4f563851baafc73735e80d5c91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://548d326fc5c14c16176d4fff3446d93c17322c11d2a34e9e42376b665de3848d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://548d326fc5c14c16176d4fff3446d93c17322c11d2a34e9e42376b665de3848d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n8c6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:53Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:53 crc kubenswrapper[4755]: I1210 15:23:53.919112 4755 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-qnmst" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1693fcf1-bef4-4f82-8dd8-f1797b03f5e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8fd6a2ab2b9557574951c0ebbccd663fe576262b0de2c3c655427c977f62d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hzdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qnmst\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:53Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:53 crc kubenswrapper[4755]: I1210 15:23:53.960190 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wv8fh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d150a22e-c59a-4376-a5c8-db4085ea0be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58d106c5d1b9525ec821d009ca556449cd8d7f0e1b9c8ec7dd969df996e73625\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfw9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wv8fh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:53Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:53 crc kubenswrapper[4755]: I1210 15:23:53.964813 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:53 crc kubenswrapper[4755]: I1210 15:23:53.964857 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:53 crc kubenswrapper[4755]: I1210 15:23:53.964871 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:53 crc kubenswrapper[4755]: I1210 15:23:53.964886 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:53 crc kubenswrapper[4755]: I1210 15:23:53.964896 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:53Z","lastTransitionTime":"2025-12-10T15:23:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:23:54 crc kubenswrapper[4755]: I1210 15:23:54.008743 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fa03060-a6e0-4aad-9aa1-43b1a0d00c85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a823333ed5cb5de3988d25e50e4b7a0f9071c76fb39c22760a4f2acd5eb455d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d828f3f3b90a2dcb1c1908e6a686368af5b0d715b3251e4b8fcf3c8818ec75a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://172ab3fce08c8ba4095ff4095c89364a778644752bd7bb6c178d6e3ebcface69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-1
0T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfbb8e350ad18b78a6bcf6cfa4eb8f2fad90f970dba08a8b1b2026af6f255e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4b2bf0c88a16fd6b4b45a20730a92292895dc9f29ce756d347c302b25a8612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55da309288bc2c7ff1f22da0569d18155796700c200fe3ed508f6f8179756865\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55da309288bc2c7ff1f22da0569d18155796700c200fe3ed508f6f8179756865\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219c531368bb5379f3726d94ab0afc0f33eedbf91b0a4dbbe0757c6e232f79ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://219c531368bb5379f3726d94ab0afc0f33eedbf91b0
a4dbbe0757c6e232f79ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f9da8f3d4f85ec83c16b71ecd983ff4f48049eb8e73461a2b46c4f66d71ed06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9da8f3d4f85ec83c16b71ecd983ff4f48049eb8e73461a2b46c4f66d71ed06e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:54Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:54 crc kubenswrapper[4755]: I1210 15:23:54.036659 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6lfvk_4b1da51a-99c9-4f8e-920d-ce0973af6370/ovnkube-controller/0.log" Dec 10 15:23:54 crc kubenswrapper[4755]: I1210 15:23:54.039520 4755 generic.go:334] "Generic (PLEG): container finished" podID="4b1da51a-99c9-4f8e-920d-ce0973af6370" containerID="d89ebb4c7085b2d1d38908446db4cfa84ccbb4e5cf150249d1b2b508631e72d6" exitCode=1 Dec 10 15:23:54 crc kubenswrapper[4755]: I1210 15:23:54.039629 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" event={"ID":"4b1da51a-99c9-4f8e-920d-ce0973af6370","Type":"ContainerDied","Data":"d89ebb4c7085b2d1d38908446db4cfa84ccbb4e5cf150249d1b2b508631e72d6"} Dec 10 15:23:54 crc kubenswrapper[4755]: I1210 15:23:54.040273 4755 scope.go:117] "RemoveContainer" containerID="d89ebb4c7085b2d1d38908446db4cfa84ccbb4e5cf150249d1b2b508631e72d6" Dec 10 15:23:54 crc kubenswrapper[4755]: I1210 15:23:54.040366 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:54Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:54 crc kubenswrapper[4755]: I1210 15:23:54.067518 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:54 crc kubenswrapper[4755]: I1210 15:23:54.067565 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:54 crc kubenswrapper[4755]: I1210 15:23:54.067576 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:54 crc kubenswrapper[4755]: I1210 15:23:54.067590 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:54 crc kubenswrapper[4755]: I1210 15:23:54.067599 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:54Z","lastTransitionTime":"2025-12-10T15:23:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:23:54 crc kubenswrapper[4755]: I1210 15:23:54.079246 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c371b199c3643a68ea5eba935eb76a1b7e8a4027c9292a1324116ccfc14742ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:54Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:54 crc kubenswrapper[4755]: I1210 15:23:54.129827 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fa03060-a6e0-4aad-9aa1-43b1a0d00c85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a823333ed5cb5de3988d25e50e4b7a0f9071c76fb39c22760a4f2acd5eb455d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d828f3f3b90a2dcb1c1908e6a686368af5b0d715b3251e4b8fcf3c8818ec75a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://172ab3fce08c8ba4095ff4095c89364a778644752bd7bb6c178d6e3ebcface69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfbb8e350ad18b78a6bcf6cfa4eb8f2fad90f97
0dba08a8b1b2026af6f255e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4b2bf0c88a16fd6b4b45a20730a92292895dc9f29ce756d347c302b25a8612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55da309288bc2c7ff1f22da0569d18155796700c200fe3ed508f6f8179756865\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55da309288bc2c7ff1f22da0569d18155796700c200fe3ed508f6f8179756865\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219c531368bb5379f3726d94ab0afc0f33eedbf91b0a4dbbe0757c6e232f79ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://219c531368bb5379f3726d94ab0afc0f33eedbf91b0a4dbbe0757c6e232f79ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f9da8f3d4f85ec83c16b71ecd983ff4f48049eb8e73461a2b46c4f66d71ed06e\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9da8f3d4f85ec83c16b71ecd983ff4f48049eb8e73461a2b46c4f66d71ed06e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:54Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:54 crc kubenswrapper[4755]: I1210 15:23:54.159998 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:54Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:54 crc kubenswrapper[4755]: I1210 15:23:54.169658 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:54 crc kubenswrapper[4755]: I1210 15:23:54.169695 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:54 crc kubenswrapper[4755]: I1210 15:23:54.169706 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:54 crc kubenswrapper[4755]: I1210 15:23:54.169722 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:54 crc kubenswrapper[4755]: I1210 15:23:54.169731 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:54Z","lastTransitionTime":"2025-12-10T15:23:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:23:54 crc kubenswrapper[4755]: I1210 15:23:54.202061 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c371b199c3643a68ea5eba935eb76a1b7e8a4027c9292a1324116ccfc14742ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:54Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:54 crc kubenswrapper[4755]: I1210 15:23:54.240127 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wv8fh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d150a22e-c59a-4376-a5c8-db4085ea0be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58d106c5d1b9525ec821d009ca556449cd8d7f0e1b9c8ec7dd969df996e73625\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfw9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wv8fh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:54Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:54 crc kubenswrapper[4755]: I1210 15:23:54.272281 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:54 crc kubenswrapper[4755]: I1210 15:23:54.272327 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:54 crc kubenswrapper[4755]: I1210 15:23:54.272341 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:54 crc kubenswrapper[4755]: I1210 15:23:54.272359 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:54 crc kubenswrapper[4755]: I1210 15:23:54.272370 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:54Z","lastTransitionTime":"2025-12-10T15:23:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:23:54 crc kubenswrapper[4755]: I1210 15:23:54.286394 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e6db97-7e22-4fec-924e-20e90f463887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef3871ddc16d2fc1b89a6f120390cf066424bd9ed4cca98f8dea4430c40c6c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08895c3ab20248c4bd70ccfae6442bdf8e76a602e605b043f0f00166624ada17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b2467645116856afa1e1e37501c7b453b03da193a96d1075886b85839a86ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name
\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16e58653dca08c9b33671279553975bca9507ffeeefea161119cec624813f167\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea18ef6c319336321d97c2a55449903b837b5a47da785da3ff5308244751943\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:54Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:54 crc kubenswrapper[4755]: I1210 15:23:54.324584 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:54Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:54 crc kubenswrapper[4755]: I1210 15:23:54.366219 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b1da51a-99c9-4f8e-920d-ce0973af6370\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6eb065dc6c0cc8914cb95553eb2683d894fb9a4e78ce7fac73bcce8d7f6cced9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e547993b9f2fa37bf924f909c47b62eb0cc02b659596b1cad9bbc42fdde8f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ba47683cc23d5b531a45f0658b6a9378650400b35b5372642b0430a5ac503f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a75407e83508af9adebb09c6466a966dd791d29f690c539656f9bd3396d7031\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://602b4e49987fa2cc6b54b822110aececbdddaf2bce8f27cce4ed906768d45791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://335bcab3a79f09796e97560365e1211fb30ddf288f4773c05ab353197add4365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d89ebb4c7085b2d1d38908446db4cfa84ccbb4e5
cf150249d1b2b508631e72d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d89ebb4c7085b2d1d38908446db4cfa84ccbb4e5cf150249d1b2b508631e72d6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T15:23:53Z\\\",\\\"message\\\":\\\"sip/v1/apis/informers/externalversions/factory.go:140\\\\nI1210 15:23:53.306420 6025 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1210 15:23:53.306456 6025 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1210 15:23:53.306496 6025 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1210 15:23:53.306514 6025 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1210 15:23:53.306525 6025 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1210 15:23:53.306537 6025 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1210 15:23:53.306542 6025 handler.go:208] Removed *v1.Node event handler 7\\\\nI1210 15:23:53.306551 6025 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1210 15:23:53.306555 6025 handler.go:208] Removed *v1.Node event handler 2\\\\nI1210 15:23:53.306556 6025 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1210 15:23:53.306565 6025 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1210 15:23:53.306573 6025 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1210 15:23:53.306573 6025 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1210 15:23:53.306573 6025 factory.go:656] Stopping watch factory\\\\nI1210 15:23:53.306591 6025 handler.go:208] Removed *v1.EgressFirewall 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59bf59d1b7fbc365a916fbbceca7ae30b7ebc754b34f2f7a34c2e21e1e1d2166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://375efb27cf6bef06a6a0ffc8f6b2963e1b9ffbca18b3c8e7b402bc552a411f6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://375efb27cf6bef06a6a0ffc8f6b2963e1b9ffbca18b3c8e7b402bc552a411f6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6lfvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:54Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:54 crc kubenswrapper[4755]: I1210 15:23:54.373973 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:54 crc kubenswrapper[4755]: I1210 15:23:54.374003 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:54 crc kubenswrapper[4755]: I1210 15:23:54.374011 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:54 crc kubenswrapper[4755]: I1210 15:23:54.374024 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:54 crc kubenswrapper[4755]: I1210 15:23:54.374035 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:54Z","lastTransitionTime":"2025-12-10T15:23:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:23:54 crc kubenswrapper[4755]: I1210 15:23:54.402101 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d089a8bddfa1f80d29011c7a6bab0a300f7dd44bdb2864f86951ebbb9ebea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:54Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:54 crc kubenswrapper[4755]: I1210 15:23:54.443989 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01ddc4056319db1f69268dcae192c3cb9db6c25284305803ae7588e59f77c346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae6bf174cdc9bde18d7c959e976454e73c1e67642f0158d365b79582f63f3cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:54Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:54 crc kubenswrapper[4755]: I1210 15:23:54.476795 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:54 crc kubenswrapper[4755]: I1210 15:23:54.476867 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:54 crc kubenswrapper[4755]: I1210 15:23:54.476943 4755 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 10 15:23:54 crc kubenswrapper[4755]: I1210 15:23:54.476965 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:54 crc kubenswrapper[4755]: I1210 15:23:54.476982 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:54Z","lastTransitionTime":"2025-12-10T15:23:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:23:54 crc kubenswrapper[4755]: I1210 15:23:54.481307 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:54Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:54 crc kubenswrapper[4755]: I1210 15:23:54.522731 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zl2tx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"796da6d5-6ccd-4786-a03e-9a8e47a55031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de63a123c46563bd8cd07e669d192bc8b019a889b9bdb7af1b988872c8f1fc48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzg4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zl2tx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:54Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:54 crc kubenswrapper[4755]: I1210 15:23:54.559437 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b132a8b9-1c99-414d-8773-229bf36b305d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5a5a59e9f156fb791ec822c2d5efe3fc6ec0e84bfcb2b6f5da81396951984c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2mdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-
o://a40c4bdaa23a60a665b8f565720d79b68cac62d40246be94fc6cd314b1bb3656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2mdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ggt8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:54Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:54 crc kubenswrapper[4755]: I1210 15:23:54.579549 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:54 crc kubenswrapper[4755]: I1210 15:23:54.579760 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:54 crc kubenswrapper[4755]: I1210 15:23:54.579905 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:54 crc kubenswrapper[4755]: I1210 15:23:54.580007 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:54 crc kubenswrapper[4755]: I1210 15:23:54.580144 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:54Z","lastTransitionTime":"2025-12-10T15:23:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:23:54 crc kubenswrapper[4755]: I1210 15:23:54.600444 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aed6bb9-78c2-410b-9c58-b60ab22a7bd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70d014e7227746c46b30f8f5a1f307a422d2fa0d4b98d98bfe5ba6217223489e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://104d730ea666ffd2d639ba7fc8d1ac5b444586788a62656fd260cb249f82b1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09dbec85547c7170bb9551e5657876814d48528e3047daf3547711a563d2b6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8bf1de0d8bdc0a20bc42ba5097d849ae0f507e5fbc18fb17b4ff3650e46ff0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:54Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:54 crc kubenswrapper[4755]: I1210 15:23:54.647607 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n8c6s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b56ae78-835a-45da-bc46-5adff2bdf9fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8957db8ccef7b3c449920471d345aa81ce9a7ab9be36b2350ec428aebec7ac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b9a1485f6d0c0e53fd3505623b2788476bc00abea2f11650386eac40e449bcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9a1485f6d0c0e53fd3505623b2788476bc00abea2f11650386eac40e449bcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a532015d196c588ad3dd6c43d9fa3772ba2a42bd1b2231f830b60adb0addab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a532015d196c588ad3dd6c43d9fa3772ba2a42bd1b2231f830b60adb0addab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://417e326f541e04df27d540c14ec572bb8f1d749a9e3dc88f483241d69f7d0679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://417e326f541e04df27d540c14ec572bb8f1d749a9e3dc88f483241d69f7d0679\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42867f7a1f6aceaf708ad650b7634bdafb3f850872c540d424916440014e7941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42867f7a1f6aceaf708ad650b7634bdafb3f850872c540d424916440014e7941\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f2d22bfc26958a3aad99de39f5c831d36c1ba4f563851baafc73735e80d5c91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f2d22bfc26958a3aad99de39f5c831d36c1ba4f563851baafc73735e80d5c91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://548d326fc5c14c16176d4fff3446d93c17322c11d2a34e9e42376b665de3848d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://548d326fc5c14c16176d4fff3446d93c17322c11d2a34e9e42376b665de3848d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n8c6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:54Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:54 crc kubenswrapper[4755]: I1210 15:23:54.682061 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qnmst" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1693fcf1-bef4-4f82-8dd8-f1797b03f5e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8fd6a2ab2b9557574951c0ebbccd663fe576262b0de2c3c655427c977f62d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hzdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qnmst\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:54Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:54 crc kubenswrapper[4755]: I1210 15:23:54.683122 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:54 crc kubenswrapper[4755]: I1210 15:23:54.683158 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:54 crc kubenswrapper[4755]: I1210 15:23:54.683174 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:54 crc kubenswrapper[4755]: I1210 15:23:54.683199 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:54 crc kubenswrapper[4755]: I1210 15:23:54.683215 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:54Z","lastTransitionTime":"2025-12-10T15:23:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:23:54 crc kubenswrapper[4755]: I1210 15:23:54.724094 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aed6bb9-78c2-410b-9c58-b60ab22a7bd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70d014e7227746c46b30f8f5a1f307a422d2fa0d4b98d98bfe5ba6217223489e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://104d730ea666ffd2d639ba7fc8d1ac5b444586788a62656fd260cb249f82b1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09dbec85547c7170bb9551e5657876814d48528e3047daf3547711a563d2b6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8bf1de0d8bdc0a20bc42ba5097d849ae0f507e5fbc18fb17b4ff3650e46ff0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:54Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:54 crc kubenswrapper[4755]: I1210 15:23:54.762249 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n8c6s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b56ae78-835a-45da-bc46-5adff2bdf9fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8957db8ccef7b3c449920471d345aa81ce9a7ab9be36b2350ec428aebec7ac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b9a1485f6d0c0e53fd3505623b2788476bc00abea2f11650386eac40e449bcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9a1485f6d0c0e53fd3505623b2788476bc00abea2f11650386eac40e449bcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a532015d196c588ad3dd6c43d9fa3772ba2a42bd1b2231f830b60adb0addab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a532015d196c588ad3dd6c43d9fa3772ba2a42bd1b2231f830b60adb0addab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://417e326f541e04df27d540c14ec572bb8f1d749a9e3dc88f483241d69f7d0679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://417e326f541e04df27d540c14ec572bb8f1d749a9e3dc88f483241d69f7d0679\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42867f7a1f6aceaf708ad650b7634bdafb3f850872c540d424916440014e7941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42867f7a1f6aceaf708ad650b7634bdafb3f850872c540d424916440014e7941\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f2d22bfc26958a3aad99de39f5c831d36c1ba4f563851baafc73735e80d5c91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f2d22bfc26958a3aad99de39f5c831d36c1ba4f563851baafc73735e80d5c91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://548d326fc5c14c16176d4fff3446d93c17322c11d2a34e9e42376b665de3848d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://548d326fc5c14c16176d4fff3446d93c17322c11d2a34e9e42376b665de3848d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n8c6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:54Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:54 crc kubenswrapper[4755]: I1210 15:23:54.785595 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:54 crc kubenswrapper[4755]: I1210 15:23:54.785792 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:54 crc 
kubenswrapper[4755]: I1210 15:23:54.785861 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:54 crc kubenswrapper[4755]: I1210 15:23:54.785936 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:54 crc kubenswrapper[4755]: I1210 15:23:54.786001 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:54Z","lastTransitionTime":"2025-12-10T15:23:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:23:54 crc kubenswrapper[4755]: I1210 15:23:54.798914 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qnmst" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1693fcf1-bef4-4f82-8dd8-f1797b03f5e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8fd6a2ab2b9557574951c0ebbccd663fe576262b0de2c3c655427c977f62d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hzdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qnmst\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:54Z is after 2025-08-24T17:21:41Z" Dec 
10 15:23:54 crc kubenswrapper[4755]: I1210 15:23:54.845074 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fa03060-a6e0-4aad-9aa1-43b1a0d00c85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a823333ed5cb5de3988d25e50e4b7a0f9071c76fb39c22760a4f2acd5eb455d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d828f3f3b90a2dcb1c1908e6a686368af5b0d715b3251e4b8fcf3c8818ec75a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://172ab3fce08c8ba4095ff4095c89364a778644752bd7bb6c178d6e3ebcface69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\
"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfbb8e350ad18b78a6bcf6cfa4eb8f2fad90f970dba08a8b1b2026af6f255e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4b2bf0c88a16fd6b4b45a20730a92292895dc9f29ce756d347c302b25a8612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55da309288bc2c7ff1f22da0569d18155796700c200fe3ed508f6f8179756865\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55da309288bc2c7ff1f22da0569d18155796700c200fe3ed508f6f8179756865\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219c531368bb5379f3726d94ab0afc0f33eedbf91b0a4dbbe0757c6e232f79ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://219c531368bb5379f3726d94ab0afc0f33eedbf91b0a4dbbe0757c6e232f79ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f9da8f3d4f85ec83c16b71ecd983ff4f48049eb8e73461a2b46c4f66d71ed06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9da8f3d4f85ec83c16b71ecd983ff4f48049eb8e73461a2b46c4f66d71ed06e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:54Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:54 crc kubenswrapper[4755]: I1210 15:23:54.880014 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:54Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:54 crc kubenswrapper[4755]: I1210 15:23:54.888566 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:54 crc kubenswrapper[4755]: I1210 15:23:54.888759 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:54 crc kubenswrapper[4755]: I1210 15:23:54.888848 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:54 crc kubenswrapper[4755]: I1210 15:23:54.888927 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:54 crc kubenswrapper[4755]: I1210 15:23:54.889003 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:54Z","lastTransitionTime":"2025-12-10T15:23:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:23:54 crc kubenswrapper[4755]: I1210 15:23:54.920032 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c371b199c3643a68ea5eba935eb76a1b7e8a4027c9292a1324116ccfc14742ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:54Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:54 crc kubenswrapper[4755]: I1210 15:23:54.962038 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wv8fh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d150a22e-c59a-4376-a5c8-db4085ea0be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58d106c5d1b9525ec821d009ca556449cd8d7f0e1b9c8ec7dd969df996e73625\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfw9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wv8fh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:54Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:54 crc kubenswrapper[4755]: I1210 15:23:54.990988 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:54 crc kubenswrapper[4755]: I1210 15:23:54.991024 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:54 crc kubenswrapper[4755]: I1210 15:23:54.991033 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:54 crc kubenswrapper[4755]: I1210 15:23:54.991050 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:54 crc kubenswrapper[4755]: I1210 15:23:54.991060 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:54Z","lastTransitionTime":"2025-12-10T15:23:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:23:55 crc kubenswrapper[4755]: I1210 15:23:55.001735 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e6db97-7e22-4fec-924e-20e90f463887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef3871ddc16d2fc1b89a6f120390cf066424bd9ed4cca98f8dea4430c40c6c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08895c3ab20248c4bd70ccfae6442bdf8e76a602e605b043f0f00166624ada17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b2467645116856afa1e1e37501c7b453b03da193a96d1075886b85839a86ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name
\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16e58653dca08c9b33671279553975bca9507ffeeefea161119cec624813f167\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea18ef6c319336321d97c2a55449903b837b5a47da785da3ff5308244751943\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:54Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:55 crc kubenswrapper[4755]: I1210 15:23:55.016441 4755 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 10 15:23:55 crc kubenswrapper[4755]: I1210 15:23:55.065464 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:55Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:55 crc kubenswrapper[4755]: I1210 15:23:55.093307 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:55 crc kubenswrapper[4755]: I1210 15:23:55.093341 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:55 crc kubenswrapper[4755]: I1210 15:23:55.093350 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:55 crc kubenswrapper[4755]: I1210 15:23:55.093364 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:55 crc kubenswrapper[4755]: I1210 15:23:55.093373 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:55Z","lastTransitionTime":"2025-12-10T15:23:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:23:55 crc kubenswrapper[4755]: I1210 15:23:55.115180 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b1da51a-99c9-4f8e-920d-ce0973af6370\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6eb065dc6c0cc8914cb95553eb2683d894fb9a4e78ce7fac73bcce8d7f6cced9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e547993b9f2fa37bf924f909c47b62eb0cc02b659596b1cad9bbc42fdde8f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://e9ba47683cc23d5b531a45f0658b6a9378650400b35b5372642b0430a5ac503f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a75407e83508af9adebb09c6466a966dd791d29f690c539656f9bd3396d7031\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://602b4e49987fa2cc6b54b822110aececbdddaf2bce8f27cce4ed906768d45791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://335bcab3a79f09796e97560365e1211fb30ddf288f4773c05ab353197add4365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d89ebb4c7085b2d1d38908446db4cfa84ccbb4e5cf150249d1b2b508631e72d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d89ebb4c7085b2d1d38908446db4cfa84ccbb4e5cf150249d1b2b508631e72d6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T15:23:53Z\\\",\\\"message\\\":\\\"sip/v1/apis/informers/externalversions/factory.go:140\\\\nI1210 15:23:53.306420 6025 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1210 15:23:53.306456 6025 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1210 15:23:53.306496 6025 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1210 15:23:53.306514 6025 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1210 15:23:53.306525 6025 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1210 15:23:53.306537 6025 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1210 15:23:53.306542 6025 handler.go:208] Removed *v1.Node event handler 7\\\\nI1210 15:23:53.306551 6025 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1210 15:23:53.306555 6025 handler.go:208] Removed *v1.Node event handler 2\\\\nI1210 15:23:53.306556 6025 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1210 15:23:53.306565 6025 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1210 15:23:53.306573 6025 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1210 15:23:53.306573 6025 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1210 15:23:53.306573 6025 factory.go:656] Stopping watch factory\\\\nI1210 15:23:53.306591 6025 handler.go:208] Removed *v1.EgressFirewall 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59bf59d1b7fbc365a916fbbceca7ae30b7ebc754b34f2f7a34c2e21e1e1d2166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://375efb27cf6bef06a6a0ffc8f6b2963e1b9ffbca18b3c8e7b402bc552a411f6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://375efb27cf6bef06a6a0ffc8f6b2963e1b9ffbca18b3c8e7b402bc552a411f6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6lfvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:55Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:55 crc kubenswrapper[4755]: I1210 15:23:55.142202 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zl2tx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"796da6d5-6ccd-4786-a03e-9a8e47a55031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de63a123c46563bd8cd07e669d192bc8b019a889b9bdb7af1b988872c8f1fc48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/c
ni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzg4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zl2tx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:55Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:55 crc kubenswrapper[4755]: I1210 15:23:55.181130 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b132a8b9-1c99-414d-8773-229bf36b305d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5a5a59e9f156fb791ec822c2d5efe3fc6ec0e84bfcb2b6f5da81396951984c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2mdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40c4bdaa23a60a665b8f565720d79b68cac62d40246be94fc6cd314b1bb3656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2mdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ggt8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:55Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:55 crc kubenswrapper[4755]: I1210 15:23:55.195181 4755 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:55 crc kubenswrapper[4755]: I1210 15:23:55.195214 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:55 crc kubenswrapper[4755]: I1210 15:23:55.195224 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:55 crc kubenswrapper[4755]: I1210 15:23:55.195240 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:55 crc kubenswrapper[4755]: I1210 15:23:55.195250 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:55Z","lastTransitionTime":"2025-12-10T15:23:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:23:55 crc kubenswrapper[4755]: I1210 15:23:55.221409 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d089a8bddfa1f80d29011c7a6bab0a300f7dd44bdb2864f86951ebbb9ebea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:55Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:55 crc kubenswrapper[4755]: I1210 15:23:55.262001 4755 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01ddc4056319db1f69268dcae192c3cb9db6c25284305803ae7588e59f77c346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae6bf174cdc9bde18d7c959e976454e73c1e67642f0158d365b79582f63f3cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:55Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:55 crc kubenswrapper[4755]: I1210 15:23:55.298094 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:55 crc kubenswrapper[4755]: I1210 15:23:55.298157 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:55 crc kubenswrapper[4755]: 
I1210 15:23:55.298178 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:55 crc kubenswrapper[4755]: I1210 15:23:55.298206 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:55 crc kubenswrapper[4755]: I1210 15:23:55.298224 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:55Z","lastTransitionTime":"2025-12-10T15:23:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:23:55 crc kubenswrapper[4755]: I1210 15:23:55.302613 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:55Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:55 crc kubenswrapper[4755]: I1210 15:23:55.401136 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:55 crc kubenswrapper[4755]: I1210 15:23:55.401212 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:55 crc kubenswrapper[4755]: I1210 15:23:55.401235 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:55 crc kubenswrapper[4755]: I1210 15:23:55.401268 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:55 crc kubenswrapper[4755]: I1210 15:23:55.401291 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:55Z","lastTransitionTime":"2025-12-10T15:23:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:23:55 crc kubenswrapper[4755]: I1210 15:23:55.503697 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:55 crc kubenswrapper[4755]: I1210 15:23:55.504018 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:55 crc kubenswrapper[4755]: I1210 15:23:55.504146 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:55 crc kubenswrapper[4755]: I1210 15:23:55.504247 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:55 crc kubenswrapper[4755]: I1210 15:23:55.504332 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:55Z","lastTransitionTime":"2025-12-10T15:23:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:23:55 crc kubenswrapper[4755]: I1210 15:23:55.607672 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:55 crc kubenswrapper[4755]: I1210 15:23:55.607986 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:55 crc kubenswrapper[4755]: I1210 15:23:55.608155 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:55 crc kubenswrapper[4755]: I1210 15:23:55.608306 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:55 crc kubenswrapper[4755]: I1210 15:23:55.608431 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:55Z","lastTransitionTime":"2025-12-10T15:23:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:23:55 crc kubenswrapper[4755]: I1210 15:23:55.711224 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:55 crc kubenswrapper[4755]: I1210 15:23:55.711975 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:55 crc kubenswrapper[4755]: I1210 15:23:55.712040 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:55 crc kubenswrapper[4755]: I1210 15:23:55.712107 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:55 crc kubenswrapper[4755]: I1210 15:23:55.712190 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:55Z","lastTransitionTime":"2025-12-10T15:23:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:23:55 crc kubenswrapper[4755]: I1210 15:23:55.736754 4755 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 10 15:23:55 crc kubenswrapper[4755]: I1210 15:23:55.757017 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 15:23:55 crc kubenswrapper[4755]: E1210 15:23:55.757147 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 15:23:55 crc kubenswrapper[4755]: I1210 15:23:55.757033 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 15:23:55 crc kubenswrapper[4755]: E1210 15:23:55.757224 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 15:23:55 crc kubenswrapper[4755]: I1210 15:23:55.757028 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 15:23:55 crc kubenswrapper[4755]: E1210 15:23:55.757297 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 15:23:55 crc kubenswrapper[4755]: I1210 15:23:55.815684 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:55 crc kubenswrapper[4755]: I1210 15:23:55.815723 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:55 crc kubenswrapper[4755]: I1210 15:23:55.815744 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:55 crc kubenswrapper[4755]: I1210 15:23:55.815760 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:55 crc kubenswrapper[4755]: I1210 15:23:55.815769 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:55Z","lastTransitionTime":"2025-12-10T15:23:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:23:55 crc kubenswrapper[4755]: I1210 15:23:55.918191 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:55 crc kubenswrapper[4755]: I1210 15:23:55.918446 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:55 crc kubenswrapper[4755]: I1210 15:23:55.918626 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:55 crc kubenswrapper[4755]: I1210 15:23:55.918743 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:55 crc kubenswrapper[4755]: I1210 15:23:55.918846 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:55Z","lastTransitionTime":"2025-12-10T15:23:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.022141 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.022452 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.022559 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.022640 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.022732 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:56Z","lastTransitionTime":"2025-12-10T15:23:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.126214 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.126266 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.126279 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.126304 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.126317 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:56Z","lastTransitionTime":"2025-12-10T15:23:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.185015 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-82wnw"] Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.185632 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-82wnw" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.187506 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.189534 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.205326 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:56Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.228944 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.228978 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.228989 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.229004 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.229013 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:56Z","lastTransitionTime":"2025-12-10T15:23:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.230660 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zl2tx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"796da6d5-6ccd-4786-a03e-9a8e47a55031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de63a123c46563bd8cd07e669d192bc8b019a889b9bdb7af1b988872c8f1fc48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzg4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zl2tx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:56Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.241013 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b132a8b9-1c99-414d-8773-229bf36b305d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5a5a59e9f156fb791ec822c2d5efe3fc6ec0e84bfcb2b6f5da81396951984c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2mdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40c4bdaa23a60a665b8f565720d79b68cac62d40246be94fc6cd314b1bb3656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2mdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ggt8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:56Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.254522 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d089a8bddfa1f80d29011c7a6bab0a300f7dd44bdb2864f86951ebbb9ebea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:56Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.264063 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c0a2f42c-a60e-4350-9ebb-c28d3cbfdad7-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-82wnw\" (UID: \"c0a2f42c-a60e-4350-9ebb-c28d3cbfdad7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-82wnw" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.264249 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-2xbzv\" (UniqueName: \"kubernetes.io/projected/c0a2f42c-a60e-4350-9ebb-c28d3cbfdad7-kube-api-access-2xbzv\") pod \"ovnkube-control-plane-749d76644c-82wnw\" (UID: \"c0a2f42c-a60e-4350-9ebb-c28d3cbfdad7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-82wnw" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.264384 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c0a2f42c-a60e-4350-9ebb-c28d3cbfdad7-env-overrides\") pod \"ovnkube-control-plane-749d76644c-82wnw\" (UID: \"c0a2f42c-a60e-4350-9ebb-c28d3cbfdad7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-82wnw" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.264493 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c0a2f42c-a60e-4350-9ebb-c28d3cbfdad7-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-82wnw\" (UID: \"c0a2f42c-a60e-4350-9ebb-c28d3cbfdad7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-82wnw" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.265886 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01ddc4056319db1f69268dcae192c3cb9db6c25284305803ae7588e59f77c346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae6bf174cdc9bde18d7c959e976454e73c1e67642f0158d365b79582f63f3cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\"
:{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:56Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.276999 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-82wnw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0a2f42c-a60e-4350-9ebb-c28d3cbfdad7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xbzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xbzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-82wnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:56Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.290141 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aed6bb9-78c2-410b-9c58-b60ab22a7bd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70d014e7227746c46b30f8f5a1f307a422d2fa0d4b98d98bfe5ba6217223489e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://104d730ea666ffd2d639ba7fc8d1ac5b444586788a62656fd260cb249f82b1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09dbec85547c7170bb9551e5657876814d48528e3047daf3547711a563d2b6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8bf1de0d8bdc0a20bc42ba5097d849ae0f507e5fbc18fb17b4ff3650e46ff0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:56Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.302619 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n8c6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b56ae78-835a-45da-bc46-5adff2bdf9fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8957db8ccef7b3c449920471d345aa81ce9a7ab9be36b2350ec428aebec7ac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b9a1485f6d0c0e53fd3505623b2788476bc00abea2f11650386
eac40e449bcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9a1485f6d0c0e53fd3505623b2788476bc00abea2f11650386eac40e449bcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a532015d196c588ad3dd6c43d9fa3772ba2a42bd1b2231f830b60adb0addab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a532015d196c588ad3dd6c43d9fa3772ba2a42bd1b2231f830b60adb0addab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://417e326f541e04df27d540c14ec572bb8f1d749a9e3dc88f483241d69f7d0679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://417e326f541e04df27d540c14ec572bb8f1d749a9e3dc88f483241d69f7d0679\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42867f7a1f6aceaf708ad650b7634bdafb3f850872c540d424916440014e7941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42867f7a1f6aceaf708ad650b7634bdafb3f850872c540d424916440014e7941\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f2d22bfc26958a3aad99de39f5c831d36c1ba4f563851baafc73735e80d5c91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f2d22bfc26958a3aad99de39f5c831d36c1ba4f563851baafc73735e80d5c91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://548d326fc5c14c16176d4fff3446d93c17322c11d2a34e9e42376b665de3848d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://548d326fc5c14c16176d4fff3446d93c17322c11d2a34e9e42376b665de3848d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n8c6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:56Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.313396 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qnmst" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1693fcf1-bef4-4f82-8dd8-f1797b03f5e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8fd6a2ab2b9557574951c0ebbccd663fe576262b0de2c3c655427c977f62d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hzdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:47Z\\\"}}\" for pod 
\"openshift-image-registry\"/\"node-ca-qnmst\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:56Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.324025 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wv8fh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d150a22e-c59a-4376-a5c8-db4085ea0be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58d106c5d1b9525ec821d009ca556449cd8d7f0e1b9c8ec7dd969df996e73625\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfw9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wv8fh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:56Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.328063 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.328082 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.328090 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:56 
crc kubenswrapper[4755]: I1210 15:23:56.328103 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.328112 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:56Z","lastTransitionTime":"2025-12-10T15:23:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:23:56 crc kubenswrapper[4755]: E1210 15:23:56.342430 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:23:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:23:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:23:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:23:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ba232303-88d5-4931-b82e-34d9a0e5c06a\\\",\\\"systemUUID\\\":\\\"ebd59de0-c6b0-47c1-bc17-6f665dcf344d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:56Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.345179 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fa03060-a6e0-4aad-9aa1-43b1a0d00c85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a823333ed5cb5de3988d25e50e4b7a0f9071c76fb39c22760a4f2acd5eb455d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d828f3f3b90a2dcb1c1908e6a686368af5b0d715b3251e4b8fcf3c8818ec75a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://172ab3fce08c8ba4095ff4095c89364a778644752bd7bb6c178d6e3ebcface69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfbb8e350ad18b78a6bcf6cfa4eb8f2fad90f97
0dba08a8b1b2026af6f255e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4b2bf0c88a16fd6b4b45a20730a92292895dc9f29ce756d347c302b25a8612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55da309288bc2c7ff1f22da0569d18155796700c200fe3ed508f6f8179756865\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55da309288bc2c7ff1f22da0569d18155796700c200fe3ed508f6f8179756865\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219c531368bb5379f3726d94ab0afc0f33eedbf91b0a4dbbe0757c6e232f79ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://219c531368bb5379f3726d94ab0afc0f33eedbf91b0a4dbbe0757c6e232f79ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f9da8f3d4f85ec83c16b71ecd983ff4f48049eb8e73461a2b46c4f66d71ed06e\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9da8f3d4f85ec83c16b71ecd983ff4f48049eb8e73461a2b46c4f66d71ed06e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:56Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.346540 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.346670 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.346734 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.346793 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.346870 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:56Z","lastTransitionTime":"2025-12-10T15:23:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.357862 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:56Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.364992 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c0a2f42c-a60e-4350-9ebb-c28d3cbfdad7-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-82wnw\" (UID: \"c0a2f42c-a60e-4350-9ebb-c28d3cbfdad7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-82wnw" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.365040 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xbzv\" (UniqueName: \"kubernetes.io/projected/c0a2f42c-a60e-4350-9ebb-c28d3cbfdad7-kube-api-access-2xbzv\") pod \"ovnkube-control-plane-749d76644c-82wnw\" (UID: \"c0a2f42c-a60e-4350-9ebb-c28d3cbfdad7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-82wnw" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.365100 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/c0a2f42c-a60e-4350-9ebb-c28d3cbfdad7-env-overrides\") pod \"ovnkube-control-plane-749d76644c-82wnw\" (UID: \"c0a2f42c-a60e-4350-9ebb-c28d3cbfdad7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-82wnw" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.365141 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c0a2f42c-a60e-4350-9ebb-c28d3cbfdad7-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-82wnw\" (UID: \"c0a2f42c-a60e-4350-9ebb-c28d3cbfdad7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-82wnw" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.365885 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c0a2f42c-a60e-4350-9ebb-c28d3cbfdad7-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-82wnw\" (UID: \"c0a2f42c-a60e-4350-9ebb-c28d3cbfdad7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-82wnw" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.367182 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c0a2f42c-a60e-4350-9ebb-c28d3cbfdad7-env-overrides\") pod \"ovnkube-control-plane-749d76644c-82wnw\" (UID: \"c0a2f42c-a60e-4350-9ebb-c28d3cbfdad7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-82wnw" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.370361 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c0a2f42c-a60e-4350-9ebb-c28d3cbfdad7-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-82wnw\" (UID: \"c0a2f42c-a60e-4350-9ebb-c28d3cbfdad7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-82wnw" Dec 10 15:23:56 crc kubenswrapper[4755]: E1210 15:23:56.370810 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:23:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:23:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:23:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:23:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056
b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951
},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ba232303-88d5-4931-b82e-34d9a0e5c06a\\\",\\\"systemUUID\\\":\\\"ebd59de0-c6b0-47c1-bc17-6f665dcf344d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-12-10T15:23:56Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.377969 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.378018 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.378038 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.378057 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.378071 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:56Z","lastTransitionTime":"2025-12-10T15:23:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.379307 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c371b199c3643a68ea5eba935eb76a1b7e8a4027c9292a1324116ccfc14742ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:56Z is 
after 2025-08-24T17:21:41Z" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.391090 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xbzv\" (UniqueName: \"kubernetes.io/projected/c0a2f42c-a60e-4350-9ebb-c28d3cbfdad7-kube-api-access-2xbzv\") pod \"ovnkube-control-plane-749d76644c-82wnw\" (UID: \"c0a2f42c-a60e-4350-9ebb-c28d3cbfdad7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-82wnw" Dec 10 15:23:56 crc kubenswrapper[4755]: E1210 15:23:56.391424 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:23:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:23:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:23:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:23:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ba232303-88d5-4931-b82e-34d9a0e5c06a\\\",\\\"systemUUID\\\":\\\"ebd59de0-c6b0-47c1-bc17-6f665dcf344d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:56Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.415280 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.415339 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.415350 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.415370 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.415381 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:56Z","lastTransitionTime":"2025-12-10T15:23:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.416321 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b1da51a-99c9-4f8e-920d-ce0973af6370\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6eb065dc6c0cc8914cb95553eb2683d894fb9a4e78ce7fac73bcce8d7f6cced9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e547993b9f2fa37bf924f909c47b62eb0cc02b659596b1cad9bbc42fdde8f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ba47683cc23d5b531a45f0658b6a9378650400b35b5372642b0430a5ac503f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a75407e83508af9adebb09c6466a966dd791d29f690c539656f9bd3396d7031\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://602b4e49987fa2cc6b54b822110aececbdddaf2bce8f27cce4ed906768d45791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://335bcab3a79f09796e97560365e1211fb30ddf288f4773c05ab353197add4365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d89ebb4c7085b2d1d38908446db4cfa84ccbb4e5
cf150249d1b2b508631e72d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d89ebb4c7085b2d1d38908446db4cfa84ccbb4e5cf150249d1b2b508631e72d6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T15:23:53Z\\\",\\\"message\\\":\\\"sip/v1/apis/informers/externalversions/factory.go:140\\\\nI1210 15:23:53.306420 6025 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1210 15:23:53.306456 6025 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1210 15:23:53.306496 6025 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1210 15:23:53.306514 6025 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1210 15:23:53.306525 6025 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1210 15:23:53.306537 6025 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1210 15:23:53.306542 6025 handler.go:208] Removed *v1.Node event handler 7\\\\nI1210 15:23:53.306551 6025 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1210 15:23:53.306555 6025 handler.go:208] Removed *v1.Node event handler 2\\\\nI1210 15:23:53.306556 6025 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1210 15:23:53.306565 6025 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1210 15:23:53.306573 6025 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1210 15:23:53.306573 6025 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1210 15:23:53.306573 6025 factory.go:656] Stopping watch factory\\\\nI1210 15:23:53.306591 6025 handler.go:208] Removed *v1.EgressFirewall 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59bf59d1b7fbc365a916fbbceca7ae30b7ebc754b34f2f7a34c2e21e1e1d2166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://375efb27cf6bef06a6a0ffc8f6b2963e1b9ffbca18b3c8e7b402bc552a411f6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://375efb27cf6bef06a6a0ffc8f6b2963e1b9ffbca18b3c8e7b402bc552a411f6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6lfvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:56Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:56 crc kubenswrapper[4755]: E1210 15:23:56.426583 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:23:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:23:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:23:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:23:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ba232303-88d5-4931-b82e-34d9a0e5c06a\\\",\\\"systemUUID\\\":\\\"ebd59de0-c6b0-47c1-bc17-6f665dcf344d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:56Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.429041 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e6db97-7e22-4fec-924e-20e90f463887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef3871ddc16d2fc1b89a6f120390cf066424bd9ed4cca98f8dea4430c40c6c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08895c3ab20248c4bd70ccfae6442bdf8e76a602e605b043f0f00166624ada17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b2467645116856afa1e1e37501c7b453b03da193a96d1075886b85839a86ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16e58653dca08c9b33671279553975bca9507ffeeefea161119cec624813f167\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea18ef6c319336321d97c2a55449903b837b5a47da785da3ff5308244751943\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:56Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.430291 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.430366 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.430381 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.430410 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 
15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.430424 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:56Z","lastTransitionTime":"2025-12-10T15:23:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.441196 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:56Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:56 crc kubenswrapper[4755]: E1210 15:23:56.443239 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:23:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:23:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:23:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:23:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ba232303-88d5-4931-b82e-34d9a0e5c06a\\\",\\\"systemUUID\\\":\\\"ebd59de0-c6b0-47c1-bc17-6f665dcf344d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:56Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:56 crc kubenswrapper[4755]: E1210 15:23:56.443378 4755 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.444636 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.444667 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.444676 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.444692 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.444701 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:56Z","lastTransitionTime":"2025-12-10T15:23:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.500498 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-82wnw" Dec 10 15:23:56 crc kubenswrapper[4755]: W1210 15:23:56.518801 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc0a2f42c_a60e_4350_9ebb_c28d3cbfdad7.slice/crio-317975d28c9d94d21c0aa6b1e79eeaa2e4b7b9db0a71065510854586ad9b373e WatchSource:0}: Error finding container 317975d28c9d94d21c0aa6b1e79eeaa2e4b7b9db0a71065510854586ad9b373e: Status 404 returned error can't find the container with id 317975d28c9d94d21c0aa6b1e79eeaa2e4b7b9db0a71065510854586ad9b373e Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.546803 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.546870 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.546885 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.546916 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.546937 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:56Z","lastTransitionTime":"2025-12-10T15:23:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.649413 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.649488 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.649504 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.649525 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.649541 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:56Z","lastTransitionTime":"2025-12-10T15:23:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.752817 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.753165 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.753196 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.753212 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.753224 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:56Z","lastTransitionTime":"2025-12-10T15:23:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.856640 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.856707 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.856723 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.856748 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.856767 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:56Z","lastTransitionTime":"2025-12-10T15:23:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.960754 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.960813 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.960829 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.960853 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:56 crc kubenswrapper[4755]: I1210 15:23:56.960870 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:56Z","lastTransitionTime":"2025-12-10T15:23:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:23:57 crc kubenswrapper[4755]: I1210 15:23:57.048898 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-82wnw" event={"ID":"c0a2f42c-a60e-4350-9ebb-c28d3cbfdad7","Type":"ContainerStarted","Data":"317975d28c9d94d21c0aa6b1e79eeaa2e4b7b9db0a71065510854586ad9b373e"} Dec 10 15:23:57 crc kubenswrapper[4755]: I1210 15:23:57.063130 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:57 crc kubenswrapper[4755]: I1210 15:23:57.063175 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:57 crc kubenswrapper[4755]: I1210 15:23:57.063187 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:57 crc kubenswrapper[4755]: I1210 15:23:57.063206 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:57 crc kubenswrapper[4755]: I1210 15:23:57.063219 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:57Z","lastTransitionTime":"2025-12-10T15:23:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:23:57 crc kubenswrapper[4755]: I1210 15:23:57.165990 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:57 crc kubenswrapper[4755]: I1210 15:23:57.166104 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:57 crc kubenswrapper[4755]: I1210 15:23:57.166125 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:57 crc kubenswrapper[4755]: I1210 15:23:57.166149 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:57 crc kubenswrapper[4755]: I1210 15:23:57.166165 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:57Z","lastTransitionTime":"2025-12-10T15:23:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:23:57 crc kubenswrapper[4755]: I1210 15:23:57.270024 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:57 crc kubenswrapper[4755]: I1210 15:23:57.270131 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:57 crc kubenswrapper[4755]: I1210 15:23:57.270176 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:57 crc kubenswrapper[4755]: I1210 15:23:57.270208 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:57 crc kubenswrapper[4755]: I1210 15:23:57.270229 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:57Z","lastTransitionTime":"2025-12-10T15:23:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:23:57 crc kubenswrapper[4755]: I1210 15:23:57.372972 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:57 crc kubenswrapper[4755]: I1210 15:23:57.373012 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:57 crc kubenswrapper[4755]: I1210 15:23:57.373040 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:57 crc kubenswrapper[4755]: I1210 15:23:57.373056 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:57 crc kubenswrapper[4755]: I1210 15:23:57.373068 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:57Z","lastTransitionTime":"2025-12-10T15:23:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:23:57 crc kubenswrapper[4755]: I1210 15:23:57.475291 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:57 crc kubenswrapper[4755]: I1210 15:23:57.475348 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:57 crc kubenswrapper[4755]: I1210 15:23:57.475360 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:57 crc kubenswrapper[4755]: I1210 15:23:57.475379 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:57 crc kubenswrapper[4755]: I1210 15:23:57.475396 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:57Z","lastTransitionTime":"2025-12-10T15:23:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:23:57 crc kubenswrapper[4755]: I1210 15:23:57.578091 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:57 crc kubenswrapper[4755]: I1210 15:23:57.578137 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:57 crc kubenswrapper[4755]: I1210 15:23:57.578150 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:57 crc kubenswrapper[4755]: I1210 15:23:57.578168 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:57 crc kubenswrapper[4755]: I1210 15:23:57.578179 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:57Z","lastTransitionTime":"2025-12-10T15:23:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:23:57 crc kubenswrapper[4755]: I1210 15:23:57.680188 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:57 crc kubenswrapper[4755]: I1210 15:23:57.680222 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:57 crc kubenswrapper[4755]: I1210 15:23:57.680230 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:57 crc kubenswrapper[4755]: I1210 15:23:57.680243 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:57 crc kubenswrapper[4755]: I1210 15:23:57.680252 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:57Z","lastTransitionTime":"2025-12-10T15:23:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:23:57 crc kubenswrapper[4755]: I1210 15:23:57.756900 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 15:23:57 crc kubenswrapper[4755]: I1210 15:23:57.756932 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 15:23:57 crc kubenswrapper[4755]: E1210 15:23:57.757139 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 15:23:57 crc kubenswrapper[4755]: I1210 15:23:57.757223 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 15:23:57 crc kubenswrapper[4755]: E1210 15:23:57.757584 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 15:23:57 crc kubenswrapper[4755]: E1210 15:23:57.757665 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 15:23:57 crc kubenswrapper[4755]: I1210 15:23:57.782970 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:57 crc kubenswrapper[4755]: I1210 15:23:57.783017 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:57 crc kubenswrapper[4755]: I1210 15:23:57.783032 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:57 crc kubenswrapper[4755]: I1210 15:23:57.783052 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:57 crc kubenswrapper[4755]: I1210 15:23:57.783067 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:57Z","lastTransitionTime":"2025-12-10T15:23:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:23:57 crc kubenswrapper[4755]: I1210 15:23:57.885826 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:57 crc kubenswrapper[4755]: I1210 15:23:57.885895 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:57 crc kubenswrapper[4755]: I1210 15:23:57.885913 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:57 crc kubenswrapper[4755]: I1210 15:23:57.885936 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:57 crc kubenswrapper[4755]: I1210 15:23:57.885952 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:57Z","lastTransitionTime":"2025-12-10T15:23:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:23:57 crc kubenswrapper[4755]: I1210 15:23:57.989011 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:57 crc kubenswrapper[4755]: I1210 15:23:57.989069 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:57 crc kubenswrapper[4755]: I1210 15:23:57.989085 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:57 crc kubenswrapper[4755]: I1210 15:23:57.989109 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:57 crc kubenswrapper[4755]: I1210 15:23:57.989125 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:57Z","lastTransitionTime":"2025-12-10T15:23:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.031973 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-q5ctz"] Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.033012 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5ctz" Dec 10 15:23:58 crc kubenswrapper[4755]: E1210 15:23:58.033118 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-q5ctz" podUID="17673130-8212-4f8f-8859-92774f0ee202" Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.051754 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q5ctz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17673130-8212-4f8f-8859-92774f0ee202\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcscn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcscn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q5ctz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:58Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.059993 4755 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6lfvk_4b1da51a-99c9-4f8e-920d-ce0973af6370/ovnkube-controller/0.log" Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.064940 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" event={"ID":"4b1da51a-99c9-4f8e-920d-ce0973af6370","Type":"ContainerStarted","Data":"194ac31f80a7b5a0fde0c62cb794234cb84e5b12e4f3db214ec3d16b4250797f"} Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.065559 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.067017 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-82wnw" event={"ID":"c0a2f42c-a60e-4350-9ebb-c28d3cbfdad7","Type":"ContainerStarted","Data":"7af643111a7a5d1d78b0412b1621b5ffac6389760ea9190e26e9e5d1704eed4f"} Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.092776 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fa03060-a6e0-4aad-9aa1-43b1a0d00c85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a823333ed5cb5de3988d25e50e4b7a0f9071c76fb39c22760a4f2acd5eb455d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d828f3f3b90a2dcb1c1908e6a686368af5b0d715b3251e4b8fcf3c8818ec75a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-
12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://172ab3fce08c8ba4095ff4095c89364a778644752bd7bb6c178d6e3ebcface69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfbb8e350ad18b78a6bcf6cfa4eb8f2fad90f970dba08a8b1b2026af6f255e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4b2bf0c88a16fd6b4b45a20730a92292895dc9f29ce756d347c302b25a8612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55da309288bc2c7ff1f22da0569d18155796700c200fe3ed508f6f8179756865\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55da309288bc2c7ff1f22da0569d18155796700c200fe3ed508f6f8179756
865\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219c531368bb5379f3726d94ab0afc0f33eedbf91b0a4dbbe0757c6e232f79ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://219c531368bb5379f3726d94ab0afc0f33eedbf91b0a4dbbe0757c6e232f79ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f9da8f3d4f85ec83c16b71ecd983ff4f48049eb8e73461a2b46c4f66d71ed06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9da8f3d4f85ec83c16b71ecd983ff4f48049eb8e73461a2b46c4f66d71ed06e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:58Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.093125 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.093164 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.093179 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.093242 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.093260 4755 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:58Z","lastTransitionTime":"2025-12-10T15:23:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.111018 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:58Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.120444 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c371b199c3643a68ea5eba935eb76a1b7e8a4027c9292a1324116ccfc14742ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:58Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.131188 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wv8fh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d150a22e-c59a-4376-a5c8-db4085ea0be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58d106c5d1b9525ec821d009ca556449cd8d7f0e1b9c8ec7dd969df996e73625\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfw9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wv8fh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:58Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.144874 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e6db97-7e22-4fec-924e-20e90f463887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef3871ddc16d2fc1b89a6f120390cf066424bd9ed4cca98f8dea4430c40c6c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08895c3ab20248c4bd70ccfae6442bdf8e76a602e605b043f0f00166624ada17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b2467645116856afa1e1e37501c7b453b03da193a96d1075886b85839a86ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16e58653dca08c9b33671279553975bca9507ffeeefea161119cec624813f167\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea18ef6c319336321d97c2a55449903b837b5a47da785da3ff5308244751943\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:58Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.158246 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:58Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.178755 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b1da51a-99c9-4f8e-920d-ce0973af6370\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6eb065dc6c0cc8914cb95553eb2683d894fb9a4e78ce7fac73bcce8d7f6cced9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e547993b9f2fa37bf924f909c47b62eb0cc02b659596b1cad9bbc42fdde8f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ba47683cc23d5b531a45f0658b6a9378650400b35b5372642b0430a5ac503f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a75407e83508af9adebb09c6466a966dd791d29f690c539656f9bd3396d7031\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://602b4e49987fa2cc6b54b822110aececbdddaf2bce8f27cce4ed906768d45791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://335bcab3a79f09796e97560365e1211fb30ddf288f4773c05ab353197add4365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d89ebb4c7085b2d1d38908446db4cfa84ccbb4e5
cf150249d1b2b508631e72d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d89ebb4c7085b2d1d38908446db4cfa84ccbb4e5cf150249d1b2b508631e72d6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T15:23:53Z\\\",\\\"message\\\":\\\"sip/v1/apis/informers/externalversions/factory.go:140\\\\nI1210 15:23:53.306420 6025 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1210 15:23:53.306456 6025 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1210 15:23:53.306496 6025 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1210 15:23:53.306514 6025 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1210 15:23:53.306525 6025 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1210 15:23:53.306537 6025 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1210 15:23:53.306542 6025 handler.go:208] Removed *v1.Node event handler 7\\\\nI1210 15:23:53.306551 6025 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1210 15:23:53.306555 6025 handler.go:208] Removed *v1.Node event handler 2\\\\nI1210 15:23:53.306556 6025 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1210 15:23:53.306565 6025 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1210 15:23:53.306573 6025 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1210 15:23:53.306573 6025 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1210 15:23:53.306573 6025 factory.go:656] Stopping watch factory\\\\nI1210 15:23:53.306591 6025 handler.go:208] Removed *v1.EgressFirewall 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59bf59d1b7fbc365a916fbbceca7ae30b7ebc754b34f2f7a34c2e21e1e1d2166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://375efb27cf6bef06a6a0ffc8f6b2963e1b9ffbca18b3c8e7b402bc552a411f6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://375efb27cf6bef06a6a0ffc8f6b2963e1b9ffbca18b3c8e7b402bc552a411f6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6lfvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:58Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.183141 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/17673130-8212-4f8f-8859-92774f0ee202-metrics-certs\") pod \"network-metrics-daemon-q5ctz\" (UID: \"17673130-8212-4f8f-8859-92774f0ee202\") " pod="openshift-multus/network-metrics-daemon-q5ctz" Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.183189 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcscn\" (UniqueName: \"kubernetes.io/projected/17673130-8212-4f8f-8859-92774f0ee202-kube-api-access-rcscn\") pod \"network-metrics-daemon-q5ctz\" (UID: \"17673130-8212-4f8f-8859-92774f0ee202\") " pod="openshift-multus/network-metrics-daemon-q5ctz" Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.192325 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zl2tx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"796da6d5-6ccd-4786-a03e-9a8e47a55031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de63a123c46563bd8cd07e669d192bc8b019a889b9bdb7af1b988872c8f1fc48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzg4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zl2tx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:58Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.200158 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.200180 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.200187 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.200200 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.200208 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:58Z","lastTransitionTime":"2025-12-10T15:23:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.204580 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b132a8b9-1c99-414d-8773-229bf36b305d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5a5a59e9f156fb791ec822c2d5efe3fc6ec0e84bfcb2b6f5da81396951984c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2mdm\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40c4bdaa23a60a665b8f565720d79b68cac62d40246be94fc6cd314b1bb3656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2mdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ggt8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:58Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.221007 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d089a8bddfa1f80d29011c7a6bab0a300f7dd44bdb2864f86951ebbb9ebea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:58Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.235616 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01ddc4056319db1f69268dcae192c3cb9db6c25284305803ae7588e59f77c346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae6bf174cdc9bde18d7c959e976454e73c1e67642f0158d365b79582f63f3cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:58Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.248313 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:58Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.260358 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aed6bb9-78c2-410b-9c58-b60ab22a7bd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70d014e7227746c46b30f8f5a1f307a422d2fa0d4b98d98bfe5ba6217223489e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://104d730ea666ffd2d639ba7fc8d1ac5b444586788a62656fd260cb249f82b1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09dbec85547c7170bb9551e5657876814d48528e3047daf3547711a563d2b6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8bf1de0d8bdc0a20bc42ba5097d849ae0f507e5fbc18fb17b4ff3650e46ff0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:58Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.274477 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n8c6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b56ae78-835a-45da-bc46-5adff2bdf9fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8957db8ccef7b3c449920471d345aa81ce9a7ab9be36b2350ec428aebec7ac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b9a1485f6d0c0e53fd3505623b2788476bc00abea2f11650386
eac40e449bcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9a1485f6d0c0e53fd3505623b2788476bc00abea2f11650386eac40e449bcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a532015d196c588ad3dd6c43d9fa3772ba2a42bd1b2231f830b60adb0addab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a532015d196c588ad3dd6c43d9fa3772ba2a42bd1b2231f830b60adb0addab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://417e326f541e04df27d540c14ec572bb8f1d749a9e3dc88f483241d69f7d0679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://417e326f541e04df27d540c14ec572bb8f1d749a9e3dc88f483241d69f7d0679\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42867f7a1f6aceaf708ad650b7634bdafb3f850872c540d424916440014e7941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42867f7a1f6aceaf708ad650b7634bdafb3f850872c540d424916440014e7941\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f2d22bfc26958a3aad99de39f5c831d36c1ba4f563851baafc73735e80d5c91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f2d22bfc26958a3aad99de39f5c831d36c1ba4f563851baafc73735e80d5c91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://548d326fc5c14c16176d4fff3446d93c17322c11d2a34e9e42376b665de3848d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://548d326fc5c14c16176d4fff3446d93c17322c11d2a34e9e42376b665de3848d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n8c6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:58Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.284631 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/17673130-8212-4f8f-8859-92774f0ee202-metrics-certs\") pod \"network-metrics-daemon-q5ctz\" (UID: \"17673130-8212-4f8f-8859-92774f0ee202\") " pod="openshift-multus/network-metrics-daemon-q5ctz" Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.284670 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcscn\" (UniqueName: \"kubernetes.io/projected/17673130-8212-4f8f-8859-92774f0ee202-kube-api-access-rcscn\") pod \"network-metrics-daemon-q5ctz\" (UID: \"17673130-8212-4f8f-8859-92774f0ee202\") " pod="openshift-multus/network-metrics-daemon-q5ctz" Dec 10 15:23:58 crc kubenswrapper[4755]: E1210 15:23:58.284762 4755 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 10 15:23:58 crc kubenswrapper[4755]: E1210 15:23:58.284822 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/17673130-8212-4f8f-8859-92774f0ee202-metrics-certs podName:17673130-8212-4f8f-8859-92774f0ee202 nodeName:}" failed. No retries permitted until 2025-12-10 15:23:58.784805952 +0000 UTC m=+35.385689584 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/17673130-8212-4f8f-8859-92774f0ee202-metrics-certs") pod "network-metrics-daemon-q5ctz" (UID: "17673130-8212-4f8f-8859-92774f0ee202") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.289927 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qnmst" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1693fcf1-bef4-4f82-8dd8-f1797b03f5e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8fd6a2ab2b9557574951c0ebbccd663fe576262b0de2c3c655427c977f62d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hzdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qnmst\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:58Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.302756 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.302787 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.302796 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 
15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.302812 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.302824 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:58Z","lastTransitionTime":"2025-12-10T15:23:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.305458 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcscn\" (UniqueName: \"kubernetes.io/projected/17673130-8212-4f8f-8859-92774f0ee202-kube-api-access-rcscn\") pod \"network-metrics-daemon-q5ctz\" (UID: \"17673130-8212-4f8f-8859-92774f0ee202\") " pod="openshift-multus/network-metrics-daemon-q5ctz" Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.310231 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-82wnw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0a2f42c-a60e-4350-9ebb-c28d3cbfdad7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xbzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xbzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-82wnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:58Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.336346 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e6db97-7e22-4fec-924e-20e90f463887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef3871ddc16d2fc1b89a6f120390cf066424bd9ed4cca98f8dea4430c40c6c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08895c3ab20248c4bd70ccfae6442bdf8e76a602e605b043f0f00166624ada17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b2467645116856afa1e1e37501c7b453b03da193a96d1075886b85839a86ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16e58653dca08c9b33671279553975bca9507ffeeefea161119cec624813f167\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea18ef6c319336321d97c2a55449903b837b5a47da785da3ff5308244751943\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:58Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.357533 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:58Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.380913 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b1da51a-99c9-4f8e-920d-ce0973af6370\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6eb065dc6c0cc8914cb95553eb2683d894fb9a4e78ce7fac73bcce8d7f6cced9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e547993b9f2fa37bf924f909c47b62eb0cc02b659596b1cad9bbc42fdde8f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ba47683cc23d5b531a45f0658b6a9378650400b35b5372642b0430a5ac503f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a75407e83508af9adebb09c6466a966dd791d29f690c539656f9bd3396d7031\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://602b4e49987fa2cc6b54b822110aececbdddaf2bce8f27cce4ed906768d45791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://335bcab3a79f09796e97560365e1211fb30ddf288f4773c05ab353197add4365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://194ac31f80a7b5a0fde0c62cb794234cb84e5b12
e4f3db214ec3d16b4250797f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d89ebb4c7085b2d1d38908446db4cfa84ccbb4e5cf150249d1b2b508631e72d6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T15:23:53Z\\\",\\\"message\\\":\\\"sip/v1/apis/informers/externalversions/factory.go:140\\\\nI1210 15:23:53.306420 6025 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1210 15:23:53.306456 6025 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1210 15:23:53.306496 6025 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1210 15:23:53.306514 6025 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1210 15:23:53.306525 6025 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1210 15:23:53.306537 6025 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1210 15:23:53.306542 6025 handler.go:208] Removed *v1.Node event handler 7\\\\nI1210 15:23:53.306551 6025 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1210 15:23:53.306555 6025 handler.go:208] Removed *v1.Node event handler 2\\\\nI1210 15:23:53.306556 6025 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1210 15:23:53.306565 6025 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1210 15:23:53.306573 6025 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1210 15:23:53.306573 6025 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1210 15:23:53.306573 6025 factory.go:656] Stopping watch factory\\\\nI1210 15:23:53.306591 6025 handler.go:208] Removed *v1.EgressFirewall 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59bf59d1b7fbc365a916fbbceca7ae30b7ebc754b34f2f7a34c2e21e1e1d2166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://375efb27cf6bef06a6a0ffc8f6b2963e1b9ffbca18b3c8e7b402bc552a411f6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://375efb27cf6bef06a6a0ffc8f6b2963e1b9ffbca18b3c8e7b402bc552a411f6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6lfvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:58Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.398589 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d089a8bddfa1f80d29011c7a6bab0a300f7dd44bdb2864f86951ebbb9ebea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for 
pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:58Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.404483 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.404522 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.404535 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.404573 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.404586 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:58Z","lastTransitionTime":"2025-12-10T15:23:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.414807 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01ddc4056319db1f69268dcae192c3cb9db6c25284305803ae7588e59f77c346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae6bf174cdc9bde18d7c959e976454e73c1e67642f0158d365b79582f63f3cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:58Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.429162 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:58Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.443184 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zl2tx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"796da6d5-6ccd-4786-a03e-9a8e47a55031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de63a123c46563bd8cd07e669d192bc8b019a889b9bdb7af1b988872c8f1fc48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzg4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zl2tx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:58Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.457100 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b132a8b9-1c99-414d-8773-229bf36b305d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5a5a59e9f156fb791ec822c2d5efe3fc6ec0e84bfcb2b6f5da81396951984c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2mdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40c4bdaa23a60a665b8f565720d79b68cac62d40246be94fc6cd314b1bb3656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2mdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-ggt8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:58Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.471982 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aed6bb9-78c2-410b-9c58-b60ab22a7bd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70d014e7227746c46b30f8f5a1f307a422d2fa0d4b98d98bfe5ba6217223489e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://104d730ea666ffd2d639ba7fc8d1ac5b444586788a62656fd260cb249f82b1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09dbec85547c7170bb9551e5657876814d48528e3047daf3547711a563d2b6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8bf1de0d8bdc0a20bc42ba5097d849ae0f507e5fbc18fb17b4ff3650e46ff0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:58Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.486781 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n8c6s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b56ae78-835a-45da-bc46-5adff2bdf9fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8957db8ccef7b3c449920471d345aa81ce9a7ab9be36b2350ec428aebec7ac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b9a1485f6d0c0e53fd3505623b2788476bc00abea2f11650386eac40e449bcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9a1485f6d0c0e53fd3505623b2788476bc00abea2f11650386eac40e449bcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a532015d196c588ad3dd6c43d9fa3772ba2a42bd1b2231f830b60adb0addab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a532015d196c588ad3dd6c43d9fa3772ba2a42bd1b2231f830b60adb0addab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://417e326f541e04df27d540c14ec572bb8f1d749a9e3dc88f483241d69f7d0679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://417e326f541e04df27d540c14ec572bb8f1d749a9e3dc88f483241d69f7d0679\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42867f7a1f6aceaf708ad650b7634bdafb3f850872c540d424916440014e7941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42867f7a1f6aceaf708ad650b7634bdafb3f850872c540d424916440014e7941\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f2d22bfc26958a3aad99de39f5c831d36c1ba4f563851baafc73735e80d5c91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f2d22bfc26958a3aad99de39f5c831d36c1ba4f563851baafc73735e80d5c91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://548d326fc5c14c16176d4fff3446d93c17322c11d2a34e9e42376b665de3848d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://548d326fc5c14c16176d4fff3446d93c17322c11d2a34e9e42376b665de3848d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n8c6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:58Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.503344 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qnmst" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1693fcf1-bef4-4f82-8dd8-f1797b03f5e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8fd6a2ab2b9557574951c0ebbccd663fe576262b0de2c3c655427c977f62d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hzdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qnmst\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:58Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.506762 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.506807 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.506820 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.506839 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.506851 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:58Z","lastTransitionTime":"2025-12-10T15:23:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.516869 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-82wnw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0a2f42c-a60e-4350-9ebb-c28d3cbfdad7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xbzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xbzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-82wnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:58Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.544571 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fa03060-a6e0-4aad-9aa1-43b1a0d00c85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a823333ed5cb5de3988d25e50e4b7a0f9071c76fb39c22760a4f2acd5eb455d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d828f3f3b90a2dcb1c1908e6a686368af5b0d715b3251e4b8fcf3c8818ec75a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://172ab3fce08c8ba4095ff4095c89364a778644752bd7bb6c178d6e3ebcface69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfbb8e350ad18b78a6bcf6cfa4eb8f2fad90f970dba08a8b1b2026af6f255e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4b2bf0c88a16fd6b4b45a20730a92292895dc9f29ce756d347c302b25a8612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55da309288bc2c7ff1f22da0569d18155796700c200fe3ed508f6f8179756865\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55da309288bc2c7ff1f22da0569d18155796700c200fe3ed508f6f8179756865\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219c531368bb5379f3726d94ab0afc0f33eedbf91b0a4dbbe0757c6e232f79ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://219c531368bb5379f3726d94ab0afc0f33eedbf91b0a4dbbe0757c6e232f79ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f9da8f3d4f85ec83c16b71ecd983ff4f48049eb8e73461a2b46c4f66d71ed06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9da8f3d4f85ec83c16b71ecd983ff4f48049eb8e73461a2b46c4f66d71ed06e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:58Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.568364 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:58Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.581948 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c371b199c3643a68ea5eba935eb76a1b7e8a4027c9292a1324116ccfc14742ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:58Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.593595 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wv8fh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d150a22e-c59a-4376-a5c8-db4085ea0be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58d106c5d1b9525ec821d009ca556449cd8d7f0e1b9c8ec7dd969df996e73625\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfw9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wv8fh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:58Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.604876 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q5ctz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17673130-8212-4f8f-8859-92774f0ee202\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcscn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcscn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q5ctz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:58Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.609199 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.609245 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.609258 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.609273 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.609283 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:58Z","lastTransitionTime":"2025-12-10T15:23:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.711695 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.711738 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.711747 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.711762 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.711772 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:58Z","lastTransitionTime":"2025-12-10T15:23:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.789299 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/17673130-8212-4f8f-8859-92774f0ee202-metrics-certs\") pod \"network-metrics-daemon-q5ctz\" (UID: \"17673130-8212-4f8f-8859-92774f0ee202\") " pod="openshift-multus/network-metrics-daemon-q5ctz" Dec 10 15:23:58 crc kubenswrapper[4755]: E1210 15:23:58.789368 4755 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 10 15:23:58 crc kubenswrapper[4755]: E1210 15:23:58.789506 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/17673130-8212-4f8f-8859-92774f0ee202-metrics-certs podName:17673130-8212-4f8f-8859-92774f0ee202 nodeName:}" failed. No retries permitted until 2025-12-10 15:23:59.789476115 +0000 UTC m=+36.390359747 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/17673130-8212-4f8f-8859-92774f0ee202-metrics-certs") pod "network-metrics-daemon-q5ctz" (UID: "17673130-8212-4f8f-8859-92774f0ee202") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.814128 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.814167 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.814176 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.814191 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.814200 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:58Z","lastTransitionTime":"2025-12-10T15:23:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.916523 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.916603 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.916616 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.916635 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:58 crc kubenswrapper[4755]: I1210 15:23:58.916648 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:58Z","lastTransitionTime":"2025-12-10T15:23:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:23:59 crc kubenswrapper[4755]: I1210 15:23:59.019772 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:59 crc kubenswrapper[4755]: I1210 15:23:59.019811 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:59 crc kubenswrapper[4755]: I1210 15:23:59.019820 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:59 crc kubenswrapper[4755]: I1210 15:23:59.019835 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:59 crc kubenswrapper[4755]: I1210 15:23:59.019844 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:59Z","lastTransitionTime":"2025-12-10T15:23:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:23:59 crc kubenswrapper[4755]: I1210 15:23:59.073487 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-82wnw" event={"ID":"c0a2f42c-a60e-4350-9ebb-c28d3cbfdad7","Type":"ContainerStarted","Data":"915c918562f8a69a28cbcf29e427bfdd94477193b98d5a2da9ad45cb4b44fa03"} Dec 10 15:23:59 crc kubenswrapper[4755]: I1210 15:23:59.096183 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n8c6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b56ae78-835a-45da-bc46-5adff2bdf9fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8957db8ccef7b3c449920471d345aa81ce9a7ab9be36b2350ec428aebec7ac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\
"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b9a1485f6d0c0e53fd3505623b2788476bc00abea2f11650386eac40e449bcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9a1485f6d0c0e53fd3505623b2788476bc00abea2f11650386eac40e449bcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a532015d196c588ad3dd6c43d9fa3772ba2a42bd1b2231f830b60adb0addab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a532015d196c588ad3dd6c43d9fa3772ba2a42bd1b2231f830b60adb0addab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://417e326f541e04df27d540c14ec572bb8f1d749a9e3dc88f483241d69f7d0679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://417e326f541e04df27d540c14ec572bb8f1d749a9e3dc88f483241d69f7d0679\\\",\\\"exitCode\\\":0,\\\"fini
shedAt\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42867f7a1f6aceaf708ad650b7634bdafb3f850872c540d424916440014e7941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42867f7a1f6aceaf708ad650b7634bdafb3f850872c540d424916440014e7941\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f2d22bfc26958a3aad99de39f5c831d36c1ba4f563851baafc73735e80d5c91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f2d22bfc26958a3aad99de39f5c831d36c1ba4f563851baafc73735e80d5c91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://548d326fc5c14c16176d4fff3446d93c17322c11d2a34e9e42376b665de3848d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616
e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://548d326fc5c14c16176d4fff3446d93c17322c11d2a34e9e42376b665de3848d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n8c6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:59Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:59 crc kubenswrapper[4755]: I1210 15:23:59.109131 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qnmst" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1693fcf1-bef4-4f82-8dd8-f1797b03f5e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8fd6a2ab2b9557574951c0ebbccd663fe576262b0de2c3c655427c977f62d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hzdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\
"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qnmst\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:59Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:59 crc kubenswrapper[4755]: I1210 15:23:59.122005 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-82wnw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0a2f42c-a60e-4350-9ebb-c28d3cbfdad7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7af643111a7a5d1d78b0412b1621b5ffac6389760ea9190e26e9e5d1704eed4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xbzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://915c918562f8a69a28cbcf29e427bfdd94477193b98d5a2da9ad45cb4b44fa03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccou
nt\\\",\\\"name\\\":\\\"kube-api-access-2xbzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-82wnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:59Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:59 crc kubenswrapper[4755]: I1210 15:23:59.122312 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:59 crc kubenswrapper[4755]: I1210 15:23:59.122361 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:59 crc kubenswrapper[4755]: I1210 15:23:59.122372 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:59 crc kubenswrapper[4755]: I1210 15:23:59.122394 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:59 crc kubenswrapper[4755]: I1210 15:23:59.122407 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:59Z","lastTransitionTime":"2025-12-10T15:23:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:23:59 crc kubenswrapper[4755]: I1210 15:23:59.136867 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aed6bb9-78c2-410b-9c58-b60ab22a7bd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70d014e7227746c46b30f8f5a1f307a422d2fa0d4b98d98bfe5ba6217223489e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://104d730ea666ffd2d639ba7fc8d1ac5b444586788a62656fd260cb249f82b1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09dbec85547c7170bb9551e5657876814d48528e3047daf3547711a563d2b6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8bf1de0d8bdc0a20bc42ba5097d849ae0f507e5fbc18fb17b4ff3650e46ff0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:59Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:59 crc kubenswrapper[4755]: I1210 15:23:59.152038 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:59Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:59 crc kubenswrapper[4755]: I1210 15:23:59.166783 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c371b199c3643a68ea5eba935eb76a1b7e8a4027c9292a1324116ccfc14742ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:59Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:59 crc kubenswrapper[4755]: I1210 15:23:59.179535 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wv8fh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d150a22e-c59a-4376-a5c8-db4085ea0be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58d106c5d1b9525ec821d009ca556449cd8d7f0e1b9c8ec7dd969df996e73625\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfw9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wv8fh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:59Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:59 crc kubenswrapper[4755]: I1210 15:23:59.205105 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q5ctz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17673130-8212-4f8f-8859-92774f0ee202\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcscn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcscn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q5ctz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:59Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:59 crc kubenswrapper[4755]: I1210 15:23:59.224748 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:59 crc kubenswrapper[4755]: I1210 15:23:59.224788 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:59 crc kubenswrapper[4755]: I1210 15:23:59.224797 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:59 crc kubenswrapper[4755]: I1210 15:23:59.224812 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:59 crc kubenswrapper[4755]: I1210 15:23:59.224654 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fa03060-a6e0-4aad-9aa1-43b1a0d00c85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a823333ed5cb5de3988d25e50e4b7a0f9071c76fb39c22760a4f2acd5eb455d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d828f3f3b90a2dcb1c1908e6a686368af5b0d715b3251e4b8fcf3c8818ec75a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://172ab3fce08c8ba4095ff4095c89364a778644752bd7bb6c178d6e3ebcface69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfbb8e350ad18b78a6bcf6cfa4eb8f2fad90f97
0dba08a8b1b2026af6f255e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4b2bf0c88a16fd6b4b45a20730a92292895dc9f29ce756d347c302b25a8612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55da309288bc2c7ff1f22da0569d18155796700c200fe3ed508f6f8179756865\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55da309288bc2c7ff1f22da0569d18155796700c200fe3ed508f6f8179756865\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219c531368bb5379f3726d94ab0afc0f33eedbf91b0a4dbbe0757c6e232f79ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://219c531368bb5379f3726d94ab0afc0f33eedbf91b0a4dbbe0757c6e232f79ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f9da8f3d4f85ec83c16b71ecd983ff4f48049eb8e73461a2b46c4f66d71ed06e\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9da8f3d4f85ec83c16b71ecd983ff4f48049eb8e73461a2b46c4f66d71ed06e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:59Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:59 crc kubenswrapper[4755]: I1210 15:23:59.224821 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:59Z","lastTransitionTime":"2025-12-10T15:23:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:23:59 crc kubenswrapper[4755]: I1210 15:23:59.239636 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e6db97-7e22-4fec-924e-20e90f463887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef3871ddc16d2fc1b89a6f120390cf066424bd9ed4cca98f8dea4430c40c6c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08895c3ab20248c4bd70ccfae6442bdf8e76a602e605b043f0f00166624ada17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b2467645116856afa1e1e37501c7b453b03da193a96d1075886b85839a86ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16e58653dca08c9b33671279553975bca9507ffeeefea161119cec624813f167\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea18ef6c319336321d97c2a55449903b837b5a47da785da3ff5308244751943\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:59Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:59 crc kubenswrapper[4755]: I1210 15:23:59.254283 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:59Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:59 crc kubenswrapper[4755]: I1210 15:23:59.275120 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b1da51a-99c9-4f8e-920d-ce0973af6370\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6eb065dc6c0cc8914cb95553eb2683d894fb9a4e78ce7fac73bcce8d7f6cced9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e547993b9f2fa37bf924f909c47b62eb0cc02b659596b1cad9bbc42fdde8f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ba47683cc23d5b531a45f0658b6a9378650400b35b5372642b0430a5ac503f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a75407e83508af9adebb09c6466a966dd791d29f690c539656f9bd3396d7031\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://602b4e49987fa2cc6b54b822110aececbdddaf2bce8f27cce4ed906768d45791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://335bcab3a79f09796e97560365e1211fb30ddf288f4773c05ab353197add4365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://194ac31f80a7b5a0fde0c62cb794234cb84e5b12
e4f3db214ec3d16b4250797f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d89ebb4c7085b2d1d38908446db4cfa84ccbb4e5cf150249d1b2b508631e72d6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T15:23:53Z\\\",\\\"message\\\":\\\"sip/v1/apis/informers/externalversions/factory.go:140\\\\nI1210 15:23:53.306420 6025 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1210 15:23:53.306456 6025 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1210 15:23:53.306496 6025 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1210 15:23:53.306514 6025 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1210 15:23:53.306525 6025 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1210 15:23:53.306537 6025 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1210 15:23:53.306542 6025 handler.go:208] Removed *v1.Node event handler 7\\\\nI1210 15:23:53.306551 6025 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1210 15:23:53.306555 6025 handler.go:208] Removed *v1.Node event handler 2\\\\nI1210 15:23:53.306556 6025 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1210 15:23:53.306565 6025 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1210 15:23:53.306573 6025 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1210 15:23:53.306573 6025 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1210 15:23:53.306573 6025 factory.go:656] Stopping watch factory\\\\nI1210 15:23:53.306591 6025 handler.go:208] Removed *v1.EgressFirewall 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59bf59d1b7fbc365a916fbbceca7ae30b7ebc754b34f2f7a34c2e21e1e1d2166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://375efb27cf6bef06a6a0ffc8f6b2963e1b9ffbca18b3c8e7b402bc552a411f6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://375efb27cf6bef06a6a0ffc8f6b2963e1b9ffbca18b3c8e7b402bc552a411f6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6lfvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:59Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:59 crc kubenswrapper[4755]: I1210 15:23:59.290135 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d089a8bddfa1f80d29011c7a6bab0a300f7dd44bdb2864f86951ebbb9ebea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for 
pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:59Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:59 crc kubenswrapper[4755]: I1210 15:23:59.306067 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01ddc4056319db1f69268dcae192c3cb9db6c25284305803ae7588e59f77c346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae6bf174cdc9bde18d7c959e976454e73c1e67642f0158d365b79582f63f3cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:59Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:59 crc kubenswrapper[4755]: I1210 15:23:59.320721 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:59Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:59 crc kubenswrapper[4755]: I1210 15:23:59.326911 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:59 crc kubenswrapper[4755]: I1210 15:23:59.326965 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:59 crc kubenswrapper[4755]: I1210 15:23:59.326979 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:59 crc kubenswrapper[4755]: I1210 15:23:59.326997 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:59 crc kubenswrapper[4755]: I1210 15:23:59.327009 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:59Z","lastTransitionTime":"2025-12-10T15:23:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:23:59 crc kubenswrapper[4755]: I1210 15:23:59.334085 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zl2tx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"796da6d5-6ccd-4786-a03e-9a8e47a55031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de63a123c46563bd8cd07e669d192bc8b019a889b9bdb7af1b988872c8f1fc48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzg4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"host
IP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zl2tx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:59Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:59 crc kubenswrapper[4755]: I1210 15:23:59.346869 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b132a8b9-1c99-414d-8773-229bf36b305d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5a5a59e9f156fb791ec822c2d5efe3fc6ec0e84bfcb2b6f5da81396951984c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2mdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40c4bdaa23a60a665b8f565720d79b68cac62d40246be94fc6cd314b1bb3656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2m
dm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ggt8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:23:59Z is after 2025-08-24T17:21:41Z" Dec 10 15:23:59 crc kubenswrapper[4755]: I1210 15:23:59.429172 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:59 crc kubenswrapper[4755]: I1210 15:23:59.429219 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:59 crc kubenswrapper[4755]: I1210 15:23:59.429229 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:59 crc kubenswrapper[4755]: I1210 15:23:59.429242 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:59 crc kubenswrapper[4755]: I1210 15:23:59.429252 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:59Z","lastTransitionTime":"2025-12-10T15:23:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:23:59 crc kubenswrapper[4755]: I1210 15:23:59.531553 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:59 crc kubenswrapper[4755]: I1210 15:23:59.531626 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:59 crc kubenswrapper[4755]: I1210 15:23:59.531636 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:59 crc kubenswrapper[4755]: I1210 15:23:59.531649 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:59 crc kubenswrapper[4755]: I1210 15:23:59.531658 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:59Z","lastTransitionTime":"2025-12-10T15:23:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:23:59 crc kubenswrapper[4755]: I1210 15:23:59.600192 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 15:23:59 crc kubenswrapper[4755]: I1210 15:23:59.600337 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 15:23:59 crc kubenswrapper[4755]: E1210 15:23:59.600391 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 15:24:15.600353383 +0000 UTC m=+52.201237035 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:23:59 crc kubenswrapper[4755]: I1210 15:23:59.600454 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 15:23:59 crc kubenswrapper[4755]: E1210 15:23:59.600516 4755 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 10 15:23:59 crc kubenswrapper[4755]: I1210 15:23:59.600536 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 15:23:59 crc kubenswrapper[4755]: E1210 15:23:59.600547 4755 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 10 15:23:59 crc kubenswrapper[4755]: E1210 15:23:59.600560 4755 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 15:23:59 crc kubenswrapper[4755]: I1210 15:23:59.600604 4755 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 15:23:59 crc kubenswrapper[4755]: E1210 15:23:59.600614 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-10 15:24:15.600598259 +0000 UTC m=+52.201481891 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 15:23:59 crc kubenswrapper[4755]: E1210 15:23:59.600649 4755 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 10 15:23:59 crc kubenswrapper[4755]: E1210 15:23:59.600674 4755 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 10 15:23:59 crc kubenswrapper[4755]: E1210 15:23:59.600689 4755 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 15:23:59 crc kubenswrapper[4755]: E1210 15:23:59.600724 4755 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 10 15:23:59 crc kubenswrapper[4755]: E1210 15:23:59.600722 4755 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 10 15:23:59 crc kubenswrapper[4755]: E1210 15:23:59.600761 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-10 15:24:15.600749133 +0000 UTC m=+52.201632785 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 15:23:59 crc kubenswrapper[4755]: E1210 15:23:59.600840 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-12-10 15:24:15.600825285 +0000 UTC m=+52.201708927 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 10 15:23:59 crc kubenswrapper[4755]: E1210 15:23:59.600873 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-10 15:24:15.600864616 +0000 UTC m=+52.201748258 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 10 15:23:59 crc kubenswrapper[4755]: I1210 15:23:59.634077 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:59 crc kubenswrapper[4755]: I1210 15:23:59.634115 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:59 crc kubenswrapper[4755]: I1210 15:23:59.634124 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:59 crc kubenswrapper[4755]: I1210 15:23:59.634138 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:59 crc kubenswrapper[4755]: I1210 15:23:59.634149 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:59Z","lastTransitionTime":"2025-12-10T15:23:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:23:59 crc kubenswrapper[4755]: I1210 15:23:59.737102 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:59 crc kubenswrapper[4755]: I1210 15:23:59.737165 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:59 crc kubenswrapper[4755]: I1210 15:23:59.737176 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:59 crc kubenswrapper[4755]: I1210 15:23:59.737197 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:59 crc kubenswrapper[4755]: I1210 15:23:59.737218 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:59Z","lastTransitionTime":"2025-12-10T15:23:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:23:59 crc kubenswrapper[4755]: I1210 15:23:59.756706 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5ctz" Dec 10 15:23:59 crc kubenswrapper[4755]: E1210 15:23:59.757024 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5ctz" podUID="17673130-8212-4f8f-8859-92774f0ee202" Dec 10 15:23:59 crc kubenswrapper[4755]: I1210 15:23:59.757148 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 15:23:59 crc kubenswrapper[4755]: E1210 15:23:59.757289 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 15:23:59 crc kubenswrapper[4755]: I1210 15:23:59.757496 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 15:23:59 crc kubenswrapper[4755]: I1210 15:23:59.757611 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 15:23:59 crc kubenswrapper[4755]: E1210 15:23:59.757748 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 15:23:59 crc kubenswrapper[4755]: E1210 15:23:59.757819 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 15:23:59 crc kubenswrapper[4755]: I1210 15:23:59.802843 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/17673130-8212-4f8f-8859-92774f0ee202-metrics-certs\") pod \"network-metrics-daemon-q5ctz\" (UID: \"17673130-8212-4f8f-8859-92774f0ee202\") " pod="openshift-multus/network-metrics-daemon-q5ctz" Dec 10 15:23:59 crc kubenswrapper[4755]: E1210 15:23:59.802983 4755 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 10 15:23:59 crc kubenswrapper[4755]: E1210 15:23:59.803036 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/17673130-8212-4f8f-8859-92774f0ee202-metrics-certs podName:17673130-8212-4f8f-8859-92774f0ee202 nodeName:}" failed. No retries permitted until 2025-12-10 15:24:01.803021543 +0000 UTC m=+38.403905175 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/17673130-8212-4f8f-8859-92774f0ee202-metrics-certs") pod "network-metrics-daemon-q5ctz" (UID: "17673130-8212-4f8f-8859-92774f0ee202") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 10 15:23:59 crc kubenswrapper[4755]: I1210 15:23:59.840034 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:59 crc kubenswrapper[4755]: I1210 15:23:59.840086 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:59 crc kubenswrapper[4755]: I1210 15:23:59.840097 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:59 crc kubenswrapper[4755]: I1210 15:23:59.840113 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:59 crc kubenswrapper[4755]: I1210 15:23:59.840125 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:59Z","lastTransitionTime":"2025-12-10T15:23:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:23:59 crc kubenswrapper[4755]: I1210 15:23:59.942884 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:23:59 crc kubenswrapper[4755]: I1210 15:23:59.942920 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:23:59 crc kubenswrapper[4755]: I1210 15:23:59.942928 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:23:59 crc kubenswrapper[4755]: I1210 15:23:59.942941 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:23:59 crc kubenswrapper[4755]: I1210 15:23:59.942950 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:23:59Z","lastTransitionTime":"2025-12-10T15:23:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:00 crc kubenswrapper[4755]: I1210 15:24:00.045935 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:00 crc kubenswrapper[4755]: I1210 15:24:00.045980 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:00 crc kubenswrapper[4755]: I1210 15:24:00.045991 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:00 crc kubenswrapper[4755]: I1210 15:24:00.046007 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:00 crc kubenswrapper[4755]: I1210 15:24:00.046018 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:00Z","lastTransitionTime":"2025-12-10T15:24:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:00 crc kubenswrapper[4755]: I1210 15:24:00.078382 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6lfvk_4b1da51a-99c9-4f8e-920d-ce0973af6370/ovnkube-controller/1.log" Dec 10 15:24:00 crc kubenswrapper[4755]: I1210 15:24:00.079060 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6lfvk_4b1da51a-99c9-4f8e-920d-ce0973af6370/ovnkube-controller/0.log" Dec 10 15:24:00 crc kubenswrapper[4755]: I1210 15:24:00.083051 4755 generic.go:334] "Generic (PLEG): container finished" podID="4b1da51a-99c9-4f8e-920d-ce0973af6370" containerID="194ac31f80a7b5a0fde0c62cb794234cb84e5b12e4f3db214ec3d16b4250797f" exitCode=1 Dec 10 15:24:00 crc kubenswrapper[4755]: I1210 15:24:00.083120 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" event={"ID":"4b1da51a-99c9-4f8e-920d-ce0973af6370","Type":"ContainerDied","Data":"194ac31f80a7b5a0fde0c62cb794234cb84e5b12e4f3db214ec3d16b4250797f"} Dec 10 15:24:00 crc kubenswrapper[4755]: I1210 15:24:00.083186 4755 scope.go:117] "RemoveContainer" containerID="d89ebb4c7085b2d1d38908446db4cfa84ccbb4e5cf150249d1b2b508631e72d6" Dec 10 15:24:00 crc kubenswrapper[4755]: I1210 15:24:00.084375 4755 scope.go:117] "RemoveContainer" containerID="194ac31f80a7b5a0fde0c62cb794234cb84e5b12e4f3db214ec3d16b4250797f" Dec 10 15:24:00 crc kubenswrapper[4755]: E1210 15:24:00.085338 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-6lfvk_openshift-ovn-kubernetes(4b1da51a-99c9-4f8e-920d-ce0973af6370)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" podUID="4b1da51a-99c9-4f8e-920d-ce0973af6370" Dec 10 15:24:00 crc kubenswrapper[4755]: I1210 15:24:00.098561 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zl2tx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"796da6d5-6ccd-4786-a03e-9a8e47a55031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de63a123c46563bd8cd07e669d192bc8b019a889b9bdb7af1b988872c8f1fc48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzg4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zl2tx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:00Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:00 crc kubenswrapper[4755]: I1210 15:24:00.112657 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b132a8b9-1c99-414d-8773-229bf36b305d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5a5a59e9f156fb791ec822c2d5efe3fc6ec0e84bfcb2b6f5da81396951984c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2mdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40c4bdaa23a60a665b8f565720d79b68cac62d40246be94fc6cd314b1bb3656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2mdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-ggt8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:00Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:00 crc kubenswrapper[4755]: I1210 15:24:00.125682 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d089a8bddfa1f80d29011c7a6bab0a300f7dd44bdb2864f86951ebbb9ebea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:00Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:00 crc kubenswrapper[4755]: I1210 15:24:00.140797 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01ddc4056319db1f69268dcae192c3cb9db6c25284305803ae7588e59f77c346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae6bf174cdc9bde18d7c959e976454e73c1e67642f0158d365b79582f63f3cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:00Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:00 crc kubenswrapper[4755]: I1210 15:24:00.148023 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:00 crc kubenswrapper[4755]: I1210 15:24:00.148072 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:00 crc kubenswrapper[4755]: I1210 15:24:00.148083 4755 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 10 15:24:00 crc kubenswrapper[4755]: I1210 15:24:00.148099 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:00 crc kubenswrapper[4755]: I1210 15:24:00.148113 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:00Z","lastTransitionTime":"2025-12-10T15:24:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:00 crc kubenswrapper[4755]: I1210 15:24:00.153902 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:00Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:00 crc kubenswrapper[4755]: I1210 15:24:00.165429 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aed6bb9-78c2-410b-9c58-b60ab22a7bd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70d014e7227746c46b30f8f5a1f307a422d2fa0d4b98d98bfe5ba6217223489e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://104d730ea666ffd2d639ba7fc8d1ac5b444586788a62656fd260cb249f82b1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09dbec85547c7170bb9551e5657876814d48528e3047daf3547711a563d2b6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8bf1de0d8bdc0a20bc42ba5097d849ae0f507e5fbc18fb17b4ff3650e46ff0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:00Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:00 crc kubenswrapper[4755]: I1210 15:24:00.180460 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n8c6s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b56ae78-835a-45da-bc46-5adff2bdf9fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8957db8ccef7b3c449920471d345aa81ce9a7ab9be36b2350ec428aebec7ac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b9a1485f6d0c0e53fd3505623b2788476bc00abea2f11650386eac40e449bcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9a1485f6d0c0e53fd3505623b2788476bc00abea2f11650386eac40e449bcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a532015d196c588ad3dd6c43d9fa3772ba2a42bd1b2231f830b60adb0addab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a532015d196c588ad3dd6c43d9fa3772ba2a42bd1b2231f830b60adb0addab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://417e326f541e04df27d540c14ec572bb8f1d749a9e3dc88f483241d69f7d0679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://417e326f541e04df27d540c14ec572bb8f1d749a9e3dc88f483241d69f7d0679\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42867f7a1f6aceaf708ad650b7634bdafb3f850872c540d424916440014e7941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42867f7a1f6aceaf708ad650b7634bdafb3f850872c540d424916440014e7941\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f2d22bfc26958a3aad99de39f5c831d36c1ba4f563851baafc73735e80d5c91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f2d22bfc26958a3aad99de39f5c831d36c1ba4f563851baafc73735e80d5c91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://548d326fc5c14c16176d4fff3446d93c17322c11d2a34e9e42376b665de3848d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://548d326fc5c14c16176d4fff3446d93c17322c11d2a34e9e42376b665de3848d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n8c6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:00Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:00 crc kubenswrapper[4755]: I1210 15:24:00.190409 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qnmst" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1693fcf1-bef4-4f82-8dd8-f1797b03f5e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8fd6a2ab2b9557574951c0ebbccd663fe576262b0de2c3c655427c977f62d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hzdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qnmst\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:00Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:00 crc kubenswrapper[4755]: I1210 15:24:00.203648 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-82wnw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0a2f42c-a60e-4350-9ebb-c28d3cbfdad7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7af643111a7a5d1d78b0412b1621b5ffac6389760ea9190e26e9e5d1704eed4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xbzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://915c918562f8a69a28cbcf29e427bfdd94477193b98d5a2da9ad45cb4b44fa03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xbzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-82wnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:00Z is after 2025-08-24T17:21:41Z" Dec 10 
15:24:00 crc kubenswrapper[4755]: I1210 15:24:00.215690 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q5ctz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17673130-8212-4f8f-8859-92774f0ee202\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcscn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcscn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q5ctz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:00Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:00 crc kubenswrapper[4755]: I1210 15:24:00.235265 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fa03060-a6e0-4aad-9aa1-43b1a0d00c85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a823333ed5cb5de3988d25e50e4b7a0f9071c76fb39c22760a4f2acd5eb455d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d828f3f3b90a2dcb1c1908e6a686368af5b0d715b3251e4b8fcf3c8818ec75a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://172ab3fce08c8ba4095ff4095c89364a778644752bd7bb6c178d6e3ebcface69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfbb8e350ad18b78a6bcf6cfa4eb8f2fad90f97
0dba08a8b1b2026af6f255e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4b2bf0c88a16fd6b4b45a20730a92292895dc9f29ce756d347c302b25a8612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55da309288bc2c7ff1f22da0569d18155796700c200fe3ed508f6f8179756865\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55da309288bc2c7ff1f22da0569d18155796700c200fe3ed508f6f8179756865\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219c531368bb5379f3726d94ab0afc0f33eedbf91b0a4dbbe0757c6e232f79ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://219c531368bb5379f3726d94ab0afc0f33eedbf91b0a4dbbe0757c6e232f79ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f9da8f3d4f85ec83c16b71ecd983ff4f48049eb8e73461a2b46c4f66d71ed06e\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9da8f3d4f85ec83c16b71ecd983ff4f48049eb8e73461a2b46c4f66d71ed06e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:00Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:00 crc kubenswrapper[4755]: I1210 15:24:00.247594 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:00Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:00 crc kubenswrapper[4755]: I1210 15:24:00.250652 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:00 crc kubenswrapper[4755]: I1210 15:24:00.250695 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:00 crc kubenswrapper[4755]: I1210 15:24:00.250707 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:00 crc kubenswrapper[4755]: I1210 15:24:00.250724 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:00 crc kubenswrapper[4755]: I1210 15:24:00.250740 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:00Z","lastTransitionTime":"2025-12-10T15:24:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:00 crc kubenswrapper[4755]: I1210 15:24:00.262194 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c371b199c3643a68ea5eba935eb76a1b7e8a4027c9292a1324116ccfc14742ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:00Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:00 crc kubenswrapper[4755]: I1210 15:24:00.273112 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wv8fh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d150a22e-c59a-4376-a5c8-db4085ea0be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58d106c5d1b9525ec821d009ca556449cd8d7f0e1b9c8ec7dd969df996e73625\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfw9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wv8fh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:00Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:00 crc kubenswrapper[4755]: I1210 15:24:00.285230 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e6db97-7e22-4fec-924e-20e90f463887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef3871ddc16d2fc1b89a6f120390cf066424bd9ed4cca98f8dea4430c40c6c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08895c3ab20248c4bd70ccfae6442bdf8e76a602e605b043f0f00166624ada17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b2467645116856afa1e1e37501c7b453b03da193a96d1075886b85839a86ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16e58653dca08c9b33671279553975bca9507ffeeefea161119cec624813f167\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea18ef6c319336321d97c2a55449903b837b5a47da785da3ff5308244751943\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:00Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:00 crc kubenswrapper[4755]: I1210 15:24:00.297075 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:00Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:00 crc kubenswrapper[4755]: I1210 15:24:00.314020 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b1da51a-99c9-4f8e-920d-ce0973af6370\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6eb065dc6c0cc8914cb95553eb2683d894fb9a4e78ce7fac73bcce8d7f6cced9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e547993b9f2fa37bf924f909c47b62eb0cc02b659596b1cad9bbc42fdde8f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ba47683cc23d5b531a45f0658b6a9378650400b35b5372642b0430a5ac503f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a75407e83508af9adebb09c6466a966dd791d29f690c539656f9bd3396d7031\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://602b4e49987fa2cc6b54b822110aececbdddaf2bce8f27cce4ed906768d45791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://335bcab3a79f09796e97560365e1211fb30ddf288f4773c05ab353197add4365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://194ac31f80a7b5a0fde0c62cb794234cb84e5b12
e4f3db214ec3d16b4250797f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d89ebb4c7085b2d1d38908446db4cfa84ccbb4e5cf150249d1b2b508631e72d6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T15:23:53Z\\\",\\\"message\\\":\\\"sip/v1/apis/informers/externalversions/factory.go:140\\\\nI1210 15:23:53.306420 6025 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1210 15:23:53.306456 6025 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1210 15:23:53.306496 6025 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1210 15:23:53.306514 6025 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1210 15:23:53.306525 6025 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1210 15:23:53.306537 6025 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1210 15:23:53.306542 6025 handler.go:208] Removed *v1.Node event handler 7\\\\nI1210 15:23:53.306551 6025 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1210 15:23:53.306555 6025 handler.go:208] Removed *v1.Node event handler 2\\\\nI1210 15:23:53.306556 6025 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1210 15:23:53.306565 6025 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1210 15:23:53.306573 6025 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1210 15:23:53.306573 6025 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1210 15:23:53.306573 6025 factory.go:656] Stopping watch factory\\\\nI1210 15:23:53.306591 6025 handler.go:208] Removed *v1.EgressFirewall ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://194ac31f80a7b5a0fde0c62cb794234cb84e5b12e4f3db214ec3d16b4250797f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T15:23:59Z\\\",\\\"message\\\":\\\"ss event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1210 15:23:58.955006 6153 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1210 15:23:58.954962 6153 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI1210 15:23:58.955022 6153 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI1210 15:23:58.955028 6153 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI1210 15:23:58.955034 6153 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nF1210 15:23:58.955079 6153 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network 
controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"ht\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59bf59d1b7fbc365a916fbbceca7ae30b7ebc754b34f2f7a34c2e21e1e1d2166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\
\"containerID\\\":\\\"cri-o://375efb27cf6bef06a6a0ffc8f6b2963e1b9ffbca18b3c8e7b402bc552a411f6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://375efb27cf6bef06a6a0ffc8f6b2963e1b9ffbca18b3c8e7b402bc552a411f6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6lfvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:00Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:00 crc kubenswrapper[4755]: I1210 15:24:00.352725 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:00 crc kubenswrapper[4755]: I1210 15:24:00.352772 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:00 crc kubenswrapper[4755]: I1210 15:24:00.352788 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:00 crc kubenswrapper[4755]: I1210 15:24:00.352804 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:00 crc kubenswrapper[4755]: I1210 15:24:00.352816 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:00Z","lastTransitionTime":"2025-12-10T15:24:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
[Repeated entries collapsed: the block of four kubelet_node_status.go:724 "Recording event message for node" events (NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, NodeNotReady) plus the setters.go:603 "Node became not ready" condition (reason "KubeletNotReady", message "container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?") recurs, identical except for timestamps, roughly every 100 ms from 15:24:00.455646 through 15:24:02.211099, where this excerpt ends. The non-repeating entries logged inside that window are kept below.]
Dec 10 15:24:01 crc kubenswrapper[4755]: I1210 15:24:01.087546 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6lfvk_4b1da51a-99c9-4f8e-920d-ce0973af6370/ovnkube-controller/1.log"
Dec 10 15:24:01 crc kubenswrapper[4755]: I1210 15:24:01.756708 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5ctz"
Dec 10 15:24:01 crc kubenswrapper[4755]: I1210 15:24:01.756762 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 10 15:24:01 crc kubenswrapper[4755]: I1210 15:24:01.756723 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 10 15:24:01 crc kubenswrapper[4755]: I1210 15:24:01.756721 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 10 15:24:01 crc kubenswrapper[4755]: E1210 15:24:01.756838 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5ctz" podUID="17673130-8212-4f8f-8859-92774f0ee202"
Dec 10 15:24:01 crc kubenswrapper[4755]: E1210 15:24:01.756880 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 10 15:24:01 crc kubenswrapper[4755]: E1210 15:24:01.756923 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 10 15:24:01 crc kubenswrapper[4755]: E1210 15:24:01.756963 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 10 15:24:01 crc kubenswrapper[4755]: I1210 15:24:01.824220 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/17673130-8212-4f8f-8859-92774f0ee202-metrics-certs\") pod \"network-metrics-daemon-q5ctz\" (UID: \"17673130-8212-4f8f-8859-92774f0ee202\") " pod="openshift-multus/network-metrics-daemon-q5ctz"
Dec 10 15:24:01 crc kubenswrapper[4755]: E1210 15:24:01.824510 4755 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 10 15:24:01 crc kubenswrapper[4755]: E1210 15:24:01.824628 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/17673130-8212-4f8f-8859-92774f0ee202-metrics-certs podName:17673130-8212-4f8f-8859-92774f0ee202 nodeName:}" failed. No retries permitted until 2025-12-10 15:24:05.824602133 +0000 UTC m=+42.425485805 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/17673130-8212-4f8f-8859-92774f0ee202-metrics-certs") pod "network-metrics-daemon-q5ctz" (UID: "17673130-8212-4f8f-8859-92774f0ee202") : object "openshift-multus"/"metrics-daemon-secret" not registered
Has your network provider started?"} Dec 10 15:24:02 crc kubenswrapper[4755]: I1210 15:24:02.314348 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:02 crc kubenswrapper[4755]: I1210 15:24:02.314545 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:02 crc kubenswrapper[4755]: I1210 15:24:02.314574 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:02 crc kubenswrapper[4755]: I1210 15:24:02.314613 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:02 crc kubenswrapper[4755]: I1210 15:24:02.314639 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:02Z","lastTransitionTime":"2025-12-10T15:24:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:02 crc kubenswrapper[4755]: I1210 15:24:02.417041 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:02 crc kubenswrapper[4755]: I1210 15:24:02.417095 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:02 crc kubenswrapper[4755]: I1210 15:24:02.417109 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:02 crc kubenswrapper[4755]: I1210 15:24:02.417137 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:02 crc kubenswrapper[4755]: I1210 15:24:02.417149 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:02Z","lastTransitionTime":"2025-12-10T15:24:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:02 crc kubenswrapper[4755]: I1210 15:24:02.520426 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:02 crc kubenswrapper[4755]: I1210 15:24:02.520584 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:02 crc kubenswrapper[4755]: I1210 15:24:02.520609 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:02 crc kubenswrapper[4755]: I1210 15:24:02.520641 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:02 crc kubenswrapper[4755]: I1210 15:24:02.520673 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:02Z","lastTransitionTime":"2025-12-10T15:24:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:02 crc kubenswrapper[4755]: I1210 15:24:02.623258 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:02 crc kubenswrapper[4755]: I1210 15:24:02.623299 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:02 crc kubenswrapper[4755]: I1210 15:24:02.623309 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:02 crc kubenswrapper[4755]: I1210 15:24:02.623326 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:02 crc kubenswrapper[4755]: I1210 15:24:02.623336 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:02Z","lastTransitionTime":"2025-12-10T15:24:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:02 crc kubenswrapper[4755]: I1210 15:24:02.725686 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:02 crc kubenswrapper[4755]: I1210 15:24:02.725750 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:02 crc kubenswrapper[4755]: I1210 15:24:02.725763 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:02 crc kubenswrapper[4755]: I1210 15:24:02.725778 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:02 crc kubenswrapper[4755]: I1210 15:24:02.725791 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:02Z","lastTransitionTime":"2025-12-10T15:24:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:02 crc kubenswrapper[4755]: I1210 15:24:02.828603 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:02 crc kubenswrapper[4755]: I1210 15:24:02.828642 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:02 crc kubenswrapper[4755]: I1210 15:24:02.828653 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:02 crc kubenswrapper[4755]: I1210 15:24:02.828670 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:02 crc kubenswrapper[4755]: I1210 15:24:02.828681 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:02Z","lastTransitionTime":"2025-12-10T15:24:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:02 crc kubenswrapper[4755]: I1210 15:24:02.931380 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:02 crc kubenswrapper[4755]: I1210 15:24:02.931428 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:02 crc kubenswrapper[4755]: I1210 15:24:02.931441 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:02 crc kubenswrapper[4755]: I1210 15:24:02.931460 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:02 crc kubenswrapper[4755]: I1210 15:24:02.931498 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:02Z","lastTransitionTime":"2025-12-10T15:24:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:03 crc kubenswrapper[4755]: I1210 15:24:03.034134 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:03 crc kubenswrapper[4755]: I1210 15:24:03.034178 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:03 crc kubenswrapper[4755]: I1210 15:24:03.034188 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:03 crc kubenswrapper[4755]: I1210 15:24:03.034203 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:03 crc kubenswrapper[4755]: I1210 15:24:03.034214 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:03Z","lastTransitionTime":"2025-12-10T15:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:03 crc kubenswrapper[4755]: I1210 15:24:03.137861 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:03 crc kubenswrapper[4755]: I1210 15:24:03.137900 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:03 crc kubenswrapper[4755]: I1210 15:24:03.137913 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:03 crc kubenswrapper[4755]: I1210 15:24:03.137936 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:03 crc kubenswrapper[4755]: I1210 15:24:03.137947 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:03Z","lastTransitionTime":"2025-12-10T15:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:03 crc kubenswrapper[4755]: I1210 15:24:03.240844 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:03 crc kubenswrapper[4755]: I1210 15:24:03.241349 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:03 crc kubenswrapper[4755]: I1210 15:24:03.241562 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:03 crc kubenswrapper[4755]: I1210 15:24:03.241714 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:03 crc kubenswrapper[4755]: I1210 15:24:03.241862 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:03Z","lastTransitionTime":"2025-12-10T15:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:03 crc kubenswrapper[4755]: I1210 15:24:03.345807 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:03 crc kubenswrapper[4755]: I1210 15:24:03.346156 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:03 crc kubenswrapper[4755]: I1210 15:24:03.346308 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:03 crc kubenswrapper[4755]: I1210 15:24:03.346440 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:03 crc kubenswrapper[4755]: I1210 15:24:03.346624 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:03Z","lastTransitionTime":"2025-12-10T15:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:03 crc kubenswrapper[4755]: I1210 15:24:03.449645 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:03 crc kubenswrapper[4755]: I1210 15:24:03.449709 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:03 crc kubenswrapper[4755]: I1210 15:24:03.449728 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:03 crc kubenswrapper[4755]: I1210 15:24:03.449755 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:03 crc kubenswrapper[4755]: I1210 15:24:03.449773 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:03Z","lastTransitionTime":"2025-12-10T15:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:03 crc kubenswrapper[4755]: I1210 15:24:03.553162 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:03 crc kubenswrapper[4755]: I1210 15:24:03.553555 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:03 crc kubenswrapper[4755]: I1210 15:24:03.553570 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:03 crc kubenswrapper[4755]: I1210 15:24:03.553590 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:03 crc kubenswrapper[4755]: I1210 15:24:03.553602 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:03Z","lastTransitionTime":"2025-12-10T15:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:03 crc kubenswrapper[4755]: I1210 15:24:03.656619 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:03 crc kubenswrapper[4755]: I1210 15:24:03.656656 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:03 crc kubenswrapper[4755]: I1210 15:24:03.656666 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:03 crc kubenswrapper[4755]: I1210 15:24:03.656682 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:03 crc kubenswrapper[4755]: I1210 15:24:03.656694 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:03Z","lastTransitionTime":"2025-12-10T15:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:03 crc kubenswrapper[4755]: I1210 15:24:03.757256 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 15:24:03 crc kubenswrapper[4755]: E1210 15:24:03.757654 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 15:24:03 crc kubenswrapper[4755]: I1210 15:24:03.757375 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 15:24:03 crc kubenswrapper[4755]: E1210 15:24:03.757928 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 15:24:03 crc kubenswrapper[4755]: I1210 15:24:03.757334 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 15:24:03 crc kubenswrapper[4755]: E1210 15:24:03.758166 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 15:24:03 crc kubenswrapper[4755]: I1210 15:24:03.757524 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5ctz" Dec 10 15:24:03 crc kubenswrapper[4755]: E1210 15:24:03.758399 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5ctz" podUID="17673130-8212-4f8f-8859-92774f0ee202" Dec 10 15:24:03 crc kubenswrapper[4755]: I1210 15:24:03.758983 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:03 crc kubenswrapper[4755]: I1210 15:24:03.759018 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:03 crc kubenswrapper[4755]: I1210 15:24:03.759029 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:03 crc kubenswrapper[4755]: I1210 15:24:03.759044 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:03 crc kubenswrapper[4755]: I1210 15:24:03.759055 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:03Z","lastTransitionTime":"2025-12-10T15:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:03 crc kubenswrapper[4755]: I1210 15:24:03.776724 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e6db97-7e22-4fec-924e-20e90f463887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef3871ddc16d2fc1b89a6f120390cf066424bd9ed4cca98f8dea4430c40c6c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08895c3ab20248c4bd70ccfae6442bdf8e76a602e605b043f0f00166624ada17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b2467645116856afa1e1e37501c7b453b03da193a96d1075886b85839a86ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16e58653dca08c9b33671279553975bca9507ffeeefea161119cec624813f167\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea18ef6c319336321d97c2a55449903b837b5a47da785da3ff5308244751943\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:03Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:03 crc kubenswrapper[4755]: I1210 15:24:03.794055 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:03Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:03 crc kubenswrapper[4755]: I1210 15:24:03.822218 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b1da51a-99c9-4f8e-920d-ce0973af6370\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6eb065dc6c0cc8914cb95553eb2683d894fb9a4e78ce7fac73bcce8d7f6cced9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e547993b9f2fa37bf924f909c47b62eb0cc02b659596b1cad9bbc42fdde8f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ba47683cc23d5b531a45f0658b6a9378650400b35b5372642b0430a5ac503f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a75407e83508af9adebb09c6466a966dd791d29f690c539656f9bd3396d7031\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://602b4e49987fa2cc6b54b822110aececbdddaf2bce8f27cce4ed906768d45791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://335bcab3a79f09796e97560365e1211fb30ddf288f4773c05ab353197add4365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://194ac31f80a7b5a0fde0c62cb794234cb84e5b12
e4f3db214ec3d16b4250797f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d89ebb4c7085b2d1d38908446db4cfa84ccbb4e5cf150249d1b2b508631e72d6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T15:23:53Z\\\",\\\"message\\\":\\\"sip/v1/apis/informers/externalversions/factory.go:140\\\\nI1210 15:23:53.306420 6025 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1210 15:23:53.306456 6025 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1210 15:23:53.306496 6025 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1210 15:23:53.306514 6025 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1210 15:23:53.306525 6025 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1210 15:23:53.306537 6025 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1210 15:23:53.306542 6025 handler.go:208] Removed *v1.Node event handler 7\\\\nI1210 15:23:53.306551 6025 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1210 15:23:53.306555 6025 handler.go:208] Removed *v1.Node event handler 2\\\\nI1210 15:23:53.306556 6025 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1210 15:23:53.306565 6025 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1210 15:23:53.306573 6025 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1210 15:23:53.306573 6025 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1210 15:23:53.306573 6025 factory.go:656] Stopping watch factory\\\\nI1210 15:23:53.306591 6025 handler.go:208] Removed *v1.EgressFirewall ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://194ac31f80a7b5a0fde0c62cb794234cb84e5b12e4f3db214ec3d16b4250797f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T15:23:59Z\\\",\\\"message\\\":\\\"ss event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1210 15:23:58.955006 6153 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1210 15:23:58.954962 6153 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI1210 15:23:58.955022 6153 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI1210 15:23:58.955028 6153 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI1210 15:23:58.955034 6153 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nF1210 15:23:58.955079 6153 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network 
controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"ht\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59bf59d1b7fbc365a916fbbceca7ae30b7ebc754b34f2f7a34c2e21e1e1d2166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\
\"containerID\\\":\\\"cri-o://375efb27cf6bef06a6a0ffc8f6b2963e1b9ffbca18b3c8e7b402bc552a411f6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://375efb27cf6bef06a6a0ffc8f6b2963e1b9ffbca18b3c8e7b402bc552a411f6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6lfvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:03Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:03 crc kubenswrapper[4755]: I1210 15:24:03.839121 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d089a8bddfa1f80d29011c7a6bab0a300f7dd44bdb2864f86951ebbb9ebea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" 
for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:03Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:03 crc kubenswrapper[4755]: I1210 15:24:03.852247 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01ddc4056319db1f69268dcae192c3cb9db6c25284305803ae7588e59f77c346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae6bf174cdc9bde18d7c959e976454e73c1e67642f0158d365b79582f63f3cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:03Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:03 crc kubenswrapper[4755]: I1210 15:24:03.861912 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:03 crc kubenswrapper[4755]: I1210 15:24:03.861985 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:03 crc kubenswrapper[4755]: I1210 15:24:03.862008 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:03 crc kubenswrapper[4755]: I1210 15:24:03.862039 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:03 crc kubenswrapper[4755]: I1210 15:24:03.862062 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:03Z","lastTransitionTime":"2025-12-10T15:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:03 crc kubenswrapper[4755]: I1210 15:24:03.865898 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:03Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:03 crc kubenswrapper[4755]: I1210 15:24:03.884100 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zl2tx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"796da6d5-6ccd-4786-a03e-9a8e47a55031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de63a123c46563bd8cd07e669d192bc8b019a889b9bdb7af1b988872c8f1fc48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzg4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zl2tx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:03Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:03 crc kubenswrapper[4755]: I1210 15:24:03.900580 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b132a8b9-1c99-414d-8773-229bf36b305d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5a5a59e9f156fb791ec822c2d5efe3fc6ec0e84bfcb2b6f5da81396951984c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2mdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-
o://a40c4bdaa23a60a665b8f565720d79b68cac62d40246be94fc6cd314b1bb3656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2mdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ggt8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:03Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:03 crc kubenswrapper[4755]: I1210 15:24:03.921316 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aed6bb9-78c2-410b-9c58-b60ab22a7bd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70d014e7227746c46b30f8f5a1f307a422d2fa0d4b98d98bfe5ba6217223489e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://104d730ea666ffd2d639ba7fc8d1ac5b444586788a62656fd260cb249f82b1ae\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09dbec85547c7170bb9551e5657876814d48528e3047daf3547711a563d2b6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8bf1de0d8bdc0a20bc42ba5097d849ae0f507e5fbc18fb17b4ff3650e46ff0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:03Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:03 crc kubenswrapper[4755]: I1210 15:24:03.937982 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n8c6s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b56ae78-835a-45da-bc46-5adff2bdf9fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8957db8ccef7b3c449920471d345aa81ce9a7ab9be36b2350ec428aebec7ac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b9a1485f6d0c0e53fd3505623b2788476bc00abea2f11650386eac40e449bcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9a1485f6d0c0e53fd3505623b2788476bc00abea2f11650386eac40e449bcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a532015d196c588ad3dd6c43d9fa3772ba2a42bd1b2231f830b60adb0addab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a532015d196c588ad3dd6c43d9fa3772ba2a42bd1b2231f830b60adb0addab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://417e326f541e04df27d540c14ec572bb8f1d749a9e3dc88f483241d69f7d0679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://417e326f541e04df27d540c14ec572bb8f1d749a9e3dc88f483241d69f7d0679\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42867f7a1f6aceaf708ad650b7634bdafb3f850872c540d424916440014e7941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42867f7a1f6aceaf708ad650b7634bdafb3f850872c540d424916440014e7941\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f2d22bfc26958a3aad99de39f5c831d36c1ba4f563851baafc73735e80d5c91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f2d22bfc26958a3aad99de39f5c831d36c1ba4f563851baafc73735e80d5c91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://548d326fc5c14c16176d4fff3446d93c17322c11d2a34e9e42376b665de3848d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://548d326fc5c14c16176d4fff3446d93c17322c11d2a34e9e42376b665de3848d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n8c6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:03Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:03 crc kubenswrapper[4755]: I1210 15:24:03.950362 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qnmst" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1693fcf1-bef4-4f82-8dd8-f1797b03f5e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8fd6a2ab2b9557574951c0ebbccd663fe576262b0de2c3c655427c977f62d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hzdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qnmst\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:03Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:03 crc kubenswrapper[4755]: I1210 15:24:03.964549 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:03 crc kubenswrapper[4755]: I1210 15:24:03.964619 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:03 crc kubenswrapper[4755]: I1210 15:24:03.964643 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:03 crc kubenswrapper[4755]: I1210 15:24:03.964717 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:03 crc kubenswrapper[4755]: I1210 15:24:03.964737 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:03Z","lastTransitionTime":"2025-12-10T15:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:03 crc kubenswrapper[4755]: I1210 15:24:03.965607 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-82wnw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0a2f42c-a60e-4350-9ebb-c28d3cbfdad7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7af643111a7a5d1d78b0412b1621b5ffac6389760ea9190e26e9e5d1704eed4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xbzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://915c918562f8a69a28cbcf29e427bfdd94477193b98d5a2da9ad45cb4b44fa03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xbzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:56Z\\\"}}\" 
for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-82wnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:03Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:03 crc kubenswrapper[4755]: I1210 15:24:03.994613 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fa03060-a6e0-4aad-9aa1-43b1a0d00c85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a823333ed5cb5de3988d25e50e4b7a0f9071c76fb39c22760a4f2acd5eb455d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d828f3f3b90a2dcb1c1908e6a686368af5b0d715b3251e4b8fcf3c8818ec75a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://172ab3fce08c8ba4095ff4095c89364a778644752bd7bb6c178d6e3ebcface69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfbb8e350ad18b78a6bcf6cfa4eb8f2fad90f970dba08a8b1b2026af6f255e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4b2bf0c88a16fd6b4b45a20730a92292895dc9f29ce756d347c302b25a8612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55da309288bc2c7ff1f22da0569d18155796700c200fe3ed508f6f8179756865\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55da309288bc2c7ff1f22da0569d18155796700c200fe3ed508f6f8179756865\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219c531368bb5379f3726d94ab0afc0f33eedbf91b0a4dbbe0757c6e232f79ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b
7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://219c531368bb5379f3726d94ab0afc0f33eedbf91b0a4dbbe0757c6e232f79ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f9da8f3d4f85ec83c16b71ecd983ff4f48049eb8e73461a2b46c4f66d71ed06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9da8f3d4f85ec83c16b71ecd983ff4f48049eb8e73461a2b46c4f66d71ed06e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:03Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:04 crc kubenswrapper[4755]: I1210 15:24:04.010864 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:04Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:04 crc kubenswrapper[4755]: I1210 15:24:04.022002 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c371b199c3643a68ea5eba935eb76a1b7e8a4027c9292a1324116ccfc14742ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:04Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:04 crc kubenswrapper[4755]: I1210 15:24:04.034930 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wv8fh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d150a22e-c59a-4376-a5c8-db4085ea0be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58d106c5d1b9525ec821d009ca556449cd8d7f0e1b9c8ec7dd969df996e73625\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfw9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wv8fh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:04Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:04 crc kubenswrapper[4755]: I1210 15:24:04.047741 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q5ctz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17673130-8212-4f8f-8859-92774f0ee202\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcscn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcscn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q5ctz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:04Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:04 crc kubenswrapper[4755]: I1210 15:24:04.067053 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:04 crc kubenswrapper[4755]: I1210 15:24:04.067080 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:04 crc kubenswrapper[4755]: I1210 15:24:04.067091 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:04 crc kubenswrapper[4755]: I1210 15:24:04.067124 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:04 crc kubenswrapper[4755]: I1210 15:24:04.067136 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:04Z","lastTransitionTime":"2025-12-10T15:24:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:04 crc kubenswrapper[4755]: I1210 15:24:04.169986 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:04 crc kubenswrapper[4755]: I1210 15:24:04.170019 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:04 crc kubenswrapper[4755]: I1210 15:24:04.170038 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:04 crc kubenswrapper[4755]: I1210 15:24:04.170055 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:04 crc kubenswrapper[4755]: I1210 15:24:04.170065 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:04Z","lastTransitionTime":"2025-12-10T15:24:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:04 crc kubenswrapper[4755]: I1210 15:24:04.272996 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:04 crc kubenswrapper[4755]: I1210 15:24:04.273029 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:04 crc kubenswrapper[4755]: I1210 15:24:04.273038 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:04 crc kubenswrapper[4755]: I1210 15:24:04.273053 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:04 crc kubenswrapper[4755]: I1210 15:24:04.273062 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:04Z","lastTransitionTime":"2025-12-10T15:24:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:04 crc kubenswrapper[4755]: I1210 15:24:04.392237 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:04 crc kubenswrapper[4755]: I1210 15:24:04.392299 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:04 crc kubenswrapper[4755]: I1210 15:24:04.392311 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:04 crc kubenswrapper[4755]: I1210 15:24:04.392329 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:04 crc kubenswrapper[4755]: I1210 15:24:04.392343 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:04Z","lastTransitionTime":"2025-12-10T15:24:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:04 crc kubenswrapper[4755]: I1210 15:24:04.495278 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:04 crc kubenswrapper[4755]: I1210 15:24:04.495347 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:04 crc kubenswrapper[4755]: I1210 15:24:04.495362 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:04 crc kubenswrapper[4755]: I1210 15:24:04.495383 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:04 crc kubenswrapper[4755]: I1210 15:24:04.495400 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:04Z","lastTransitionTime":"2025-12-10T15:24:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:04 crc kubenswrapper[4755]: I1210 15:24:04.597659 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:04 crc kubenswrapper[4755]: I1210 15:24:04.597692 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:04 crc kubenswrapper[4755]: I1210 15:24:04.597699 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:04 crc kubenswrapper[4755]: I1210 15:24:04.597712 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:04 crc kubenswrapper[4755]: I1210 15:24:04.597720 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:04Z","lastTransitionTime":"2025-12-10T15:24:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:04 crc kubenswrapper[4755]: I1210 15:24:04.700144 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:04 crc kubenswrapper[4755]: I1210 15:24:04.700213 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:04 crc kubenswrapper[4755]: I1210 15:24:04.700228 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:04 crc kubenswrapper[4755]: I1210 15:24:04.700251 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:04 crc kubenswrapper[4755]: I1210 15:24:04.700264 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:04Z","lastTransitionTime":"2025-12-10T15:24:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:04 crc kubenswrapper[4755]: I1210 15:24:04.802613 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:04 crc kubenswrapper[4755]: I1210 15:24:04.802663 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:04 crc kubenswrapper[4755]: I1210 15:24:04.802675 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:04 crc kubenswrapper[4755]: I1210 15:24:04.802690 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:04 crc kubenswrapper[4755]: I1210 15:24:04.802702 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:04Z","lastTransitionTime":"2025-12-10T15:24:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:04 crc kubenswrapper[4755]: I1210 15:24:04.905144 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:04 crc kubenswrapper[4755]: I1210 15:24:04.905192 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:04 crc kubenswrapper[4755]: I1210 15:24:04.905202 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:04 crc kubenswrapper[4755]: I1210 15:24:04.905219 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:04 crc kubenswrapper[4755]: I1210 15:24:04.905229 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:04Z","lastTransitionTime":"2025-12-10T15:24:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:05 crc kubenswrapper[4755]: I1210 15:24:05.007163 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:05 crc kubenswrapper[4755]: I1210 15:24:05.007208 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:05 crc kubenswrapper[4755]: I1210 15:24:05.007217 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:05 crc kubenswrapper[4755]: I1210 15:24:05.007232 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:05 crc kubenswrapper[4755]: I1210 15:24:05.007241 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:05Z","lastTransitionTime":"2025-12-10T15:24:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:05 crc kubenswrapper[4755]: I1210 15:24:05.109246 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:05 crc kubenswrapper[4755]: I1210 15:24:05.109286 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:05 crc kubenswrapper[4755]: I1210 15:24:05.109297 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:05 crc kubenswrapper[4755]: I1210 15:24:05.109311 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:05 crc kubenswrapper[4755]: I1210 15:24:05.109322 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:05Z","lastTransitionTime":"2025-12-10T15:24:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:05 crc kubenswrapper[4755]: I1210 15:24:05.211260 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:05 crc kubenswrapper[4755]: I1210 15:24:05.211305 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:05 crc kubenswrapper[4755]: I1210 15:24:05.211316 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:05 crc kubenswrapper[4755]: I1210 15:24:05.211335 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:05 crc kubenswrapper[4755]: I1210 15:24:05.211385 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:05Z","lastTransitionTime":"2025-12-10T15:24:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:05 crc kubenswrapper[4755]: I1210 15:24:05.314081 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:05 crc kubenswrapper[4755]: I1210 15:24:05.314146 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:05 crc kubenswrapper[4755]: I1210 15:24:05.314160 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:05 crc kubenswrapper[4755]: I1210 15:24:05.314177 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:05 crc kubenswrapper[4755]: I1210 15:24:05.314188 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:05Z","lastTransitionTime":"2025-12-10T15:24:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:05 crc kubenswrapper[4755]: I1210 15:24:05.416663 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:05 crc kubenswrapper[4755]: I1210 15:24:05.416704 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:05 crc kubenswrapper[4755]: I1210 15:24:05.416714 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:05 crc kubenswrapper[4755]: I1210 15:24:05.416730 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:05 crc kubenswrapper[4755]: I1210 15:24:05.416743 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:05Z","lastTransitionTime":"2025-12-10T15:24:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:05 crc kubenswrapper[4755]: I1210 15:24:05.519706 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:05 crc kubenswrapper[4755]: I1210 15:24:05.519776 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:05 crc kubenswrapper[4755]: I1210 15:24:05.519809 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:05 crc kubenswrapper[4755]: I1210 15:24:05.519837 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:05 crc kubenswrapper[4755]: I1210 15:24:05.519894 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:05Z","lastTransitionTime":"2025-12-10T15:24:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:05 crc kubenswrapper[4755]: I1210 15:24:05.622621 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:05 crc kubenswrapper[4755]: I1210 15:24:05.622673 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:05 crc kubenswrapper[4755]: I1210 15:24:05.622685 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:05 crc kubenswrapper[4755]: I1210 15:24:05.622703 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:05 crc kubenswrapper[4755]: I1210 15:24:05.622715 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:05Z","lastTransitionTime":"2025-12-10T15:24:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:05 crc kubenswrapper[4755]: I1210 15:24:05.725243 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:05 crc kubenswrapper[4755]: I1210 15:24:05.725298 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:05 crc kubenswrapper[4755]: I1210 15:24:05.725315 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:05 crc kubenswrapper[4755]: I1210 15:24:05.725336 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:05 crc kubenswrapper[4755]: I1210 15:24:05.725352 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:05Z","lastTransitionTime":"2025-12-10T15:24:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:05 crc kubenswrapper[4755]: I1210 15:24:05.756853 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 15:24:05 crc kubenswrapper[4755]: I1210 15:24:05.756926 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 15:24:05 crc kubenswrapper[4755]: I1210 15:24:05.756939 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5ctz" Dec 10 15:24:05 crc kubenswrapper[4755]: E1210 15:24:05.757055 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 15:24:05 crc kubenswrapper[4755]: I1210 15:24:05.757110 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 15:24:05 crc kubenswrapper[4755]: E1210 15:24:05.757246 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 15:24:05 crc kubenswrapper[4755]: E1210 15:24:05.757368 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 15:24:05 crc kubenswrapper[4755]: E1210 15:24:05.757447 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5ctz" podUID="17673130-8212-4f8f-8859-92774f0ee202" Dec 10 15:24:05 crc kubenswrapper[4755]: I1210 15:24:05.827746 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:05 crc kubenswrapper[4755]: I1210 15:24:05.827785 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:05 crc kubenswrapper[4755]: I1210 15:24:05.827794 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:05 crc kubenswrapper[4755]: I1210 15:24:05.827811 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:05 crc kubenswrapper[4755]: I1210 15:24:05.827822 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:05Z","lastTransitionTime":"2025-12-10T15:24:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:05 crc kubenswrapper[4755]: I1210 15:24:05.870589 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/17673130-8212-4f8f-8859-92774f0ee202-metrics-certs\") pod \"network-metrics-daemon-q5ctz\" (UID: \"17673130-8212-4f8f-8859-92774f0ee202\") " pod="openshift-multus/network-metrics-daemon-q5ctz" Dec 10 15:24:05 crc kubenswrapper[4755]: E1210 15:24:05.870869 4755 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 10 15:24:05 crc kubenswrapper[4755]: E1210 15:24:05.871234 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/17673130-8212-4f8f-8859-92774f0ee202-metrics-certs podName:17673130-8212-4f8f-8859-92774f0ee202 nodeName:}" failed. No retries permitted until 2025-12-10 15:24:13.871193413 +0000 UTC m=+50.472077095 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/17673130-8212-4f8f-8859-92774f0ee202-metrics-certs") pod "network-metrics-daemon-q5ctz" (UID: "17673130-8212-4f8f-8859-92774f0ee202") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 10 15:24:05 crc kubenswrapper[4755]: I1210 15:24:05.930567 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:05 crc kubenswrapper[4755]: I1210 15:24:05.930634 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:05 crc kubenswrapper[4755]: I1210 15:24:05.930660 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:05 crc kubenswrapper[4755]: I1210 15:24:05.930682 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:05 crc kubenswrapper[4755]: I1210 15:24:05.930696 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:05Z","lastTransitionTime":"2025-12-10T15:24:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:06 crc kubenswrapper[4755]: I1210 15:24:06.032531 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:06 crc kubenswrapper[4755]: I1210 15:24:06.032609 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:06 crc kubenswrapper[4755]: I1210 15:24:06.032628 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:06 crc kubenswrapper[4755]: I1210 15:24:06.032652 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:06 crc kubenswrapper[4755]: I1210 15:24:06.032670 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:06Z","lastTransitionTime":"2025-12-10T15:24:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:06 crc kubenswrapper[4755]: I1210 15:24:06.135254 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:06 crc kubenswrapper[4755]: I1210 15:24:06.135330 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:06 crc kubenswrapper[4755]: I1210 15:24:06.135354 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:06 crc kubenswrapper[4755]: I1210 15:24:06.135382 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:06 crc kubenswrapper[4755]: I1210 15:24:06.135400 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:06Z","lastTransitionTime":"2025-12-10T15:24:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:06 crc kubenswrapper[4755]: I1210 15:24:06.239015 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:06 crc kubenswrapper[4755]: I1210 15:24:06.239051 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:06 crc kubenswrapper[4755]: I1210 15:24:06.239061 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:06 crc kubenswrapper[4755]: I1210 15:24:06.239080 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:06 crc kubenswrapper[4755]: I1210 15:24:06.239097 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:06Z","lastTransitionTime":"2025-12-10T15:24:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:06 crc kubenswrapper[4755]: I1210 15:24:06.341536 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:06 crc kubenswrapper[4755]: I1210 15:24:06.341568 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:06 crc kubenswrapper[4755]: I1210 15:24:06.341581 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:06 crc kubenswrapper[4755]: I1210 15:24:06.341598 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:06 crc kubenswrapper[4755]: I1210 15:24:06.341610 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:06Z","lastTransitionTime":"2025-12-10T15:24:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:06 crc kubenswrapper[4755]: I1210 15:24:06.444837 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:06 crc kubenswrapper[4755]: I1210 15:24:06.444878 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:06 crc kubenswrapper[4755]: I1210 15:24:06.444889 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:06 crc kubenswrapper[4755]: I1210 15:24:06.444907 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:06 crc kubenswrapper[4755]: I1210 15:24:06.444921 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:06Z","lastTransitionTime":"2025-12-10T15:24:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:06 crc kubenswrapper[4755]: I1210 15:24:06.493727 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:06 crc kubenswrapper[4755]: I1210 15:24:06.493787 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:06 crc kubenswrapper[4755]: I1210 15:24:06.493805 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:06 crc kubenswrapper[4755]: I1210 15:24:06.493830 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:06 crc kubenswrapper[4755]: I1210 15:24:06.493848 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:06Z","lastTransitionTime":"2025-12-10T15:24:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:06 crc kubenswrapper[4755]: E1210 15:24:06.517925 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ba232303-88d5-4931-b82e-34d9a0e5c06a\\\",\\\"systemUUID\\\":\\\"ebd59de0-c6b0-47c1-bc17-6f665dcf344d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:06Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:06 crc kubenswrapper[4755]: I1210 15:24:06.522657 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:06 crc kubenswrapper[4755]: I1210 15:24:06.522785 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 10 15:24:06 crc kubenswrapper[4755]: I1210 15:24:06.522805 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:06 crc kubenswrapper[4755]: I1210 15:24:06.522831 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:06 crc kubenswrapper[4755]: I1210 15:24:06.522852 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:06Z","lastTransitionTime":"2025-12-10T15:24:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:06 crc kubenswrapper[4755]: E1210 15:24:06.542440 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ba232303-88d5-4931-b82e-34d9a0e5c06a\\\",\\\"systemUUID\\\":\\\"ebd59de0-c6b0-47c1-bc17-6f665dcf344d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:06Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:06 crc kubenswrapper[4755]: I1210 15:24:06.548569 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:06 crc kubenswrapper[4755]: I1210 15:24:06.548629 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 10 15:24:06 crc kubenswrapper[4755]: I1210 15:24:06.548641 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:06 crc kubenswrapper[4755]: I1210 15:24:06.548660 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:06 crc kubenswrapper[4755]: I1210 15:24:06.548672 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:06Z","lastTransitionTime":"2025-12-10T15:24:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:06 crc kubenswrapper[4755]: E1210 15:24:06.564414 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ba232303-88d5-4931-b82e-34d9a0e5c06a\\\",\\\"systemUUID\\\":\\\"ebd59de0-c6b0-47c1-bc17-6f665dcf344d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:06Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:06 crc kubenswrapper[4755]: I1210 15:24:06.568534 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:06 crc kubenswrapper[4755]: I1210 15:24:06.568583 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 10 15:24:06 crc kubenswrapper[4755]: I1210 15:24:06.568599 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:06 crc kubenswrapper[4755]: I1210 15:24:06.568618 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:06 crc kubenswrapper[4755]: I1210 15:24:06.568633 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:06Z","lastTransitionTime":"2025-12-10T15:24:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:06 crc kubenswrapper[4755]: E1210 15:24:06.583549 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ba232303-88d5-4931-b82e-34d9a0e5c06a\\\",\\\"systemUUID\\\":\\\"ebd59de0-c6b0-47c1-bc17-6f665dcf344d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:06Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:06 crc kubenswrapper[4755]: I1210 15:24:06.586719 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:06 crc kubenswrapper[4755]: I1210 15:24:06.586746 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 10 15:24:06 crc kubenswrapper[4755]: I1210 15:24:06.586754 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:06 crc kubenswrapper[4755]: I1210 15:24:06.586767 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:06 crc kubenswrapper[4755]: I1210 15:24:06.586775 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:06Z","lastTransitionTime":"2025-12-10T15:24:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:06 crc kubenswrapper[4755]: E1210 15:24:06.601090 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ba232303-88d5-4931-b82e-34d9a0e5c06a\\\",\\\"systemUUID\\\":\\\"ebd59de0-c6b0-47c1-bc17-6f665dcf344d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:06Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:06 crc kubenswrapper[4755]: E1210 15:24:06.601242 4755 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 10 15:24:06 crc kubenswrapper[4755]: I1210 15:24:06.602778 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 10 15:24:06 crc kubenswrapper[4755]: I1210 15:24:06.602808 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:06 crc kubenswrapper[4755]: I1210 15:24:06.602817 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:06 crc kubenswrapper[4755]: I1210 15:24:06.602859 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:06 crc kubenswrapper[4755]: I1210 15:24:06.602870 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:06Z","lastTransitionTime":"2025-12-10T15:24:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:06 crc kubenswrapper[4755]: I1210 15:24:06.705153 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:06 crc kubenswrapper[4755]: I1210 15:24:06.705186 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:06 crc kubenswrapper[4755]: I1210 15:24:06.705196 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:06 crc kubenswrapper[4755]: I1210 15:24:06.705235 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:06 crc kubenswrapper[4755]: I1210 15:24:06.705244 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:06Z","lastTransitionTime":"2025-12-10T15:24:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:06 crc kubenswrapper[4755]: I1210 15:24:06.806938 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:06 crc kubenswrapper[4755]: I1210 15:24:06.806983 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:06 crc kubenswrapper[4755]: I1210 15:24:06.806992 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:06 crc kubenswrapper[4755]: I1210 15:24:06.807007 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:06 crc kubenswrapper[4755]: I1210 15:24:06.807018 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:06Z","lastTransitionTime":"2025-12-10T15:24:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:06 crc kubenswrapper[4755]: I1210 15:24:06.909315 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:06 crc kubenswrapper[4755]: I1210 15:24:06.909357 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:06 crc kubenswrapper[4755]: I1210 15:24:06.909367 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:06 crc kubenswrapper[4755]: I1210 15:24:06.909382 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:06 crc kubenswrapper[4755]: I1210 15:24:06.909391 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:06Z","lastTransitionTime":"2025-12-10T15:24:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:07 crc kubenswrapper[4755]: I1210 15:24:07.012031 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:07 crc kubenswrapper[4755]: I1210 15:24:07.012059 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:07 crc kubenswrapper[4755]: I1210 15:24:07.012076 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:07 crc kubenswrapper[4755]: I1210 15:24:07.012094 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:07 crc kubenswrapper[4755]: I1210 15:24:07.012107 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:07Z","lastTransitionTime":"2025-12-10T15:24:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:07 crc kubenswrapper[4755]: I1210 15:24:07.114382 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:07 crc kubenswrapper[4755]: I1210 15:24:07.114428 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:07 crc kubenswrapper[4755]: I1210 15:24:07.114440 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:07 crc kubenswrapper[4755]: I1210 15:24:07.114457 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:07 crc kubenswrapper[4755]: I1210 15:24:07.114486 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:07Z","lastTransitionTime":"2025-12-10T15:24:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:07 crc kubenswrapper[4755]: I1210 15:24:07.217052 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:07 crc kubenswrapper[4755]: I1210 15:24:07.217099 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:07 crc kubenswrapper[4755]: I1210 15:24:07.217111 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:07 crc kubenswrapper[4755]: I1210 15:24:07.217128 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:07 crc kubenswrapper[4755]: I1210 15:24:07.217139 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:07Z","lastTransitionTime":"2025-12-10T15:24:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:07 crc kubenswrapper[4755]: I1210 15:24:07.319556 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:07 crc kubenswrapper[4755]: I1210 15:24:07.319596 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:07 crc kubenswrapper[4755]: I1210 15:24:07.319606 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:07 crc kubenswrapper[4755]: I1210 15:24:07.319621 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:07 crc kubenswrapper[4755]: I1210 15:24:07.319632 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:07Z","lastTransitionTime":"2025-12-10T15:24:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:07 crc kubenswrapper[4755]: I1210 15:24:07.422575 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:07 crc kubenswrapper[4755]: I1210 15:24:07.422619 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:07 crc kubenswrapper[4755]: I1210 15:24:07.422628 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:07 crc kubenswrapper[4755]: I1210 15:24:07.422645 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:07 crc kubenswrapper[4755]: I1210 15:24:07.422656 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:07Z","lastTransitionTime":"2025-12-10T15:24:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:07 crc kubenswrapper[4755]: I1210 15:24:07.525379 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:07 crc kubenswrapper[4755]: I1210 15:24:07.525417 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:07 crc kubenswrapper[4755]: I1210 15:24:07.525427 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:07 crc kubenswrapper[4755]: I1210 15:24:07.525444 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:07 crc kubenswrapper[4755]: I1210 15:24:07.525454 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:07Z","lastTransitionTime":"2025-12-10T15:24:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:07 crc kubenswrapper[4755]: I1210 15:24:07.627988 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:07 crc kubenswrapper[4755]: I1210 15:24:07.628024 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:07 crc kubenswrapper[4755]: I1210 15:24:07.628039 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:07 crc kubenswrapper[4755]: I1210 15:24:07.628058 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:07 crc kubenswrapper[4755]: I1210 15:24:07.628070 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:07Z","lastTransitionTime":"2025-12-10T15:24:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:07 crc kubenswrapper[4755]: I1210 15:24:07.730672 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:07 crc kubenswrapper[4755]: I1210 15:24:07.730713 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:07 crc kubenswrapper[4755]: I1210 15:24:07.730728 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:07 crc kubenswrapper[4755]: I1210 15:24:07.730743 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:07 crc kubenswrapper[4755]: I1210 15:24:07.730754 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:07Z","lastTransitionTime":"2025-12-10T15:24:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:07 crc kubenswrapper[4755]: I1210 15:24:07.756662 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5ctz" Dec 10 15:24:07 crc kubenswrapper[4755]: I1210 15:24:07.756745 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 15:24:07 crc kubenswrapper[4755]: I1210 15:24:07.756742 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 15:24:07 crc kubenswrapper[4755]: I1210 15:24:07.756840 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 15:24:07 crc kubenswrapper[4755]: E1210 15:24:07.756833 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5ctz" podUID="17673130-8212-4f8f-8859-92774f0ee202" Dec 10 15:24:07 crc kubenswrapper[4755]: E1210 15:24:07.756927 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 15:24:07 crc kubenswrapper[4755]: E1210 15:24:07.756982 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 15:24:07 crc kubenswrapper[4755]: E1210 15:24:07.757041 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 15:24:07 crc kubenswrapper[4755]: I1210 15:24:07.832716 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:07 crc kubenswrapper[4755]: I1210 15:24:07.832774 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:07 crc kubenswrapper[4755]: I1210 15:24:07.832785 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:07 crc kubenswrapper[4755]: I1210 15:24:07.832799 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:07 crc kubenswrapper[4755]: I1210 15:24:07.832810 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:07Z","lastTransitionTime":"2025-12-10T15:24:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:07 crc kubenswrapper[4755]: I1210 15:24:07.935084 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:07 crc kubenswrapper[4755]: I1210 15:24:07.935126 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:07 crc kubenswrapper[4755]: I1210 15:24:07.935135 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:07 crc kubenswrapper[4755]: I1210 15:24:07.935150 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:07 crc kubenswrapper[4755]: I1210 15:24:07.935159 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:07Z","lastTransitionTime":"2025-12-10T15:24:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:08 crc kubenswrapper[4755]: I1210 15:24:08.037855 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:08 crc kubenswrapper[4755]: I1210 15:24:08.037896 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:08 crc kubenswrapper[4755]: I1210 15:24:08.037904 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:08 crc kubenswrapper[4755]: I1210 15:24:08.037919 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:08 crc kubenswrapper[4755]: I1210 15:24:08.037929 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:08Z","lastTransitionTime":"2025-12-10T15:24:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:08 crc kubenswrapper[4755]: I1210 15:24:08.140039 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:08 crc kubenswrapper[4755]: I1210 15:24:08.140097 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:08 crc kubenswrapper[4755]: I1210 15:24:08.140109 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:08 crc kubenswrapper[4755]: I1210 15:24:08.140130 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:08 crc kubenswrapper[4755]: I1210 15:24:08.140141 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:08Z","lastTransitionTime":"2025-12-10T15:24:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:08 crc kubenswrapper[4755]: I1210 15:24:08.242028 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:08 crc kubenswrapper[4755]: I1210 15:24:08.242088 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:08 crc kubenswrapper[4755]: I1210 15:24:08.242100 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:08 crc kubenswrapper[4755]: I1210 15:24:08.242115 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:08 crc kubenswrapper[4755]: I1210 15:24:08.242126 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:08Z","lastTransitionTime":"2025-12-10T15:24:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:08 crc kubenswrapper[4755]: I1210 15:24:08.345296 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:08 crc kubenswrapper[4755]: I1210 15:24:08.345352 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:08 crc kubenswrapper[4755]: I1210 15:24:08.345362 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:08 crc kubenswrapper[4755]: I1210 15:24:08.345380 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:08 crc kubenswrapper[4755]: I1210 15:24:08.345391 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:08Z","lastTransitionTime":"2025-12-10T15:24:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:08 crc kubenswrapper[4755]: I1210 15:24:08.448155 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:08 crc kubenswrapper[4755]: I1210 15:24:08.448200 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:08 crc kubenswrapper[4755]: I1210 15:24:08.448215 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:08 crc kubenswrapper[4755]: I1210 15:24:08.448237 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:08 crc kubenswrapper[4755]: I1210 15:24:08.448257 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:08Z","lastTransitionTime":"2025-12-10T15:24:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:08 crc kubenswrapper[4755]: I1210 15:24:08.551002 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:08 crc kubenswrapper[4755]: I1210 15:24:08.551038 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:08 crc kubenswrapper[4755]: I1210 15:24:08.551053 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:08 crc kubenswrapper[4755]: I1210 15:24:08.551075 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:08 crc kubenswrapper[4755]: I1210 15:24:08.551090 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:08Z","lastTransitionTime":"2025-12-10T15:24:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:08 crc kubenswrapper[4755]: I1210 15:24:08.653503 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:08 crc kubenswrapper[4755]: I1210 15:24:08.653539 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:08 crc kubenswrapper[4755]: I1210 15:24:08.653550 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:08 crc kubenswrapper[4755]: I1210 15:24:08.653569 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:08 crc kubenswrapper[4755]: I1210 15:24:08.653585 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:08Z","lastTransitionTime":"2025-12-10T15:24:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:08 crc kubenswrapper[4755]: I1210 15:24:08.756034 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:08 crc kubenswrapper[4755]: I1210 15:24:08.756073 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:08 crc kubenswrapper[4755]: I1210 15:24:08.756086 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:08 crc kubenswrapper[4755]: I1210 15:24:08.756103 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:08 crc kubenswrapper[4755]: I1210 15:24:08.756113 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:08Z","lastTransitionTime":"2025-12-10T15:24:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:08 crc kubenswrapper[4755]: I1210 15:24:08.858780 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:08 crc kubenswrapper[4755]: I1210 15:24:08.858814 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:08 crc kubenswrapper[4755]: I1210 15:24:08.858823 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:08 crc kubenswrapper[4755]: I1210 15:24:08.858839 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:08 crc kubenswrapper[4755]: I1210 15:24:08.858848 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:08Z","lastTransitionTime":"2025-12-10T15:24:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:08 crc kubenswrapper[4755]: I1210 15:24:08.961442 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:08 crc kubenswrapper[4755]: I1210 15:24:08.961538 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:08 crc kubenswrapper[4755]: I1210 15:24:08.961555 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:08 crc kubenswrapper[4755]: I1210 15:24:08.961579 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:08 crc kubenswrapper[4755]: I1210 15:24:08.961624 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:08Z","lastTransitionTime":"2025-12-10T15:24:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:09 crc kubenswrapper[4755]: I1210 15:24:09.063551 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:09 crc kubenswrapper[4755]: I1210 15:24:09.063597 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:09 crc kubenswrapper[4755]: I1210 15:24:09.063608 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:09 crc kubenswrapper[4755]: I1210 15:24:09.063624 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:09 crc kubenswrapper[4755]: I1210 15:24:09.063643 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:09Z","lastTransitionTime":"2025-12-10T15:24:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:09 crc kubenswrapper[4755]: I1210 15:24:09.166654 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:09 crc kubenswrapper[4755]: I1210 15:24:09.166705 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:09 crc kubenswrapper[4755]: I1210 15:24:09.166718 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:09 crc kubenswrapper[4755]: I1210 15:24:09.166738 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:09 crc kubenswrapper[4755]: I1210 15:24:09.166754 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:09Z","lastTransitionTime":"2025-12-10T15:24:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:09 crc kubenswrapper[4755]: I1210 15:24:09.268883 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:09 crc kubenswrapper[4755]: I1210 15:24:09.268924 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:09 crc kubenswrapper[4755]: I1210 15:24:09.268932 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:09 crc kubenswrapper[4755]: I1210 15:24:09.268947 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:09 crc kubenswrapper[4755]: I1210 15:24:09.268957 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:09Z","lastTransitionTime":"2025-12-10T15:24:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:09 crc kubenswrapper[4755]: I1210 15:24:09.370888 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:09 crc kubenswrapper[4755]: I1210 15:24:09.370928 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:09 crc kubenswrapper[4755]: I1210 15:24:09.370937 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:09 crc kubenswrapper[4755]: I1210 15:24:09.370954 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:09 crc kubenswrapper[4755]: I1210 15:24:09.370964 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:09Z","lastTransitionTime":"2025-12-10T15:24:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:09 crc kubenswrapper[4755]: I1210 15:24:09.473412 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:09 crc kubenswrapper[4755]: I1210 15:24:09.473445 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:09 crc kubenswrapper[4755]: I1210 15:24:09.473453 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:09 crc kubenswrapper[4755]: I1210 15:24:09.473469 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:09 crc kubenswrapper[4755]: I1210 15:24:09.473503 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:09Z","lastTransitionTime":"2025-12-10T15:24:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:09 crc kubenswrapper[4755]: I1210 15:24:09.576302 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:09 crc kubenswrapper[4755]: I1210 15:24:09.576343 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:09 crc kubenswrapper[4755]: I1210 15:24:09.576354 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:09 crc kubenswrapper[4755]: I1210 15:24:09.576370 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:09 crc kubenswrapper[4755]: I1210 15:24:09.576382 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:09Z","lastTransitionTime":"2025-12-10T15:24:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:09 crc kubenswrapper[4755]: I1210 15:24:09.678701 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:09 crc kubenswrapper[4755]: I1210 15:24:09.678741 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:09 crc kubenswrapper[4755]: I1210 15:24:09.678750 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:09 crc kubenswrapper[4755]: I1210 15:24:09.678768 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:09 crc kubenswrapper[4755]: I1210 15:24:09.678778 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:09Z","lastTransitionTime":"2025-12-10T15:24:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:09 crc kubenswrapper[4755]: I1210 15:24:09.756581 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 15:24:09 crc kubenswrapper[4755]: I1210 15:24:09.756726 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 15:24:09 crc kubenswrapper[4755]: E1210 15:24:09.756722 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 15:24:09 crc kubenswrapper[4755]: I1210 15:24:09.756770 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 15:24:09 crc kubenswrapper[4755]: I1210 15:24:09.756848 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5ctz" Dec 10 15:24:09 crc kubenswrapper[4755]: E1210 15:24:09.756924 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 15:24:09 crc kubenswrapper[4755]: E1210 15:24:09.757095 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5ctz" podUID="17673130-8212-4f8f-8859-92774f0ee202" Dec 10 15:24:09 crc kubenswrapper[4755]: E1210 15:24:09.757166 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 15:24:09 crc kubenswrapper[4755]: I1210 15:24:09.781279 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:09 crc kubenswrapper[4755]: I1210 15:24:09.781318 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:09 crc kubenswrapper[4755]: I1210 15:24:09.781327 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:09 crc kubenswrapper[4755]: I1210 15:24:09.781342 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:09 crc kubenswrapper[4755]: I1210 15:24:09.781351 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:09Z","lastTransitionTime":"2025-12-10T15:24:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:09 crc kubenswrapper[4755]: I1210 15:24:09.884774 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:09 crc kubenswrapper[4755]: I1210 15:24:09.884829 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:09 crc kubenswrapper[4755]: I1210 15:24:09.884838 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:09 crc kubenswrapper[4755]: I1210 15:24:09.884860 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:09 crc kubenswrapper[4755]: I1210 15:24:09.884872 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:09Z","lastTransitionTime":"2025-12-10T15:24:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:09 crc kubenswrapper[4755]: I1210 15:24:09.987460 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:09 crc kubenswrapper[4755]: I1210 15:24:09.987528 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:09 crc kubenswrapper[4755]: I1210 15:24:09.987542 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:09 crc kubenswrapper[4755]: I1210 15:24:09.987560 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:09 crc kubenswrapper[4755]: I1210 15:24:09.987573 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:09Z","lastTransitionTime":"2025-12-10T15:24:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:10 crc kubenswrapper[4755]: I1210 15:24:10.090139 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:10 crc kubenswrapper[4755]: I1210 15:24:10.090186 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:10 crc kubenswrapper[4755]: I1210 15:24:10.090198 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:10 crc kubenswrapper[4755]: I1210 15:24:10.090217 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:10 crc kubenswrapper[4755]: I1210 15:24:10.090228 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:10Z","lastTransitionTime":"2025-12-10T15:24:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:10 crc kubenswrapper[4755]: I1210 15:24:10.192155 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:10 crc kubenswrapper[4755]: I1210 15:24:10.192207 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:10 crc kubenswrapper[4755]: I1210 15:24:10.192219 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:10 crc kubenswrapper[4755]: I1210 15:24:10.192238 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:10 crc kubenswrapper[4755]: I1210 15:24:10.192250 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:10Z","lastTransitionTime":"2025-12-10T15:24:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:10 crc kubenswrapper[4755]: I1210 15:24:10.295172 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:10 crc kubenswrapper[4755]: I1210 15:24:10.295209 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:10 crc kubenswrapper[4755]: I1210 15:24:10.295220 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:10 crc kubenswrapper[4755]: I1210 15:24:10.295241 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:10 crc kubenswrapper[4755]: I1210 15:24:10.295253 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:10Z","lastTransitionTime":"2025-12-10T15:24:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:10 crc kubenswrapper[4755]: I1210 15:24:10.406231 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:10 crc kubenswrapper[4755]: I1210 15:24:10.406597 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:10 crc kubenswrapper[4755]: I1210 15:24:10.406617 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:10 crc kubenswrapper[4755]: I1210 15:24:10.407154 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:10 crc kubenswrapper[4755]: I1210 15:24:10.407295 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:10Z","lastTransitionTime":"2025-12-10T15:24:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:10 crc kubenswrapper[4755]: I1210 15:24:10.510552 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:10 crc kubenswrapper[4755]: I1210 15:24:10.510622 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:10 crc kubenswrapper[4755]: I1210 15:24:10.510642 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:10 crc kubenswrapper[4755]: I1210 15:24:10.510666 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:10 crc kubenswrapper[4755]: I1210 15:24:10.510683 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:10Z","lastTransitionTime":"2025-12-10T15:24:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:10 crc kubenswrapper[4755]: I1210 15:24:10.612503 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:10 crc kubenswrapper[4755]: I1210 15:24:10.612552 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:10 crc kubenswrapper[4755]: I1210 15:24:10.612564 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:10 crc kubenswrapper[4755]: I1210 15:24:10.612581 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:10 crc kubenswrapper[4755]: I1210 15:24:10.612592 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:10Z","lastTransitionTime":"2025-12-10T15:24:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:10 crc kubenswrapper[4755]: I1210 15:24:10.715226 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:10 crc kubenswrapper[4755]: I1210 15:24:10.715273 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:10 crc kubenswrapper[4755]: I1210 15:24:10.715284 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:10 crc kubenswrapper[4755]: I1210 15:24:10.715302 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:10 crc kubenswrapper[4755]: I1210 15:24:10.715314 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:10Z","lastTransitionTime":"2025-12-10T15:24:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:10 crc kubenswrapper[4755]: I1210 15:24:10.817882 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:10 crc kubenswrapper[4755]: I1210 15:24:10.817916 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:10 crc kubenswrapper[4755]: I1210 15:24:10.817927 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:10 crc kubenswrapper[4755]: I1210 15:24:10.817940 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:10 crc kubenswrapper[4755]: I1210 15:24:10.817954 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:10Z","lastTransitionTime":"2025-12-10T15:24:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:10 crc kubenswrapper[4755]: I1210 15:24:10.920673 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:10 crc kubenswrapper[4755]: I1210 15:24:10.920752 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:10 crc kubenswrapper[4755]: I1210 15:24:10.920764 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:10 crc kubenswrapper[4755]: I1210 15:24:10.920783 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:10 crc kubenswrapper[4755]: I1210 15:24:10.920797 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:10Z","lastTransitionTime":"2025-12-10T15:24:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:11 crc kubenswrapper[4755]: I1210 15:24:11.022602 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:11 crc kubenswrapper[4755]: I1210 15:24:11.022672 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:11 crc kubenswrapper[4755]: I1210 15:24:11.022688 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:11 crc kubenswrapper[4755]: I1210 15:24:11.022711 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:11 crc kubenswrapper[4755]: I1210 15:24:11.022728 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:11Z","lastTransitionTime":"2025-12-10T15:24:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:11 crc kubenswrapper[4755]: I1210 15:24:11.124197 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:11 crc kubenswrapper[4755]: I1210 15:24:11.124246 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:11 crc kubenswrapper[4755]: I1210 15:24:11.124343 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:11 crc kubenswrapper[4755]: I1210 15:24:11.124363 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:11 crc kubenswrapper[4755]: I1210 15:24:11.124378 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:11Z","lastTransitionTime":"2025-12-10T15:24:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:11 crc kubenswrapper[4755]: I1210 15:24:11.227226 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:11 crc kubenswrapper[4755]: I1210 15:24:11.227279 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:11 crc kubenswrapper[4755]: I1210 15:24:11.227293 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:11 crc kubenswrapper[4755]: I1210 15:24:11.227310 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:11 crc kubenswrapper[4755]: I1210 15:24:11.227344 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:11Z","lastTransitionTime":"2025-12-10T15:24:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:11 crc kubenswrapper[4755]: I1210 15:24:11.329531 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:11 crc kubenswrapper[4755]: I1210 15:24:11.329574 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:11 crc kubenswrapper[4755]: I1210 15:24:11.329584 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:11 crc kubenswrapper[4755]: I1210 15:24:11.329599 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:11 crc kubenswrapper[4755]: I1210 15:24:11.329609 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:11Z","lastTransitionTime":"2025-12-10T15:24:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:11 crc kubenswrapper[4755]: I1210 15:24:11.432511 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:11 crc kubenswrapper[4755]: I1210 15:24:11.432567 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:11 crc kubenswrapper[4755]: I1210 15:24:11.432578 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:11 crc kubenswrapper[4755]: I1210 15:24:11.432593 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:11 crc kubenswrapper[4755]: I1210 15:24:11.432603 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:11Z","lastTransitionTime":"2025-12-10T15:24:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:11 crc kubenswrapper[4755]: I1210 15:24:11.534782 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:11 crc kubenswrapper[4755]: I1210 15:24:11.534813 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:11 crc kubenswrapper[4755]: I1210 15:24:11.534822 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:11 crc kubenswrapper[4755]: I1210 15:24:11.534866 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:11 crc kubenswrapper[4755]: I1210 15:24:11.534877 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:11Z","lastTransitionTime":"2025-12-10T15:24:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:11 crc kubenswrapper[4755]: I1210 15:24:11.637693 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:11 crc kubenswrapper[4755]: I1210 15:24:11.637748 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:11 crc kubenswrapper[4755]: I1210 15:24:11.637761 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:11 crc kubenswrapper[4755]: I1210 15:24:11.637776 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:11 crc kubenswrapper[4755]: I1210 15:24:11.637786 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:11Z","lastTransitionTime":"2025-12-10T15:24:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:11 crc kubenswrapper[4755]: I1210 15:24:11.739878 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:11 crc kubenswrapper[4755]: I1210 15:24:11.739933 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:11 crc kubenswrapper[4755]: I1210 15:24:11.739963 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:11 crc kubenswrapper[4755]: I1210 15:24:11.739983 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:11 crc kubenswrapper[4755]: I1210 15:24:11.739993 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:11Z","lastTransitionTime":"2025-12-10T15:24:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:11 crc kubenswrapper[4755]: I1210 15:24:11.756622 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 15:24:11 crc kubenswrapper[4755]: I1210 15:24:11.756693 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 15:24:11 crc kubenswrapper[4755]: E1210 15:24:11.756824 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 15:24:11 crc kubenswrapper[4755]: I1210 15:24:11.756855 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 15:24:11 crc kubenswrapper[4755]: I1210 15:24:11.756881 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5ctz" Dec 10 15:24:11 crc kubenswrapper[4755]: E1210 15:24:11.756980 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 15:24:11 crc kubenswrapper[4755]: E1210 15:24:11.757094 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-q5ctz" podUID="17673130-8212-4f8f-8859-92774f0ee202" Dec 10 15:24:11 crc kubenswrapper[4755]: E1210 15:24:11.757179 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 15:24:11 crc kubenswrapper[4755]: I1210 15:24:11.757833 4755 scope.go:117] "RemoveContainer" containerID="194ac31f80a7b5a0fde0c62cb794234cb84e5b12e4f3db214ec3d16b4250797f" Dec 10 15:24:11 crc kubenswrapper[4755]: I1210 15:24:11.774778 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zl2tx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"796da6d5-6ccd-4786-a03e-9a8e47a55031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de63a123c46563bd8cd07e669d192bc8b019a889b9bdb7af1b988872c8f1fc48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/
multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzg4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zl2tx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:11Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:11 crc kubenswrapper[4755]: I1210 15:24:11.787348 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b132a8b9-1c99-414d-8773-229bf36b305d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5a5a59e9f156fb791ec822c2d5efe3fc6ec0e84bfcb2b6f5da81396951984c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2mdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40c4bdaa23a60a665b8f565720d79b68cac62d40246be94fc6cd314b1bb3656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9f
be1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2mdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ggt8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:11Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:11 crc kubenswrapper[4755]: I1210 15:24:11.799310 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d089a8bddfa1f80d29011c7a6bab0a300f7dd44bdb2864f86951ebbb9ebea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-10T15:24:11Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:11 crc kubenswrapper[4755]: I1210 15:24:11.811832 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01ddc4056319db1f69268dcae192c3cb9db6c25284305803ae7588e59f77c346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae6bf174cdc9bde18d7c959e976454e73c1e67642f0158d365b79582f63f3cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:11Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:11 crc kubenswrapper[4755]: I1210 15:24:11.824251 4755 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:11Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:11 crc kubenswrapper[4755]: I1210 15:24:11.835315 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aed6bb9-78c2-410b-9c58-b60ab22a7bd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70d014e7227746c46b30f8f5a1f307a422d2fa0d4b98d98bfe5ba6217223489e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://104d730ea666ffd2d639ba7fc8d1ac5b444586788a62656fd260cb249f82b1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09dbec85547c7170bb9551e5657876814d48528e3047daf3547711a563d2b6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8bf1de0d8bdc0a20bc42ba5097d849ae0f507e5fbc18fb17b4ff3650e46ff0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:11Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:11 crc kubenswrapper[4755]: I1210 15:24:11.849416 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n8c6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b56ae78-835a-45da-bc46-5adff2bdf9fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8957db8ccef7b3c449920471d345aa81ce9a7ab9be36b2350ec428aebec7ac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b9a1485f6d0c0e53fd3505623b2788476bc00abea2f11650386
eac40e449bcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9a1485f6d0c0e53fd3505623b2788476bc00abea2f11650386eac40e449bcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a532015d196c588ad3dd6c43d9fa3772ba2a42bd1b2231f830b60adb0addab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a532015d196c588ad3dd6c43d9fa3772ba2a42bd1b2231f830b60adb0addab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://417e326f541e04df27d540c14ec572bb8f1d749a9e3dc88f483241d69f7d0679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://417e326f541e04df27d540c14ec572bb8f1d749a9e3dc88f483241d69f7d0679\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42867f7a1f6aceaf708ad650b7634bdafb3f850872c540d424916440014e7941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42867f7a1f6aceaf708ad650b7634bdafb3f850872c540d424916440014e7941\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f2d22bfc26958a3aad99de39f5c831d36c1ba4f563851baafc73735e80d5c91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f2d22bfc26958a3aad99de39f5c831d36c1ba4f563851baafc73735e80d5c91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://548d326fc5c14c16176d4fff3446d93c17322c11d2a34e9e42376b665de3848d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://548d326fc5c14c16176d4fff3446d93c17322c11d2a34e9e42376b665de3848d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n8c6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:11Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:11 crc kubenswrapper[4755]: I1210 15:24:11.851735 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:11 crc kubenswrapper[4755]: I1210 15:24:11.851770 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:11 crc kubenswrapper[4755]: I1210 15:24:11.851781 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:11 crc kubenswrapper[4755]: I1210 15:24:11.851797 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:11 crc kubenswrapper[4755]: I1210 15:24:11.851807 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:11Z","lastTransitionTime":"2025-12-10T15:24:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:11 crc kubenswrapper[4755]: I1210 15:24:11.858615 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qnmst" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1693fcf1-bef4-4f82-8dd8-f1797b03f5e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8fd6a2ab2b9557574951c0ebbccd663fe576262b0de2c3c655427c977f62d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hzdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qnmst\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:11Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:11 crc kubenswrapper[4755]: I1210 15:24:11.883248 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-82wnw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0a2f42c-a60e-4350-9ebb-c28d3cbfdad7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7af643111a7a5d1d78b0412b1621b5ffac6389760ea9190e26e9e5d1704eed4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xbzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://915c918562f8a69a28cbcf29e427bfdd94477193b98d5a2da9ad45cb4b44fa03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xbzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-82wnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:11Z is after 2025-08-24T17:21:41Z" Dec 10 
15:24:11 crc kubenswrapper[4755]: I1210 15:24:11.894087 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q5ctz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17673130-8212-4f8f-8859-92774f0ee202\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcscn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcscn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q5ctz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:11Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:11 crc kubenswrapper[4755]: I1210 15:24:11.911631 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fa03060-a6e0-4aad-9aa1-43b1a0d00c85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a823333ed5cb5de3988d25e50e4b7a0f9071c76fb39c22760a4f2acd5eb455d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d828f3f3b90a2dcb1c1908e6a686368af5b0d715b3251e4b8fcf3c8818ec75a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://172ab3fce08c8ba4095ff4095c89364a778644752bd7bb6c178d6e3ebcface69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfbb8e350ad18b78a6bcf6cfa4eb8f2fad90f97
0dba08a8b1b2026af6f255e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4b2bf0c88a16fd6b4b45a20730a92292895dc9f29ce756d347c302b25a8612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55da309288bc2c7ff1f22da0569d18155796700c200fe3ed508f6f8179756865\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55da309288bc2c7ff1f22da0569d18155796700c200fe3ed508f6f8179756865\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219c531368bb5379f3726d94ab0afc0f33eedbf91b0a4dbbe0757c6e232f79ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://219c531368bb5379f3726d94ab0afc0f33eedbf91b0a4dbbe0757c6e232f79ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f9da8f3d4f85ec83c16b71ecd983ff4f48049eb8e73461a2b46c4f66d71ed06e\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9da8f3d4f85ec83c16b71ecd983ff4f48049eb8e73461a2b46c4f66d71ed06e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:11Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:11 crc kubenswrapper[4755]: I1210 15:24:11.923222 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:11Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:11 crc kubenswrapper[4755]: I1210 15:24:11.933513 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c371b199c3643a68ea5eba935eb76a1b7e8a4027c9292a1324116ccfc14742ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:11Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:11 crc kubenswrapper[4755]: I1210 15:24:11.943393 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wv8fh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d150a22e-c59a-4376-a5c8-db4085ea0be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58d106c5d1b9525ec821d009ca556449cd8d7f0e1b9c8ec7dd969df996e73625\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfw9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wv8fh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:11Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:11 crc kubenswrapper[4755]: I1210 15:24:11.955088 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:11 crc kubenswrapper[4755]: I1210 15:24:11.955138 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:11 crc kubenswrapper[4755]: I1210 15:24:11.955151 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:11 crc kubenswrapper[4755]: I1210 15:24:11.955168 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:11 crc kubenswrapper[4755]: I1210 15:24:11.955180 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:11Z","lastTransitionTime":"2025-12-10T15:24:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:11 crc kubenswrapper[4755]: I1210 15:24:11.958578 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e6db97-7e22-4fec-924e-20e90f463887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef3871ddc16d2fc1b89a6f120390cf066424bd9ed4cca98f8dea4430c40c6c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08895c3ab20248c4bd70ccfae6442bdf8e76a602e605b043f0f00166624ada17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b2467645116856afa1e1e37501c7b453b03da193a96d1075886b85839a86ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name
\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16e58653dca08c9b33671279553975bca9507ffeeefea161119cec624813f167\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea18ef6c319336321d97c2a55449903b837b5a47da785da3ff5308244751943\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:11Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:11 crc kubenswrapper[4755]: I1210 15:24:11.969028 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:11Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:11 crc kubenswrapper[4755]: I1210 15:24:11.989507 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b1da51a-99c9-4f8e-920d-ce0973af6370\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6eb065dc6c0cc8914cb95553eb2683d894fb9a4e78ce7fac73bcce8d7f6cced9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e547993b9f2fa37bf924f909c47b62eb0cc02b659596b1cad9bbc42fdde8f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ba47683cc23d5b531a45f0658b6a9378650400b35b5372642b0430a5ac503f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a75407e83508af9adebb09c6466a966dd791d29f690c539656f9bd3396d7031\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://602b4e49987fa2cc6b54b822110aececbdddaf2bce8f27cce4ed906768d45791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://335bcab3a79f09796e97560365e1211fb30ddf288f4773c05ab353197add4365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://194ac31f80a7b5a0fde0c62cb794234cb84e5b12
e4f3db214ec3d16b4250797f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://194ac31f80a7b5a0fde0c62cb794234cb84e5b12e4f3db214ec3d16b4250797f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T15:23:59Z\\\",\\\"message\\\":\\\"ss event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1210 15:23:58.955006 6153 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1210 15:23:58.954962 6153 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI1210 15:23:58.955022 6153 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI1210 15:23:58.955028 6153 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI1210 15:23:58.955034 6153 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nF1210 15:23:58.955079 6153 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"ht\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6lfvk_openshift-ovn-kubernetes(4b1da51a-99c9-4f8e-920d-ce0973af6370)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59bf59d1b7fbc365a916fbbceca7ae30b7ebc754b34f2f7a34c2e21e1e1d2166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://375efb27cf6bef06a6a0ffc8f6b2963e1b9ffbca18b3c8e7b402bc552a411f6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://375efb27cf6bef06a6a0ffc8f6b2963e1b9ffbca18b3c8e7b402bc552a411f6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6lfvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:11Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:12 crc kubenswrapper[4755]: I1210 15:24:12.057963 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:12 crc kubenswrapper[4755]: I1210 15:24:12.058015 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:12 crc kubenswrapper[4755]: I1210 15:24:12.058027 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:12 crc kubenswrapper[4755]: I1210 15:24:12.058045 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:12 crc kubenswrapper[4755]: I1210 15:24:12.058057 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:12Z","lastTransitionTime":"2025-12-10T15:24:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:12 crc kubenswrapper[4755]: I1210 15:24:12.122855 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6lfvk_4b1da51a-99c9-4f8e-920d-ce0973af6370/ovnkube-controller/1.log" Dec 10 15:24:12 crc kubenswrapper[4755]: I1210 15:24:12.125227 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" event={"ID":"4b1da51a-99c9-4f8e-920d-ce0973af6370","Type":"ContainerStarted","Data":"807b21c925067bde1df3babdc35da6c8805e10be110c0282376dbd6e8be69dcb"} Dec 10 15:24:12 crc kubenswrapper[4755]: I1210 15:24:12.125708 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" Dec 10 15:24:12 crc kubenswrapper[4755]: I1210 15:24:12.149540 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d089a8bddfa1f80d29011c7a6bab0a300f7dd44bdb2864f86951ebbb9ebea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:12Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:12 crc kubenswrapper[4755]: I1210 15:24:12.160089 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:12 crc kubenswrapper[4755]: I1210 15:24:12.160132 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:12 crc kubenswrapper[4755]: I1210 15:24:12.160141 4755 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:12 crc kubenswrapper[4755]: I1210 15:24:12.160155 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:12 crc kubenswrapper[4755]: I1210 15:24:12.160164 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:12Z","lastTransitionTime":"2025-12-10T15:24:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:12 crc kubenswrapper[4755]: I1210 15:24:12.167747 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01ddc4056319db1f69268dcae192c3cb9db6c25284305803ae7588e59f77c346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae6bf174cdc9bde18d7c959e976454e73c1e67642f0158d365b79582f63f3cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:12Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:12 crc kubenswrapper[4755]: I1210 15:24:12.188018 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:12Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:12 crc kubenswrapper[4755]: I1210 15:24:12.200247 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zl2tx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"796da6d5-6ccd-4786-a03e-9a8e47a55031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de63a123c46563bd8cd07e669d192bc8b019a889b9bdb7af1b988872c8f1fc48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzg4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zl2tx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:12Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:12 crc kubenswrapper[4755]: I1210 15:24:12.209672 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b132a8b9-1c99-414d-8773-229bf36b305d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5a5a59e9f156fb791ec822c2d5efe3fc6ec0e84bfcb2b6f5da81396951984c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2mdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-
o://a40c4bdaa23a60a665b8f565720d79b68cac62d40246be94fc6cd314b1bb3656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2mdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ggt8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:12Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:12 crc kubenswrapper[4755]: I1210 15:24:12.219313 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aed6bb9-78c2-410b-9c58-b60ab22a7bd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70d014e7227746c46b30f8f5a1f307a422d2fa0d4b98d98bfe5ba6217223489e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://104d730ea666ffd2d639ba7fc8d1ac5b444586788a62656fd260cb249f82b1ae\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09dbec85547c7170bb9551e5657876814d48528e3047daf3547711a563d2b6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8bf1de0d8bdc0a20bc42ba5097d849ae0f507e5fbc18fb17b4ff3650e46ff0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:12Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:12 crc kubenswrapper[4755]: I1210 15:24:12.231345 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n8c6s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b56ae78-835a-45da-bc46-5adff2bdf9fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8957db8ccef7b3c449920471d345aa81ce9a7ab9be36b2350ec428aebec7ac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b9a1485f6d0c0e53fd3505623b2788476bc00abea2f11650386eac40e449bcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9a1485f6d0c0e53fd3505623b2788476bc00abea2f11650386eac40e449bcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a532015d196c588ad3dd6c43d9fa3772ba2a42bd1b2231f830b60adb0addab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a532015d196c588ad3dd6c43d9fa3772ba2a42bd1b2231f830b60adb0addab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://417e326f541e04df27d540c14ec572bb8f1d749a9e3dc88f483241d69f7d0679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://417e326f541e04df27d540c14ec572bb8f1d749a9e3dc88f483241d69f7d0679\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42867f7a1f6aceaf708ad650b7634bdafb3f850872c540d424916440014e7941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42867f7a1f6aceaf708ad650b7634bdafb3f850872c540d424916440014e7941\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f2d22bfc26958a3aad99de39f5c831d36c1ba4f563851baafc73735e80d5c91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f2d22bfc26958a3aad99de39f5c831d36c1ba4f563851baafc73735e80d5c91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://548d326fc5c14c16176d4fff3446d93c17322c11d2a34e9e42376b665de3848d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://548d326fc5c14c16176d4fff3446d93c17322c11d2a34e9e42376b665de3848d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n8c6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:12Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:12 crc kubenswrapper[4755]: I1210 15:24:12.241001 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qnmst" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1693fcf1-bef4-4f82-8dd8-f1797b03f5e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8fd6a2ab2b9557574951c0ebbccd663fe576262b0de2c3c655427c977f62d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hzdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qnmst\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:12Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:12 crc kubenswrapper[4755]: I1210 15:24:12.251515 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-82wnw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0a2f42c-a60e-4350-9ebb-c28d3cbfdad7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7af643111a7a5d1d78b0412b1621b5ffac6389760ea9190e26e9e5d1704eed4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xbzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://915c918562f8a69a28cbcf29e427bfdd94477193b98d5a2da9ad45cb4b44fa03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xbzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-82wnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:12Z is after 2025-08-24T17:21:41Z" Dec 10 
15:24:12 crc kubenswrapper[4755]: I1210 15:24:12.262030 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:12 crc kubenswrapper[4755]: I1210 15:24:12.262064 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:12 crc kubenswrapper[4755]: I1210 15:24:12.262072 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:12 crc kubenswrapper[4755]: I1210 15:24:12.262086 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:12 crc kubenswrapper[4755]: I1210 15:24:12.262095 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:12Z","lastTransitionTime":"2025-12-10T15:24:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:12 crc kubenswrapper[4755]: I1210 15:24:12.268069 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fa03060-a6e0-4aad-9aa1-43b1a0d00c85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a823333ed5cb5de3988d25e50e4b7a0f9071c76fb39c22760a4f2acd5eb455d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d828f3f3b90a2dcb1c1908e6a686368af5b0d715b3251e4b8fcf3c8818ec75a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9009
2272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://172ab3fce08c8ba4095ff4095c89364a778644752bd7bb6c178d6e3ebcface69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfbb8e350ad18b78a6bcf6cfa4eb8f2fad90f970dba08a8b1b2026af6f255e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4b2bf0c88a16fd6b4b45a20730a92292895dc9f29ce756d347c302b25a8612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55da309288bc2c7ff1f22da0569d18155796700c200fe3ed508f6f8179756865\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55da309288bc2c7ff1f22da0569d18155796700c200fe3ed508f6f8179756865\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219c531368bb5379f3726d94ab0afc0f33eedbf91b0a4dbbe0757c6e232f79ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://219c531368bb5379f3726d94ab0afc0f33eedbf91b0a4dbbe0757c6e232f79ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f9da8f3d4f85ec83c16b71ecd983ff4f48049eb8e73461a2b46c4f66d71ed06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9da8f3d4f85ec83c16b71ecd983ff4f48049eb8e73461a2b46c4f66d71ed06e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:12Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:12 crc kubenswrapper[4755]: I1210 15:24:12.286780 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:12Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:12 crc kubenswrapper[4755]: I1210 15:24:12.303009 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c371b199c3643a68ea5eba935eb76a1b7e8a4027c9292a1324116ccfc14742ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:12Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:12 crc kubenswrapper[4755]: I1210 15:24:12.319501 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wv8fh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d150a22e-c59a-4376-a5c8-db4085ea0be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58d106c5d1b9525ec821d009ca556449cd8d7f0e1b9c8ec7dd969df996e73625\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfw9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wv8fh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:12Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:12 crc kubenswrapper[4755]: I1210 15:24:12.329929 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q5ctz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17673130-8212-4f8f-8859-92774f0ee202\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcscn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcscn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q5ctz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:12Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:12 crc kubenswrapper[4755]: I1210 15:24:12.342643 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e6db97-7e22-4fec-924e-20e90f463887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef3871ddc16d2fc1b89a6f120390cf066424bd9ed4cca98f8dea4430c40c6c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08895c3ab20248c4bd70ccfae6442bdf8e76a602e605b043f0f00166624ada17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b2467645116856afa1e1e37501c7b453b03da193a96d1075886b85839a86ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16e58653dca08c9b33671279553975bca9507ffeeefea161119cec624813f167\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea18ef6c319336321d97c2a55449903b837b5a47da785da3ff5308244751943\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:12Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:12 crc kubenswrapper[4755]: I1210 15:24:12.353989 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:12Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:12 crc kubenswrapper[4755]: I1210 15:24:12.363660 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:12 crc kubenswrapper[4755]: I1210 15:24:12.363702 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:12 crc kubenswrapper[4755]: I1210 15:24:12.363713 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:12 crc kubenswrapper[4755]: I1210 15:24:12.363731 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:12 crc kubenswrapper[4755]: I1210 15:24:12.363744 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:12Z","lastTransitionTime":"2025-12-10T15:24:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:12 crc kubenswrapper[4755]: I1210 15:24:12.371095 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b1da51a-99c9-4f8e-920d-ce0973af6370\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6eb065dc6c0cc8914cb95553eb2683d894fb9a4e78ce7fac73bcce8d7f6cced9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e547993b9f2fa37bf924f909c47b62eb0cc02b659596b1cad9bbc42fdde8f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://e9ba47683cc23d5b531a45f0658b6a9378650400b35b5372642b0430a5ac503f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a75407e83508af9adebb09c6466a966dd791d29f690c539656f9bd3396d7031\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://602b4e49987fa2cc6b54b822110aececbdddaf2bce8f27cce4ed906768d45791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://335bcab3a79f09796e97560365e1211fb30ddf288f4773c05ab353197add4365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://807b21c925067bde1df3babdc35da6c8805e10be110c0282376dbd6e8be69dcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://194ac31f80a7b5a0fde0c62cb794234cb84e5b12e4f3db214ec3d16b4250797f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T15:23:59Z\\\",\\\"message\\\":\\\"ss event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1210 15:23:58.955006 6153 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1210 15:23:58.954962 6153 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI1210 15:23:58.955022 6153 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI1210 15:23:58.955028 6153 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI1210 15:23:58.955034 6153 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nF1210 15:23:58.955079 6153 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post 
\\\\\\\"ht\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:24:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59bf59d1b7fbc365a916fbbceca7ae30b7ebc754b34f2f7a34c2e21e1e1d2166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[
{\\\"containerID\\\":\\\"cri-o://375efb27cf6bef06a6a0ffc8f6b2963e1b9ffbca18b3c8e7b402bc552a411f6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://375efb27cf6bef06a6a0ffc8f6b2963e1b9ffbca18b3c8e7b402bc552a411f6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6lfvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:12Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:12 crc kubenswrapper[4755]: I1210 15:24:12.465789 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:12 crc kubenswrapper[4755]: I1210 15:24:12.465830 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:12 crc kubenswrapper[4755]: I1210 15:24:12.465838 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:12 crc kubenswrapper[4755]: I1210 15:24:12.465854 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:12 crc kubenswrapper[4755]: I1210 15:24:12.465864 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:12Z","lastTransitionTime":"2025-12-10T15:24:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:12 crc kubenswrapper[4755]: I1210 15:24:12.568333 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:12 crc kubenswrapper[4755]: I1210 15:24:12.568368 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:12 crc kubenswrapper[4755]: I1210 15:24:12.568377 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:12 crc kubenswrapper[4755]: I1210 15:24:12.568391 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:12 crc kubenswrapper[4755]: I1210 15:24:12.568403 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:12Z","lastTransitionTime":"2025-12-10T15:24:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:12 crc kubenswrapper[4755]: I1210 15:24:12.671938 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:12 crc kubenswrapper[4755]: I1210 15:24:12.671977 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:12 crc kubenswrapper[4755]: I1210 15:24:12.671992 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:12 crc kubenswrapper[4755]: I1210 15:24:12.672008 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:12 crc kubenswrapper[4755]: I1210 15:24:12.672018 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:12Z","lastTransitionTime":"2025-12-10T15:24:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:12 crc kubenswrapper[4755]: I1210 15:24:12.774776 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:12 crc kubenswrapper[4755]: I1210 15:24:12.774825 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:12 crc kubenswrapper[4755]: I1210 15:24:12.774835 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:12 crc kubenswrapper[4755]: I1210 15:24:12.774853 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:12 crc kubenswrapper[4755]: I1210 15:24:12.774862 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:12Z","lastTransitionTime":"2025-12-10T15:24:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:12 crc kubenswrapper[4755]: I1210 15:24:12.877423 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:12 crc kubenswrapper[4755]: I1210 15:24:12.877482 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:12 crc kubenswrapper[4755]: I1210 15:24:12.877494 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:12 crc kubenswrapper[4755]: I1210 15:24:12.877512 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:12 crc kubenswrapper[4755]: I1210 15:24:12.877522 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:12Z","lastTransitionTime":"2025-12-10T15:24:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:12 crc kubenswrapper[4755]: I1210 15:24:12.980150 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:12 crc kubenswrapper[4755]: I1210 15:24:12.980218 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:12 crc kubenswrapper[4755]: I1210 15:24:12.980233 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:12 crc kubenswrapper[4755]: I1210 15:24:12.980249 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:12 crc kubenswrapper[4755]: I1210 15:24:12.980259 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:12Z","lastTransitionTime":"2025-12-10T15:24:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:13 crc kubenswrapper[4755]: I1210 15:24:13.083060 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:13 crc kubenswrapper[4755]: I1210 15:24:13.083144 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:13 crc kubenswrapper[4755]: I1210 15:24:13.083185 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:13 crc kubenswrapper[4755]: I1210 15:24:13.083207 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:13 crc kubenswrapper[4755]: I1210 15:24:13.083223 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:13Z","lastTransitionTime":"2025-12-10T15:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:13 crc kubenswrapper[4755]: I1210 15:24:13.185091 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:13 crc kubenswrapper[4755]: I1210 15:24:13.185161 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:13 crc kubenswrapper[4755]: I1210 15:24:13.185178 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:13 crc kubenswrapper[4755]: I1210 15:24:13.185201 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:13 crc kubenswrapper[4755]: I1210 15:24:13.185224 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:13Z","lastTransitionTime":"2025-12-10T15:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:13 crc kubenswrapper[4755]: I1210 15:24:13.287270 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:13 crc kubenswrapper[4755]: I1210 15:24:13.287322 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:13 crc kubenswrapper[4755]: I1210 15:24:13.287332 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:13 crc kubenswrapper[4755]: I1210 15:24:13.287344 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:13 crc kubenswrapper[4755]: I1210 15:24:13.287354 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:13Z","lastTransitionTime":"2025-12-10T15:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:13 crc kubenswrapper[4755]: I1210 15:24:13.390664 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:13 crc kubenswrapper[4755]: I1210 15:24:13.390716 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:13 crc kubenswrapper[4755]: I1210 15:24:13.390761 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:13 crc kubenswrapper[4755]: I1210 15:24:13.390828 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:13 crc kubenswrapper[4755]: I1210 15:24:13.390846 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:13Z","lastTransitionTime":"2025-12-10T15:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:13 crc kubenswrapper[4755]: I1210 15:24:13.494055 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:13 crc kubenswrapper[4755]: I1210 15:24:13.494101 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:13 crc kubenswrapper[4755]: I1210 15:24:13.494110 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:13 crc kubenswrapper[4755]: I1210 15:24:13.494126 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:13 crc kubenswrapper[4755]: I1210 15:24:13.494134 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:13Z","lastTransitionTime":"2025-12-10T15:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:13 crc kubenswrapper[4755]: I1210 15:24:13.596261 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:13 crc kubenswrapper[4755]: I1210 15:24:13.596306 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:13 crc kubenswrapper[4755]: I1210 15:24:13.596316 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:13 crc kubenswrapper[4755]: I1210 15:24:13.596330 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:13 crc kubenswrapper[4755]: I1210 15:24:13.596341 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:13Z","lastTransitionTime":"2025-12-10T15:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:13 crc kubenswrapper[4755]: I1210 15:24:13.699000 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:13 crc kubenswrapper[4755]: I1210 15:24:13.699041 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:13 crc kubenswrapper[4755]: I1210 15:24:13.699052 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:13 crc kubenswrapper[4755]: I1210 15:24:13.699086 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:13 crc kubenswrapper[4755]: I1210 15:24:13.699095 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:13Z","lastTransitionTime":"2025-12-10T15:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:13 crc kubenswrapper[4755]: I1210 15:24:13.757372 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 15:24:13 crc kubenswrapper[4755]: I1210 15:24:13.757424 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5ctz" Dec 10 15:24:13 crc kubenswrapper[4755]: E1210 15:24:13.757564 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 15:24:13 crc kubenswrapper[4755]: I1210 15:24:13.757372 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 15:24:13 crc kubenswrapper[4755]: I1210 15:24:13.757679 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 15:24:13 crc kubenswrapper[4755]: E1210 15:24:13.757834 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5ctz" podUID="17673130-8212-4f8f-8859-92774f0ee202" Dec 10 15:24:13 crc kubenswrapper[4755]: E1210 15:24:13.757918 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 15:24:13 crc kubenswrapper[4755]: E1210 15:24:13.758007 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 15:24:13 crc kubenswrapper[4755]: I1210 15:24:13.769362 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qnmst" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1693fcf1-bef4-4f82-8dd8-f1797b03f5e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8fd6a2ab2b9557574951c0ebbccd663fe576262b0de2c3c655427c977f62d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hzdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qnmst\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:13Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:13 crc kubenswrapper[4755]: I1210 15:24:13.784401 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-82wnw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0a2f42c-a60e-4350-9ebb-c28d3cbfdad7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7af643111a7a5d1d78b0412b1621b5ffac6389760ea9190e26e9e5d1704eed4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xbzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://915c918562f8a69a28cbcf29e427bfdd94477193b98d5a2da9ad45cb4b44fa03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xbzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-82wnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:13Z is after 2025-08-24T17:21:41Z" Dec 10 
15:24:13 crc kubenswrapper[4755]: I1210 15:24:13.800096 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aed6bb9-78c2-410b-9c58-b60ab22a7bd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70d014e7227746c46b30f8f5a1f307a422d2fa0d4b98d98bfe5ba6217223489e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://104d730ea666ffd2d639ba7fc8d1ac5b444586788a62656fd260cb249f82b1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09dbec85547c7170bb9551e5657876814d48528e3047daf3547711a563d2b6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8bf1de0d8bdc0a20bc42ba5097d849ae0f507e5fbc18fb17b4ff3650e46ff0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:13Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:13 crc kubenswrapper[4755]: I1210 15:24:13.801817 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:13 crc kubenswrapper[4755]: I1210 15:24:13.801864 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:13 crc kubenswrapper[4755]: I1210 15:24:13.801881 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:13 crc kubenswrapper[4755]: I1210 15:24:13.801906 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:13 crc kubenswrapper[4755]: I1210 15:24:13.801923 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:13Z","lastTransitionTime":"2025-12-10T15:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:13 crc kubenswrapper[4755]: I1210 15:24:13.817583 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n8c6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b56ae78-835a-45da-bc46-5adff2bdf9fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8957db8ccef7b3c449920471d345aa81ce9a7ab9be36b2350ec428aebec7ac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b9a1485f6d0c0e53fd3505623b2788476bc00abea2f11650386eac40e449bcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9a1485f6d0c0e53fd3505623b2788476bc00abea2f11650386eac40e449bcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a532015d196c588ad3dd6c43d9fa3772ba2a42bd1b2231f830b60adb0addab0\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a532015d196c588ad3dd6c43d9fa3772ba2a42bd1b2231f830b60adb0addab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://417e326f541e04df27d540c14ec572bb8f1d749a9e3dc88f483241d69f7d0679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://417e326f541e04df27d540c14ec572bb8f1d749a9e3dc88f483241d69f7d0679\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42867f7a1f6aceaf708ad650b7634bdafb3f850872c540d424916440014e7941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42867f7a1f6aceaf708ad650b7634bdafb3f850872c540d424916440014e7941\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f2d22bfc26958a3aad99de39f5c831d36c1ba4f563851baafc73735e80d5c91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f2d22bfc26958a3aad99de39f5c831d36c1ba4f563851baafc73735e80d5c91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://548d326fc5c14c16176d4fff3446d93c17322c11d2a34e9e42376b665de3848d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://548d326fc5c14c16176d4fff3446d93c17322c11d2a34e9e42376b665de3848d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n8c6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:13Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:13 crc kubenswrapper[4755]: I1210 15:24:13.829317 4755 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c371b199c3643a68ea5eba935eb76a1b7e8a4027c9292a1324116ccfc14742ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:13Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:13 crc kubenswrapper[4755]: I1210 15:24:13.838501 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wv8fh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d150a22e-c59a-4376-a5c8-db4085ea0be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58d106c5d1b9525ec821d009ca556449cd8d7f0e1b9c8ec7dd969df996e73625\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfw9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wv8fh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:13Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:13 crc kubenswrapper[4755]: I1210 15:24:13.849411 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q5ctz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17673130-8212-4f8f-8859-92774f0ee202\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcscn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcscn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q5ctz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:13Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:13 crc kubenswrapper[4755]: I1210 15:24:13.871798 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fa03060-a6e0-4aad-9aa1-43b1a0d00c85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a823333ed5cb5de3988d25e50e4b7a0f9071c76fb39c22760a4f2acd5eb455d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d828f3f3b90a2dcb1c1908e6a686368af5b0d715b3251e4b8fcf3c8818ec75a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://172ab3fce08c8ba4095ff4095c89364a778644752bd7bb6c178d6e3ebcface69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfbb8e350ad18b78a6bcf6cfa4eb8f2fad90f97
0dba08a8b1b2026af6f255e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4b2bf0c88a16fd6b4b45a20730a92292895dc9f29ce756d347c302b25a8612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55da309288bc2c7ff1f22da0569d18155796700c200fe3ed508f6f8179756865\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55da309288bc2c7ff1f22da0569d18155796700c200fe3ed508f6f8179756865\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219c531368bb5379f3726d94ab0afc0f33eedbf91b0a4dbbe0757c6e232f79ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://219c531368bb5379f3726d94ab0afc0f33eedbf91b0a4dbbe0757c6e232f79ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f9da8f3d4f85ec83c16b71ecd983ff4f48049eb8e73461a2b46c4f66d71ed06e\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9da8f3d4f85ec83c16b71ecd983ff4f48049eb8e73461a2b46c4f66d71ed06e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:13Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:13 crc kubenswrapper[4755]: I1210 15:24:13.884627 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:13Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:13 crc kubenswrapper[4755]: I1210 15:24:13.895158 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:13Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:13 crc kubenswrapper[4755]: I1210 15:24:13.903678 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:13 crc kubenswrapper[4755]: I1210 15:24:13.903709 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:13 crc kubenswrapper[4755]: I1210 15:24:13.903720 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:13 crc kubenswrapper[4755]: I1210 15:24:13.903737 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:13 crc kubenswrapper[4755]: I1210 15:24:13.903753 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:13Z","lastTransitionTime":"2025-12-10T15:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:13 crc kubenswrapper[4755]: I1210 15:24:13.915627 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b1da51a-99c9-4f8e-920d-ce0973af6370\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6eb065dc6c0cc8914cb95553eb2683d894fb9a4e78ce7fac73bcce8d7f6cced9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e547993b9f2fa37bf924f909c47b62eb0cc02b659596b1cad9bbc42fdde8f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://e9ba47683cc23d5b531a45f0658b6a9378650400b35b5372642b0430a5ac503f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a75407e83508af9adebb09c6466a966dd791d29f690c539656f9bd3396d7031\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://602b4e49987fa2cc6b54b822110aececbdddaf2bce8f27cce4ed906768d45791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://335bcab3a79f09796e97560365e1211fb30ddf288f4773c05ab353197add4365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://807b21c925067bde1df3babdc35da6c8805e10be110c0282376dbd6e8be69dcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://194ac31f80a7b5a0fde0c62cb794234cb84e5b12e4f3db214ec3d16b4250797f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T15:23:59Z\\\",\\\"message\\\":\\\"ss event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1210 15:23:58.955006 6153 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1210 15:23:58.954962 6153 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI1210 15:23:58.955022 6153 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI1210 15:23:58.955028 6153 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI1210 15:23:58.955034 6153 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nF1210 15:23:58.955079 6153 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post 
\\\\\\\"ht\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:24:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59bf59d1b7fbc365a916fbbceca7ae30b7ebc754b34f2f7a34c2e21e1e1d2166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[
{\\\"containerID\\\":\\\"cri-o://375efb27cf6bef06a6a0ffc8f6b2963e1b9ffbca18b3c8e7b402bc552a411f6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://375efb27cf6bef06a6a0ffc8f6b2963e1b9ffbca18b3c8e7b402bc552a411f6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6lfvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:13Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:13 crc kubenswrapper[4755]: I1210 15:24:13.932280 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e6db97-7e22-4fec-924e-20e90f463887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef3871ddc16d2fc1b89a6f120390cf066424bd9ed4cca98f8dea4430c40c6c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08895c3ab20248c4bd70ccfae6442bdf8e76a602e605b043f0f00166624ada17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b2467645116856afa1e1e37501c7b453b03da193a96d1075886b85839a86ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16e58653dca08c9b33671279553975bca9507ffeeefea161119cec624813f167\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea18ef6c319336321d97c2a55449903b837b5a47da785da3ff5308244751943\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:13Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:13 crc kubenswrapper[4755]: I1210 15:24:13.942599 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01ddc4056319db1f69268dcae192c3cb9db6c25284305803ae7588e59f77c346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae6bf174cdc9bde18d7c959e976454e73c1e67642f0158d365b79582f63f3cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:13Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:13 crc kubenswrapper[4755]: I1210 15:24:13.949456 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/17673130-8212-4f8f-8859-92774f0ee202-metrics-certs\") pod \"network-metrics-daemon-q5ctz\" (UID: \"17673130-8212-4f8f-8859-92774f0ee202\") " pod="openshift-multus/network-metrics-daemon-q5ctz" Dec 10 15:24:13 crc kubenswrapper[4755]: E1210 15:24:13.949630 4755 secret.go:188] 
Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 10 15:24:13 crc kubenswrapper[4755]: E1210 15:24:13.949722 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/17673130-8212-4f8f-8859-92774f0ee202-metrics-certs podName:17673130-8212-4f8f-8859-92774f0ee202 nodeName:}" failed. No retries permitted until 2025-12-10 15:24:29.949701454 +0000 UTC m=+66.550585136 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/17673130-8212-4f8f-8859-92774f0ee202-metrics-certs") pod "network-metrics-daemon-q5ctz" (UID: "17673130-8212-4f8f-8859-92774f0ee202") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 10 15:24:13 crc kubenswrapper[4755]: I1210 15:24:13.953662 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:13Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:13 crc kubenswrapper[4755]: I1210 15:24:13.965325 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zl2tx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"796da6d5-6ccd-4786-a03e-9a8e47a55031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de63a123c46563bd8cd07e669d192bc8b019a889b9bdb7af1b988872c8f1fc48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzg4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zl2tx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:13Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:13 crc kubenswrapper[4755]: I1210 15:24:13.976425 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b132a8b9-1c99-414d-8773-229bf36b305d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5a5a59e9f156fb791ec822c2d5efe3fc6ec0e84bfcb2b6f5da81396951984c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2mdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-
o://a40c4bdaa23a60a665b8f565720d79b68cac62d40246be94fc6cd314b1bb3656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2mdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ggt8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:13Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:13 crc kubenswrapper[4755]: I1210 15:24:13.989824 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d089a8bddfa1f80d29011c7a6bab0a300f7dd44bdb2864f86951ebbb9ebea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:13Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:14 crc kubenswrapper[4755]: I1210 15:24:14.006279 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:14 crc kubenswrapper[4755]: I1210 15:24:14.006310 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:14 crc kubenswrapper[4755]: I1210 15:24:14.006318 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:14 crc kubenswrapper[4755]: I1210 15:24:14.006333 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:14 crc kubenswrapper[4755]: I1210 15:24:14.006342 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:14Z","lastTransitionTime":"2025-12-10T15:24:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:14 crc kubenswrapper[4755]: I1210 15:24:14.109071 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:14 crc kubenswrapper[4755]: I1210 15:24:14.109146 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:14 crc kubenswrapper[4755]: I1210 15:24:14.109170 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:14 crc kubenswrapper[4755]: I1210 15:24:14.109204 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:14 crc kubenswrapper[4755]: I1210 15:24:14.109225 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:14Z","lastTransitionTime":"2025-12-10T15:24:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:14 crc kubenswrapper[4755]: I1210 15:24:14.133023 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6lfvk_4b1da51a-99c9-4f8e-920d-ce0973af6370/ovnkube-controller/2.log" Dec 10 15:24:14 crc kubenswrapper[4755]: I1210 15:24:14.137165 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6lfvk_4b1da51a-99c9-4f8e-920d-ce0973af6370/ovnkube-controller/1.log" Dec 10 15:24:14 crc kubenswrapper[4755]: I1210 15:24:14.139942 4755 generic.go:334] "Generic (PLEG): container finished" podID="4b1da51a-99c9-4f8e-920d-ce0973af6370" containerID="807b21c925067bde1df3babdc35da6c8805e10be110c0282376dbd6e8be69dcb" exitCode=1 Dec 10 15:24:14 crc kubenswrapper[4755]: I1210 15:24:14.139980 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" event={"ID":"4b1da51a-99c9-4f8e-920d-ce0973af6370","Type":"ContainerDied","Data":"807b21c925067bde1df3babdc35da6c8805e10be110c0282376dbd6e8be69dcb"} Dec 10 15:24:14 crc kubenswrapper[4755]: I1210 15:24:14.140018 4755 scope.go:117] "RemoveContainer" containerID="194ac31f80a7b5a0fde0c62cb794234cb84e5b12e4f3db214ec3d16b4250797f" Dec 10 15:24:14 crc kubenswrapper[4755]: I1210 15:24:14.140660 4755 scope.go:117] "RemoveContainer" containerID="807b21c925067bde1df3babdc35da6c8805e10be110c0282376dbd6e8be69dcb" Dec 10 15:24:14 crc kubenswrapper[4755]: E1210 15:24:14.140837 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-6lfvk_openshift-ovn-kubernetes(4b1da51a-99c9-4f8e-920d-ce0973af6370)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" podUID="4b1da51a-99c9-4f8e-920d-ce0973af6370" Dec 10 15:24:14 crc kubenswrapper[4755]: I1210 15:24:14.156203 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qnmst" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1693fcf1-bef4-4f82-8dd8-f1797b03f5e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8fd6a2ab2b9557574951c0ebbccd663fe576262b0de2c3c655427c977f62d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hzdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qnmst\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:14Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:14 crc kubenswrapper[4755]: I1210 15:24:14.169440 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-82wnw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0a2f42c-a60e-4350-9ebb-c28d3cbfdad7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7af643111a7a5d1d78b0412b1621b5ffac6389760ea9190e26e9e5d1704eed4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xbzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://915c918562f8a69a28cbcf29e427bfdd94477193b98d5a2da9ad45cb4b44fa03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xbzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-82wnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:14Z is after 2025-08-24T17:21:41Z" Dec 10 
15:24:14 crc kubenswrapper[4755]: I1210 15:24:14.181065 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aed6bb9-78c2-410b-9c58-b60ab22a7bd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70d014e7227746c46b30f8f5a1f307a422d2fa0d4b98d98bfe5ba6217223489e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://104d730ea666ffd2d639ba7fc8d1ac5b444586788a62656fd260cb249f82b1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09dbec85547c7170bb9551e5657876814d48528e3047daf3547711a563d2b6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8bf1de0d8bdc0a20bc42ba5097d849ae0f507e5fbc18fb17b4ff3650e46ff0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:14Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:14 crc kubenswrapper[4755]: I1210 15:24:14.194698 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n8c6s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b56ae78-835a-45da-bc46-5adff2bdf9fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8957db8ccef7b3c449920471d345aa81ce9a7ab9be36b2350ec428aebec7ac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b9a1485f6d0c0e53fd3505623b2788476bc00abea2f11650386eac40e449bcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9a1485f6d0c0e53fd3505623b2788476bc00abea2f11650386eac40e449bcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a532015d196c588ad3dd6c43d9fa3772ba2a42bd1b2231f830b60adb0addab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a532015d196c588ad3dd6c43d9fa3772ba2a42bd1b2231f830b60adb0addab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://417e326f541e04df27d540c14ec572bb8f1d749a9e3dc88f483241d69f7d0679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://417e326f541e04df27d540c14ec572bb8f1d749a9e3dc88f483241d69f7d0679\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42867f7a1f6aceaf708ad650b7634bdafb3f850872c540d424916440014e7941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42867f7a1f6aceaf708ad650b7634bdafb3f850872c540d424916440014e7941\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f2d22bfc26958a3aad99de39f5c831d36c1ba4f563851baafc73735e80d5c91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f2d22bfc26958a3aad99de39f5c831d36c1ba4f563851baafc73735e80d5c91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://548d326fc5c14c16176d4fff3446d93c17322c11d2a34e9e42376b665de3848d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://548d326fc5c14c16176d4fff3446d93c17322c11d2a34e9e42376b665de3848d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n8c6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:14Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:14 crc kubenswrapper[4755]: I1210 15:24:14.206648 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c371b199c3643a68ea5eba935eb76a1b7e8a4027c9292a1324116ccfc14742ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:14Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:14 crc kubenswrapper[4755]: I1210 15:24:14.211359 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:14 crc kubenswrapper[4755]: I1210 15:24:14.211396 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:14 crc kubenswrapper[4755]: I1210 15:24:14.211404 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:14 crc kubenswrapper[4755]: I1210 15:24:14.211418 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:14 crc kubenswrapper[4755]: I1210 15:24:14.211427 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:14Z","lastTransitionTime":"2025-12-10T15:24:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:14 crc kubenswrapper[4755]: I1210 15:24:14.218592 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wv8fh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d150a22e-c59a-4376-a5c8-db4085ea0be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58d106c5d1b9525ec821d009ca556449cd8d7f0e1b9c8ec7dd969df996e73625\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfw9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wv8fh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:14Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:14 crc kubenswrapper[4755]: I1210 15:24:14.229922 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q5ctz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17673130-8212-4f8f-8859-92774f0ee202\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcscn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcscn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q5ctz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:14Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:14 crc kubenswrapper[4755]: I1210 15:24:14.249498 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fa03060-a6e0-4aad-9aa1-43b1a0d00c85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a823333ed5cb5de3988d25e50e4b7a0f9071c76fb39c22760a4f2acd5eb455d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d828f3f3b90a2dcb1c1908e6a686368af5b0d715b3251e4b8fcf3c8818ec75a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://172ab3fce08c8ba4095ff4095c89364a778644752bd7bb6c178d6e3ebcface69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfbb8e350ad18b78a6bcf6cfa4eb8f2fad90f97
0dba08a8b1b2026af6f255e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4b2bf0c88a16fd6b4b45a20730a92292895dc9f29ce756d347c302b25a8612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55da309288bc2c7ff1f22da0569d18155796700c200fe3ed508f6f8179756865\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55da309288bc2c7ff1f22da0569d18155796700c200fe3ed508f6f8179756865\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219c531368bb5379f3726d94ab0afc0f33eedbf91b0a4dbbe0757c6e232f79ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://219c531368bb5379f3726d94ab0afc0f33eedbf91b0a4dbbe0757c6e232f79ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f9da8f3d4f85ec83c16b71ecd983ff4f48049eb8e73461a2b46c4f66d71ed06e\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9da8f3d4f85ec83c16b71ecd983ff4f48049eb8e73461a2b46c4f66d71ed06e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:14Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:14 crc kubenswrapper[4755]: I1210 15:24:14.262058 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:14Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:14 crc kubenswrapper[4755]: I1210 15:24:14.273834 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:14Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:14 crc kubenswrapper[4755]: I1210 15:24:14.290402 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b1da51a-99c9-4f8e-920d-ce0973af6370\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6eb065dc6c0cc8914cb95553eb2683d894fb9a4e78ce7fac73bcce8d7f6cced9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e547993b9f2fa37bf924f909c47b62eb0cc02b659596b1cad9bbc42fdde8f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ba47683cc23d5b531a45f0658b6a9378650400b35b5372642b0430a5ac503f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a75407e83508af9adebb09c6466a966dd791d29f690c539656f9bd3396d7031\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://602b4e49987fa2cc6b54b822110aececbdddaf2bce8f27cce4ed906768d45791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://335bcab3a79f09796e97560365e1211fb30ddf288f4773c05ab353197add4365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://807b21c925067bde1df3babdc35da6c8805e10be
110c0282376dbd6e8be69dcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://194ac31f80a7b5a0fde0c62cb794234cb84e5b12e4f3db214ec3d16b4250797f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T15:23:59Z\\\",\\\"message\\\":\\\"ss event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1210 15:23:58.955006 6153 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1210 15:23:58.954962 6153 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI1210 15:23:58.955022 6153 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI1210 15:23:58.955028 6153 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI1210 15:23:58.955034 6153 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nF1210 15:23:58.955079 6153 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"ht\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://807b21c925067bde1df3babdc35da6c8805e10be110c0282376dbd6e8be69dcb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T15:24:13Z\\\",\\\"message\\\":\\\"s, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI1210 15:24:12.549599 6361 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1210 15:24:12.549769 6361 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1210 15:24:12.549565 6361 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-zl2tx in node crc\\\\nI1210 15:24:12.549780 6361 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-zl2tx after 0 failed attempt(s)\\\\nI1210 15:24:12.549784 6361 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-zl2tx\\\\nI1210 15:24:12.549311 6361 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-apiserver/apiserver_TCP_cluster\\\\\\\", UUID:\\\\\\\"d71b38eb-32af-4c0f-9490-7c317c111e3a\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-apiserver/apiserver\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]st\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T15:24:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59bf59d1b7fbc365a916fbbceca7ae30b7ebc754b34f2f7a34c2e21e1e1d2166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]
}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://375efb27cf6bef06a6a0ffc8f6b2963e1b9ffbca18b3c8e7b402bc552a411f6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://375efb27cf6bef06a6a0ffc8f6b2963e1b9ffbca18b3c8e7b402bc552a411f6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6lfvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:14Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:14 crc kubenswrapper[4755]: I1210 15:24:14.304046 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e6db97-7e22-4fec-924e-20e90f463887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef3871ddc16d2fc1b89a6f120390cf066424bd9ed4cca98f8dea4430c40c6c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08895c3ab20248c4bd70ccfae6442bdf8e76a602e605b043f0f00166624ada17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b2467645116856afa1e1e37501c7b453b03da193a96d1075886b85839a86ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16e58653dca08c9b33671279553975bca9507ffeeefea161119cec624813f167\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea18ef6c319336321d97c2a55449903b837b5a47da785da3ff5308244751943\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:14Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:14 crc kubenswrapper[4755]: I1210 15:24:14.313819 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:14 crc kubenswrapper[4755]: I1210 15:24:14.313858 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:14 crc kubenswrapper[4755]: I1210 15:24:14.313868 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:14 crc kubenswrapper[4755]: I1210 15:24:14.313905 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 
15:24:14 crc kubenswrapper[4755]: I1210 15:24:14.313919 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:14Z","lastTransitionTime":"2025-12-10T15:24:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:14 crc kubenswrapper[4755]: I1210 15:24:14.318099 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01ddc4056319db1f69268dcae192c3cb9db6c25284305803ae7588e59f77c346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae6bf174cdc9bde18d7c959e976454e73c1e67642f0158d365b79582f63f3cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:14Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:14 crc kubenswrapper[4755]: I1210 15:24:14.331699 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:14Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:14 crc kubenswrapper[4755]: I1210 15:24:14.344173 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zl2tx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"796da6d5-6ccd-4786-a03e-9a8e47a55031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de63a123c46563bd8cd07e669d192bc8b019a889b9bdb7af1b988872c8f1fc48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzg4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zl2tx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:14Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:14 crc kubenswrapper[4755]: I1210 15:24:14.354593 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b132a8b9-1c99-414d-8773-229bf36b305d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5a5a59e9f156fb791ec822c2d5efe3fc6ec0e84bfcb2b6f5da81396951984c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2mdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40c4bdaa23a60a665b8f565720d79b68cac62d40246be94fc6cd314b1bb3656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2mdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-ggt8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:14Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:14 crc kubenswrapper[4755]: I1210 15:24:14.365243 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d089a8bddfa1f80d29011c7a6bab0a300f7dd44bdb2864f86951ebbb9ebea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:14Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:14 crc kubenswrapper[4755]: I1210 15:24:14.417059 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:14 crc kubenswrapper[4755]: I1210 15:24:14.417279 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:14 crc kubenswrapper[4755]: I1210 15:24:14.417339 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:14 crc kubenswrapper[4755]: I1210 15:24:14.417398 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:14 crc kubenswrapper[4755]: I1210 15:24:14.417456 4755 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:14Z","lastTransitionTime":"2025-12-10T15:24:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:14 crc kubenswrapper[4755]: I1210 15:24:14.520341 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:14 crc kubenswrapper[4755]: I1210 15:24:14.520422 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:14 crc kubenswrapper[4755]: I1210 15:24:14.520446 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:14 crc kubenswrapper[4755]: I1210 15:24:14.520523 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:14 crc kubenswrapper[4755]: I1210 15:24:14.520548 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:14Z","lastTransitionTime":"2025-12-10T15:24:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:14 crc kubenswrapper[4755]: I1210 15:24:14.623509 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:14 crc kubenswrapper[4755]: I1210 15:24:14.623548 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:14 crc kubenswrapper[4755]: I1210 15:24:14.623560 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:14 crc kubenswrapper[4755]: I1210 15:24:14.623574 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:14 crc kubenswrapper[4755]: I1210 15:24:14.623585 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:14Z","lastTransitionTime":"2025-12-10T15:24:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:14 crc kubenswrapper[4755]: I1210 15:24:14.727329 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:14 crc kubenswrapper[4755]: I1210 15:24:14.727717 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:14 crc kubenswrapper[4755]: I1210 15:24:14.727812 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:14 crc kubenswrapper[4755]: I1210 15:24:14.727887 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:14 crc kubenswrapper[4755]: I1210 15:24:14.727996 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:14Z","lastTransitionTime":"2025-12-10T15:24:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:14 crc kubenswrapper[4755]: I1210 15:24:14.830660 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:14 crc kubenswrapper[4755]: I1210 15:24:14.830699 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:14 crc kubenswrapper[4755]: I1210 15:24:14.830711 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:14 crc kubenswrapper[4755]: I1210 15:24:14.830726 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:14 crc kubenswrapper[4755]: I1210 15:24:14.830736 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:14Z","lastTransitionTime":"2025-12-10T15:24:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:14 crc kubenswrapper[4755]: I1210 15:24:14.932853 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:14 crc kubenswrapper[4755]: I1210 15:24:14.932901 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:14 crc kubenswrapper[4755]: I1210 15:24:14.932914 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:14 crc kubenswrapper[4755]: I1210 15:24:14.932933 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:14 crc kubenswrapper[4755]: I1210 15:24:14.932947 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:14Z","lastTransitionTime":"2025-12-10T15:24:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:15 crc kubenswrapper[4755]: I1210 15:24:15.035363 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:15 crc kubenswrapper[4755]: I1210 15:24:15.035418 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:15 crc kubenswrapper[4755]: I1210 15:24:15.035428 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:15 crc kubenswrapper[4755]: I1210 15:24:15.035445 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:15 crc kubenswrapper[4755]: I1210 15:24:15.035456 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:15Z","lastTransitionTime":"2025-12-10T15:24:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:15 crc kubenswrapper[4755]: I1210 15:24:15.137872 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:15 crc kubenswrapper[4755]: I1210 15:24:15.137928 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:15 crc kubenswrapper[4755]: I1210 15:24:15.137944 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:15 crc kubenswrapper[4755]: I1210 15:24:15.137974 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:15 crc kubenswrapper[4755]: I1210 15:24:15.137990 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:15Z","lastTransitionTime":"2025-12-10T15:24:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:15 crc kubenswrapper[4755]: I1210 15:24:15.145135 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6lfvk_4b1da51a-99c9-4f8e-920d-ce0973af6370/ovnkube-controller/2.log" Dec 10 15:24:15 crc kubenswrapper[4755]: I1210 15:24:15.241211 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:15 crc kubenswrapper[4755]: I1210 15:24:15.241248 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:15 crc kubenswrapper[4755]: I1210 15:24:15.241257 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:15 crc kubenswrapper[4755]: I1210 15:24:15.241270 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:15 crc kubenswrapper[4755]: I1210 15:24:15.241279 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:15Z","lastTransitionTime":"2025-12-10T15:24:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:15 crc kubenswrapper[4755]: I1210 15:24:15.344380 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:15 crc kubenswrapper[4755]: I1210 15:24:15.344425 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:15 crc kubenswrapper[4755]: I1210 15:24:15.344434 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:15 crc kubenswrapper[4755]: I1210 15:24:15.344448 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:15 crc kubenswrapper[4755]: I1210 15:24:15.344457 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:15Z","lastTransitionTime":"2025-12-10T15:24:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:15 crc kubenswrapper[4755]: I1210 15:24:15.447892 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:15 crc kubenswrapper[4755]: I1210 15:24:15.447944 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:15 crc kubenswrapper[4755]: I1210 15:24:15.447958 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:15 crc kubenswrapper[4755]: I1210 15:24:15.447980 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:15 crc kubenswrapper[4755]: I1210 15:24:15.447996 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:15Z","lastTransitionTime":"2025-12-10T15:24:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:15 crc kubenswrapper[4755]: I1210 15:24:15.551435 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:15 crc kubenswrapper[4755]: I1210 15:24:15.551492 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:15 crc kubenswrapper[4755]: I1210 15:24:15.551501 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:15 crc kubenswrapper[4755]: I1210 15:24:15.551515 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:15 crc kubenswrapper[4755]: I1210 15:24:15.551527 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:15Z","lastTransitionTime":"2025-12-10T15:24:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:15 crc kubenswrapper[4755]: I1210 15:24:15.654300 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:15 crc kubenswrapper[4755]: I1210 15:24:15.654362 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:15 crc kubenswrapper[4755]: I1210 15:24:15.654378 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:15 crc kubenswrapper[4755]: I1210 15:24:15.654399 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:15 crc kubenswrapper[4755]: I1210 15:24:15.654414 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:15Z","lastTransitionTime":"2025-12-10T15:24:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:15 crc kubenswrapper[4755]: I1210 15:24:15.667168 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 15:24:15 crc kubenswrapper[4755]: I1210 15:24:15.667284 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 15:24:15 crc kubenswrapper[4755]: I1210 15:24:15.667329 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 15:24:15 crc kubenswrapper[4755]: E1210 15:24:15.667373 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 15:24:47.66735002 +0000 UTC m=+84.268233682 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:24:15 crc kubenswrapper[4755]: E1210 15:24:15.667417 4755 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 10 15:24:15 crc kubenswrapper[4755]: E1210 15:24:15.667440 4755 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 10 15:24:15 crc kubenswrapper[4755]: E1210 15:24:15.667452 4755 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 15:24:15 crc kubenswrapper[4755]: E1210 15:24:15.667496 4755 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 10 15:24:15 crc kubenswrapper[4755]: E1210 15:24:15.667506 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2025-12-10 15:24:47.667497264 +0000 UTC m=+84.268380896 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 15:24:15 crc kubenswrapper[4755]: E1210 15:24:15.667530 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-10 15:24:47.667520995 +0000 UTC m=+84.268404707 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 10 15:24:15 crc kubenswrapper[4755]: E1210 15:24:15.667417 4755 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 10 15:24:15 crc kubenswrapper[4755]: E1210 15:24:15.667554 4755 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 10 15:24:15 crc kubenswrapper[4755]: E1210 15:24:15.667563 4755 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 15:24:15 crc kubenswrapper[4755]: I1210 15:24:15.667417 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 15:24:15 crc kubenswrapper[4755]: E1210 15:24:15.667594 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-10 15:24:47.667586136 +0000 UTC m=+84.268469858 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 15:24:15 crc kubenswrapper[4755]: I1210 15:24:15.667625 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 15:24:15 crc kubenswrapper[4755]: E1210 15:24:15.667728 4755 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 10 15:24:15 crc kubenswrapper[4755]: E1210 15:24:15.667794 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-10 15:24:47.667772881 +0000 UTC m=+84.268656583 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 10 15:24:15 crc kubenswrapper[4755]: I1210 15:24:15.756577 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 15:24:15 crc kubenswrapper[4755]: I1210 15:24:15.756585 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 15:24:15 crc kubenswrapper[4755]: E1210 15:24:15.756736 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 15:24:15 crc kubenswrapper[4755]: I1210 15:24:15.756604 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5ctz" Dec 10 15:24:15 crc kubenswrapper[4755]: E1210 15:24:15.756798 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 15:24:15 crc kubenswrapper[4755]: I1210 15:24:15.756600 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 15:24:15 crc kubenswrapper[4755]: E1210 15:24:15.756850 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5ctz" podUID="17673130-8212-4f8f-8859-92774f0ee202" Dec 10 15:24:15 crc kubenswrapper[4755]: I1210 15:24:15.756911 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:15 crc kubenswrapper[4755]: I1210 15:24:15.756955 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:15 crc kubenswrapper[4755]: I1210 15:24:15.756971 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:15 crc kubenswrapper[4755]: I1210 15:24:15.756991 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:15 crc kubenswrapper[4755]: I1210 15:24:15.757007 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:15Z","lastTransitionTime":"2025-12-10T15:24:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:15 crc kubenswrapper[4755]: E1210 15:24:15.757055 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 15:24:15 crc kubenswrapper[4755]: I1210 15:24:15.859976 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:15 crc kubenswrapper[4755]: I1210 15:24:15.860051 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:15 crc kubenswrapper[4755]: I1210 15:24:15.860088 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:15 crc kubenswrapper[4755]: I1210 15:24:15.860108 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:15 crc kubenswrapper[4755]: I1210 15:24:15.860120 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:15Z","lastTransitionTime":"2025-12-10T15:24:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:15 crc kubenswrapper[4755]: I1210 15:24:15.963339 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:15 crc kubenswrapper[4755]: I1210 15:24:15.963407 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:15 crc kubenswrapper[4755]: I1210 15:24:15.963426 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:15 crc kubenswrapper[4755]: I1210 15:24:15.963452 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:15 crc kubenswrapper[4755]: I1210 15:24:15.963506 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:15Z","lastTransitionTime":"2025-12-10T15:24:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:16 crc kubenswrapper[4755]: I1210 15:24:16.066682 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:16 crc kubenswrapper[4755]: I1210 15:24:16.066760 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:16 crc kubenswrapper[4755]: I1210 15:24:16.066796 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:16 crc kubenswrapper[4755]: I1210 15:24:16.066826 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:16 crc kubenswrapper[4755]: I1210 15:24:16.066850 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:16Z","lastTransitionTime":"2025-12-10T15:24:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:16 crc kubenswrapper[4755]: I1210 15:24:16.169260 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:16 crc kubenswrapper[4755]: I1210 15:24:16.169323 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:16 crc kubenswrapper[4755]: I1210 15:24:16.169343 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:16 crc kubenswrapper[4755]: I1210 15:24:16.169373 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:16 crc kubenswrapper[4755]: I1210 15:24:16.169395 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:16Z","lastTransitionTime":"2025-12-10T15:24:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:16 crc kubenswrapper[4755]: I1210 15:24:16.273031 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:16 crc kubenswrapper[4755]: I1210 15:24:16.273096 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:16 crc kubenswrapper[4755]: I1210 15:24:16.273114 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:16 crc kubenswrapper[4755]: I1210 15:24:16.273140 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:16 crc kubenswrapper[4755]: I1210 15:24:16.273161 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:16Z","lastTransitionTime":"2025-12-10T15:24:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:16 crc kubenswrapper[4755]: I1210 15:24:16.376777 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:16 crc kubenswrapper[4755]: I1210 15:24:16.376857 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:16 crc kubenswrapper[4755]: I1210 15:24:16.376881 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:16 crc kubenswrapper[4755]: I1210 15:24:16.376913 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:16 crc kubenswrapper[4755]: I1210 15:24:16.376936 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:16Z","lastTransitionTime":"2025-12-10T15:24:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:16 crc kubenswrapper[4755]: I1210 15:24:16.479518 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:16 crc kubenswrapper[4755]: I1210 15:24:16.479551 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:16 crc kubenswrapper[4755]: I1210 15:24:16.479582 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:16 crc kubenswrapper[4755]: I1210 15:24:16.479600 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:16 crc kubenswrapper[4755]: I1210 15:24:16.479611 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:16Z","lastTransitionTime":"2025-12-10T15:24:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:16 crc kubenswrapper[4755]: I1210 15:24:16.583531 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:16 crc kubenswrapper[4755]: I1210 15:24:16.583571 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:16 crc kubenswrapper[4755]: I1210 15:24:16.583579 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:16 crc kubenswrapper[4755]: I1210 15:24:16.583592 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:16 crc kubenswrapper[4755]: I1210 15:24:16.583601 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:16Z","lastTransitionTime":"2025-12-10T15:24:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:16 crc kubenswrapper[4755]: I1210 15:24:16.628962 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:16 crc kubenswrapper[4755]: I1210 15:24:16.629025 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:16 crc kubenswrapper[4755]: I1210 15:24:16.629038 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:16 crc kubenswrapper[4755]: I1210 15:24:16.629055 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:16 crc kubenswrapper[4755]: I1210 15:24:16.629068 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:16Z","lastTransitionTime":"2025-12-10T15:24:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:16 crc kubenswrapper[4755]: E1210 15:24:16.640638 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ba232303-88d5-4931-b82e-34d9a0e5c06a\\\",\\\"systemUUID\\\":\\\"ebd59de0-c6b0-47c1-bc17-6f665dcf344d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:16Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:16 crc kubenswrapper[4755]: I1210 15:24:16.644453 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:16 crc kubenswrapper[4755]: I1210 15:24:16.644510 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 10 15:24:16 crc kubenswrapper[4755]: I1210 15:24:16.644523 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:16 crc kubenswrapper[4755]: I1210 15:24:16.644544 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:16 crc kubenswrapper[4755]: I1210 15:24:16.644561 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:16Z","lastTransitionTime":"2025-12-10T15:24:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:16 crc kubenswrapper[4755]: E1210 15:24:16.657395 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ba232303-88d5-4931-b82e-34d9a0e5c06a\\\",\\\"systemUUID\\\":\\\"ebd59de0-c6b0-47c1-bc17-6f665dcf344d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:16Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:16 crc kubenswrapper[4755]: I1210 15:24:16.661391 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:16 crc kubenswrapper[4755]: I1210 15:24:16.661438 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 10 15:24:16 crc kubenswrapper[4755]: I1210 15:24:16.661454 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:16 crc kubenswrapper[4755]: I1210 15:24:16.661502 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:16 crc kubenswrapper[4755]: I1210 15:24:16.661522 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:16Z","lastTransitionTime":"2025-12-10T15:24:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:16 crc kubenswrapper[4755]: E1210 15:24:16.675411 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ba232303-88d5-4931-b82e-34d9a0e5c06a\\\",\\\"systemUUID\\\":\\\"ebd59de0-c6b0-47c1-bc17-6f665dcf344d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:16Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:16 crc kubenswrapper[4755]: I1210 15:24:16.679855 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:16 crc kubenswrapper[4755]: I1210 15:24:16.679909 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 10 15:24:16 crc kubenswrapper[4755]: I1210 15:24:16.679922 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:16 crc kubenswrapper[4755]: I1210 15:24:16.679939 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:16 crc kubenswrapper[4755]: I1210 15:24:16.679950 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:16Z","lastTransitionTime":"2025-12-10T15:24:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:16 crc kubenswrapper[4755]: E1210 15:24:16.693628 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ba232303-88d5-4931-b82e-34d9a0e5c06a\\\",\\\"systemUUID\\\":\\\"ebd59de0-c6b0-47c1-bc17-6f665dcf344d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:16Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:16 crc kubenswrapper[4755]: I1210 15:24:16.697875 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:16 crc kubenswrapper[4755]: I1210 15:24:16.698156 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 10 15:24:16 crc kubenswrapper[4755]: I1210 15:24:16.698246 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:16 crc kubenswrapper[4755]: I1210 15:24:16.698319 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:16 crc kubenswrapper[4755]: I1210 15:24:16.698404 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:16Z","lastTransitionTime":"2025-12-10T15:24:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:16 crc kubenswrapper[4755]: E1210 15:24:16.711320 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ba232303-88d5-4931-b82e-34d9a0e5c06a\\\",\\\"systemUUID\\\":\\\"ebd59de0-c6b0-47c1-bc17-6f665dcf344d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:16Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:16 crc kubenswrapper[4755]: E1210 15:24:16.711496 4755 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 10 15:24:16 crc kubenswrapper[4755]: I1210 15:24:16.713048 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 10 15:24:16 crc kubenswrapper[4755]: I1210 15:24:16.713186 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:16 crc kubenswrapper[4755]: I1210 15:24:16.713256 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:16 crc kubenswrapper[4755]: I1210 15:24:16.713337 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:16 crc kubenswrapper[4755]: I1210 15:24:16.713400 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:16Z","lastTransitionTime":"2025-12-10T15:24:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:16 crc kubenswrapper[4755]: I1210 15:24:16.816610 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:16 crc kubenswrapper[4755]: I1210 15:24:16.816682 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:16 crc kubenswrapper[4755]: I1210 15:24:16.816705 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:16 crc kubenswrapper[4755]: I1210 15:24:16.816737 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:16 crc kubenswrapper[4755]: I1210 15:24:16.816765 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:16Z","lastTransitionTime":"2025-12-10T15:24:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:16 crc kubenswrapper[4755]: I1210 15:24:16.920099 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:16 crc kubenswrapper[4755]: I1210 15:24:16.920148 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:16 crc kubenswrapper[4755]: I1210 15:24:16.920158 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:16 crc kubenswrapper[4755]: I1210 15:24:16.920175 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:16 crc kubenswrapper[4755]: I1210 15:24:16.920189 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:16Z","lastTransitionTime":"2025-12-10T15:24:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:17 crc kubenswrapper[4755]: I1210 15:24:17.022870 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:17 crc kubenswrapper[4755]: I1210 15:24:17.022918 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:17 crc kubenswrapper[4755]: I1210 15:24:17.022928 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:17 crc kubenswrapper[4755]: I1210 15:24:17.022945 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:17 crc kubenswrapper[4755]: I1210 15:24:17.022954 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:17Z","lastTransitionTime":"2025-12-10T15:24:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:17 crc kubenswrapper[4755]: I1210 15:24:17.125291 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:17 crc kubenswrapper[4755]: I1210 15:24:17.125341 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:17 crc kubenswrapper[4755]: I1210 15:24:17.125360 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:17 crc kubenswrapper[4755]: I1210 15:24:17.125388 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:17 crc kubenswrapper[4755]: I1210 15:24:17.125406 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:17Z","lastTransitionTime":"2025-12-10T15:24:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:17 crc kubenswrapper[4755]: I1210 15:24:17.227966 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:17 crc kubenswrapper[4755]: I1210 15:24:17.228070 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:17 crc kubenswrapper[4755]: I1210 15:24:17.228080 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:17 crc kubenswrapper[4755]: I1210 15:24:17.228097 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:17 crc kubenswrapper[4755]: I1210 15:24:17.228107 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:17Z","lastTransitionTime":"2025-12-10T15:24:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:17 crc kubenswrapper[4755]: I1210 15:24:17.331337 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:17 crc kubenswrapper[4755]: I1210 15:24:17.331385 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:17 crc kubenswrapper[4755]: I1210 15:24:17.331396 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:17 crc kubenswrapper[4755]: I1210 15:24:17.331413 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:17 crc kubenswrapper[4755]: I1210 15:24:17.331425 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:17Z","lastTransitionTime":"2025-12-10T15:24:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:17 crc kubenswrapper[4755]: I1210 15:24:17.434208 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:17 crc kubenswrapper[4755]: I1210 15:24:17.434260 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:17 crc kubenswrapper[4755]: I1210 15:24:17.434269 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:17 crc kubenswrapper[4755]: I1210 15:24:17.434284 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:17 crc kubenswrapper[4755]: I1210 15:24:17.434292 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:17Z","lastTransitionTime":"2025-12-10T15:24:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:17 crc kubenswrapper[4755]: I1210 15:24:17.537052 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:17 crc kubenswrapper[4755]: I1210 15:24:17.537095 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:17 crc kubenswrapper[4755]: I1210 15:24:17.537111 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:17 crc kubenswrapper[4755]: I1210 15:24:17.537127 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:17 crc kubenswrapper[4755]: I1210 15:24:17.537138 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:17Z","lastTransitionTime":"2025-12-10T15:24:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:17 crc kubenswrapper[4755]: I1210 15:24:17.640272 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:17 crc kubenswrapper[4755]: I1210 15:24:17.640332 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:17 crc kubenswrapper[4755]: I1210 15:24:17.640351 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:17 crc kubenswrapper[4755]: I1210 15:24:17.640373 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:17 crc kubenswrapper[4755]: I1210 15:24:17.640385 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:17Z","lastTransitionTime":"2025-12-10T15:24:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:17 crc kubenswrapper[4755]: I1210 15:24:17.742718 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:17 crc kubenswrapper[4755]: I1210 15:24:17.742999 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:17 crc kubenswrapper[4755]: I1210 15:24:17.743075 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:17 crc kubenswrapper[4755]: I1210 15:24:17.743143 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:17 crc kubenswrapper[4755]: I1210 15:24:17.743211 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:17Z","lastTransitionTime":"2025-12-10T15:24:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:17 crc kubenswrapper[4755]: I1210 15:24:17.757129 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 15:24:17 crc kubenswrapper[4755]: I1210 15:24:17.757326 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 15:24:17 crc kubenswrapper[4755]: I1210 15:24:17.757131 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5ctz" Dec 10 15:24:17 crc kubenswrapper[4755]: I1210 15:24:17.757134 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 15:24:17 crc kubenswrapper[4755]: E1210 15:24:17.757651 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 15:24:17 crc kubenswrapper[4755]: E1210 15:24:17.757713 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 15:24:17 crc kubenswrapper[4755]: E1210 15:24:17.757821 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 15:24:17 crc kubenswrapper[4755]: E1210 15:24:17.757775 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5ctz" podUID="17673130-8212-4f8f-8859-92774f0ee202" Dec 10 15:24:17 crc kubenswrapper[4755]: I1210 15:24:17.797785 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 10 15:24:17 crc kubenswrapper[4755]: I1210 15:24:17.810155 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wv8fh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d150a22e-c59a-4376-a5c8-db4085ea0be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58d106c5d1b9525ec821d009ca556449cd8d7f0e1b9c8ec7dd969df996e73625\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfw9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wv8fh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:17Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:17 crc kubenswrapper[4755]: I1210 15:24:17.811975 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 10 15:24:17 crc kubenswrapper[4755]: I1210 15:24:17.820622 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q5ctz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17673130-8212-4f8f-8859-92774f0ee202\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcscn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcscn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q5ctz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:17Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:17 crc kubenswrapper[4755]: I1210 15:24:17.844340 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fa03060-a6e0-4aad-9aa1-43b1a0d00c85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a823333ed5cb5de3988d25e50e4b7a0f9071c76fb39c22760a4f2acd5eb455d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d828f3f3b90a2dcb1c1908e6a686368af5b0d715b3251e4b8fcf3c8818ec75a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://172ab3fce08c8ba4095ff4095c89364a778644752bd7bb6c178d6e3ebcface69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfbb8e350ad18b78a6bcf6cfa4eb8f2fad90f97
0dba08a8b1b2026af6f255e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4b2bf0c88a16fd6b4b45a20730a92292895dc9f29ce756d347c302b25a8612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55da309288bc2c7ff1f22da0569d18155796700c200fe3ed508f6f8179756865\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55da309288bc2c7ff1f22da0569d18155796700c200fe3ed508f6f8179756865\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219c531368bb5379f3726d94ab0afc0f33eedbf91b0a4dbbe0757c6e232f79ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://219c531368bb5379f3726d94ab0afc0f33eedbf91b0a4dbbe0757c6e232f79ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f9da8f3d4f85ec83c16b71ecd983ff4f48049eb8e73461a2b46c4f66d71ed06e\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9da8f3d4f85ec83c16b71ecd983ff4f48049eb8e73461a2b46c4f66d71ed06e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:17Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:17 crc kubenswrapper[4755]: I1210 15:24:17.845898 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:17 crc kubenswrapper[4755]: I1210 15:24:17.845924 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:17 crc kubenswrapper[4755]: I1210 15:24:17.845936 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:17 crc kubenswrapper[4755]: I1210 15:24:17.845954 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:17 crc kubenswrapper[4755]: I1210 15:24:17.845966 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:17Z","lastTransitionTime":"2025-12-10T15:24:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:17 crc kubenswrapper[4755]: I1210 15:24:17.859769 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:17Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:17 crc kubenswrapper[4755]: I1210 15:24:17.874122 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c371b199c3643a68ea5eba935eb76a1b7e8a4027c9292a1324116ccfc14742ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:17Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:17 crc kubenswrapper[4755]: I1210 15:24:17.894751 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b1da51a-99c9-4f8e-920d-ce0973af6370\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6eb065dc6c0cc8914cb95553eb2683d894fb9a4e78ce7fac73bcce8d7f6cced9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e547993b9f2fa37bf924f909c47b62eb0cc02b659596b1cad9bbc42fdde8f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ba47683cc23d5b531a45f0658b6a9378650400b35b5372642b0430a5ac503f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a75407e83508af9adebb09c6466a966dd791d29f690c539656f9bd3396d7031\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://602b4e49987fa2cc6b54b822110aececbdddaf2bce8f27cce4ed906768d45791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://335bcab3a79f09796e97560365e1211fb30ddf288f4773c05ab353197add4365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://807b21c925067bde1df3babdc35da6c8805e10be
110c0282376dbd6e8be69dcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://194ac31f80a7b5a0fde0c62cb794234cb84e5b12e4f3db214ec3d16b4250797f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T15:23:59Z\\\",\\\"message\\\":\\\"ss event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1210 15:23:58.955006 6153 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1210 15:23:58.954962 6153 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI1210 15:23:58.955022 6153 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI1210 15:23:58.955028 6153 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI1210 15:23:58.955034 6153 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nF1210 15:23:58.955079 6153 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"ht\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://807b21c925067bde1df3babdc35da6c8805e10be110c0282376dbd6e8be69dcb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T15:24:13Z\\\",\\\"message\\\":\\\"s, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI1210 15:24:12.549599 6361 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1210 15:24:12.549769 6361 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1210 15:24:12.549565 6361 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-zl2tx in node crc\\\\nI1210 15:24:12.549780 6361 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-zl2tx after 0 failed attempt(s)\\\\nI1210 15:24:12.549784 6361 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-zl2tx\\\\nI1210 15:24:12.549311 6361 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-apiserver/apiserver_TCP_cluster\\\\\\\", UUID:\\\\\\\"d71b38eb-32af-4c0f-9490-7c317c111e3a\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-apiserver/apiserver\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]st\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T15:24:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59bf59d1b7fbc365a916fbbceca7ae30b7ebc754b34f2f7a34c2e21e1e1d2166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]
}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://375efb27cf6bef06a6a0ffc8f6b2963e1b9ffbca18b3c8e7b402bc552a411f6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://375efb27cf6bef06a6a0ffc8f6b2963e1b9ffbca18b3c8e7b402bc552a411f6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6lfvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:17Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:17 crc kubenswrapper[4755]: I1210 15:24:17.908050 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e6db97-7e22-4fec-924e-20e90f463887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef3871ddc16d2fc1b89a6f120390cf066424bd9ed4cca98f8dea4430c40c6c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08895c3ab20248c4bd70ccfae6442bdf8e76a602e605b043f0f00166624ada17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b2467645116856afa1e1e37501c7b453b03da193a96d1075886b85839a86ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16e58653dca08c9b33671279553975bca9507ffeeefea161119cec624813f167\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea18ef6c319336321d97c2a55449903b837b5a47da785da3ff5308244751943\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:17Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:17 crc kubenswrapper[4755]: I1210 15:24:17.927509 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:17Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:17 crc kubenswrapper[4755]: I1210 15:24:17.947442 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:17Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:17 crc kubenswrapper[4755]: I1210 15:24:17.948842 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:17 crc kubenswrapper[4755]: I1210 15:24:17.948878 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:17 crc kubenswrapper[4755]: I1210 15:24:17.948890 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:17 crc kubenswrapper[4755]: I1210 15:24:17.948908 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:17 crc kubenswrapper[4755]: I1210 15:24:17.948921 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:17Z","lastTransitionTime":"2025-12-10T15:24:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:17 crc kubenswrapper[4755]: I1210 15:24:17.966899 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zl2tx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"796da6d5-6ccd-4786-a03e-9a8e47a55031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de63a123c46563bd8cd07e669d192bc8b019a889b9bdb7af1b988872c8f1fc48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzg4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zl2tx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:17Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:17 crc kubenswrapper[4755]: I1210 15:24:17.983535 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b132a8b9-1c99-414d-8773-229bf36b305d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5a5a59e9f156fb791ec822c2d5efe3fc6ec0e84bfcb2b6f5da81396951984c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2mdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40c4bdaa23a60a665b8f565720d79b68cac62d40246be94fc6cd314b1bb3656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2mdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ggt8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:17Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:18 crc kubenswrapper[4755]: I1210 15:24:18.003584 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d089a8bddfa1f80d29011c7a6bab0a300f7dd44bdb2864f86951ebbb9ebea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:18Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:18 crc kubenswrapper[4755]: I1210 15:24:18.021943 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01ddc4056319db1f69268dcae192c3cb9db6c25284305803ae7588e59f77c346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae6bf174cdc9bde18d7c959e976454e73c1e67642f0158d365b79582f63f3cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:18Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:18 crc kubenswrapper[4755]: I1210 15:24:18.037130 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-82wnw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0a2f42c-a60e-4350-9ebb-c28d3cbfdad7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7af643111a7a5d1d78b0412b1621b5ffac6389760ea9190e26e9e5d1704eed4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xbzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://915c918562f8a69a28cbcf29e427bfdd94477193b98d5a2da9ad45cb4b44fa03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xbzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-82wnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:18Z is after 2025-08-24T17:21:41Z" Dec 10 
15:24:18 crc kubenswrapper[4755]: I1210 15:24:18.052843 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:18 crc kubenswrapper[4755]: I1210 15:24:18.052911 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:18 crc kubenswrapper[4755]: I1210 15:24:18.052945 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:18 crc kubenswrapper[4755]: I1210 15:24:18.052977 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:18 crc kubenswrapper[4755]: I1210 15:24:18.052996 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:18Z","lastTransitionTime":"2025-12-10T15:24:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:18 crc kubenswrapper[4755]: I1210 15:24:18.055398 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aed6bb9-78c2-410b-9c58-b60ab22a7bd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70d014e7227746c46b30f8f5a1f307a422d2fa0d4b98d98bfe5ba6217223489e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://104d730ea666ffd2d639ba7fc8d1ac5b444586788a62656fd260cb249f82b1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09dbec85547c7170bb9551e5657876814d48528e3047daf3547711a563d2b6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8bf1de0d8bdc0a20bc42ba5097d849ae0f507e5fbc18fb17b4ff3650e46ff0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:18Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:18 crc kubenswrapper[4755]: I1210 15:24:18.073168 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n8c6s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b56ae78-835a-45da-bc46-5adff2bdf9fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8957db8ccef7b3c449920471d345aa81ce9a7ab9be36b2350ec428aebec7ac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b9a1485f6d0c0e53fd3505623b2788476bc00abea2f11650386eac40e449bcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9a1485f6d0c0e53fd3505623b2788476bc00abea2f11650386eac40e449bcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a532015d196c588ad3dd6c43d9fa3772ba2a42bd1b2231f830b60adb0addab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a532015d196c588ad3dd6c43d9fa3772ba2a42bd1b2231f830b60adb0addab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://417e326f541e04df27d540c14ec572bb8f1d749a9e3dc88f483241d69f7d0679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://417e326f541e04df27d540c14ec572bb8f1d749a9e3dc88f483241d69f7d0679\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42867f7a1f6aceaf708ad650b7634bdafb3f850872c540d424916440014e7941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42867f7a1f6aceaf708ad650b7634bdafb3f850872c540d424916440014e7941\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f2d22bfc26958a3aad99de39f5c831d36c1ba4f563851baafc73735e80d5c91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f2d22bfc26958a3aad99de39f5c831d36c1ba4f563851baafc73735e80d5c91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://548d326fc5c14c16176d4fff3446d93c17322c11d2a34e9e42376b665de3848d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://548d326fc5c14c16176d4fff3446d93c17322c11d2a34e9e42376b665de3848d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n8c6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:18Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:18 crc kubenswrapper[4755]: I1210 15:24:18.087641 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qnmst" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1693fcf1-bef4-4f82-8dd8-f1797b03f5e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8fd6a2ab2b9557574951c0ebbccd663fe576262b0de2c3c655427c977f62d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hzdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qnmst\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:18Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:18 crc kubenswrapper[4755]: I1210 15:24:18.155340 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:18 crc kubenswrapper[4755]: I1210 15:24:18.155387 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:18 crc kubenswrapper[4755]: I1210 15:24:18.155401 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:18 crc kubenswrapper[4755]: I1210 15:24:18.155420 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:18 crc kubenswrapper[4755]: I1210 15:24:18.155435 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:18Z","lastTransitionTime":"2025-12-10T15:24:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:18 crc kubenswrapper[4755]: I1210 15:24:18.258520 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:18 crc kubenswrapper[4755]: I1210 15:24:18.258576 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:18 crc kubenswrapper[4755]: I1210 15:24:18.258587 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:18 crc kubenswrapper[4755]: I1210 15:24:18.258606 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:18 crc kubenswrapper[4755]: I1210 15:24:18.258619 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:18Z","lastTransitionTime":"2025-12-10T15:24:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:18 crc kubenswrapper[4755]: I1210 15:24:18.361379 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:18 crc kubenswrapper[4755]: I1210 15:24:18.361510 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:18 crc kubenswrapper[4755]: I1210 15:24:18.361535 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:18 crc kubenswrapper[4755]: I1210 15:24:18.361571 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:18 crc kubenswrapper[4755]: I1210 15:24:18.361594 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:18Z","lastTransitionTime":"2025-12-10T15:24:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:18 crc kubenswrapper[4755]: I1210 15:24:18.464947 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:18 crc kubenswrapper[4755]: I1210 15:24:18.464990 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:18 crc kubenswrapper[4755]: I1210 15:24:18.465001 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:18 crc kubenswrapper[4755]: I1210 15:24:18.465017 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:18 crc kubenswrapper[4755]: I1210 15:24:18.465029 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:18Z","lastTransitionTime":"2025-12-10T15:24:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:18 crc kubenswrapper[4755]: I1210 15:24:18.567207 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:18 crc kubenswrapper[4755]: I1210 15:24:18.567235 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:18 crc kubenswrapper[4755]: I1210 15:24:18.567242 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:18 crc kubenswrapper[4755]: I1210 15:24:18.567255 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:18 crc kubenswrapper[4755]: I1210 15:24:18.567263 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:18Z","lastTransitionTime":"2025-12-10T15:24:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:18 crc kubenswrapper[4755]: I1210 15:24:18.669782 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:18 crc kubenswrapper[4755]: I1210 15:24:18.669846 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:18 crc kubenswrapper[4755]: I1210 15:24:18.669857 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:18 crc kubenswrapper[4755]: I1210 15:24:18.669873 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:18 crc kubenswrapper[4755]: I1210 15:24:18.669886 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:18Z","lastTransitionTime":"2025-12-10T15:24:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:18 crc kubenswrapper[4755]: I1210 15:24:18.772500 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:18 crc kubenswrapper[4755]: I1210 15:24:18.772561 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:18 crc kubenswrapper[4755]: I1210 15:24:18.772578 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:18 crc kubenswrapper[4755]: I1210 15:24:18.772601 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:18 crc kubenswrapper[4755]: I1210 15:24:18.772618 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:18Z","lastTransitionTime":"2025-12-10T15:24:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:18 crc kubenswrapper[4755]: I1210 15:24:18.875756 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:18 crc kubenswrapper[4755]: I1210 15:24:18.875815 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:18 crc kubenswrapper[4755]: I1210 15:24:18.875832 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:18 crc kubenswrapper[4755]: I1210 15:24:18.875854 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:18 crc kubenswrapper[4755]: I1210 15:24:18.875870 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:18Z","lastTransitionTime":"2025-12-10T15:24:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:18 crc kubenswrapper[4755]: I1210 15:24:18.978517 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:18 crc kubenswrapper[4755]: I1210 15:24:18.978562 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:18 crc kubenswrapper[4755]: I1210 15:24:18.978572 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:18 crc kubenswrapper[4755]: I1210 15:24:18.978590 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:18 crc kubenswrapper[4755]: I1210 15:24:18.978603 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:18Z","lastTransitionTime":"2025-12-10T15:24:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:19 crc kubenswrapper[4755]: I1210 15:24:19.081347 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:19 crc kubenswrapper[4755]: I1210 15:24:19.081403 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:19 crc kubenswrapper[4755]: I1210 15:24:19.081418 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:19 crc kubenswrapper[4755]: I1210 15:24:19.081438 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:19 crc kubenswrapper[4755]: I1210 15:24:19.081452 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:19Z","lastTransitionTime":"2025-12-10T15:24:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:19 crc kubenswrapper[4755]: I1210 15:24:19.184189 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:19 crc kubenswrapper[4755]: I1210 15:24:19.184254 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:19 crc kubenswrapper[4755]: I1210 15:24:19.184275 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:19 crc kubenswrapper[4755]: I1210 15:24:19.184302 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:19 crc kubenswrapper[4755]: I1210 15:24:19.184321 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:19Z","lastTransitionTime":"2025-12-10T15:24:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:19 crc kubenswrapper[4755]: I1210 15:24:19.287999 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:19 crc kubenswrapper[4755]: I1210 15:24:19.288075 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:19 crc kubenswrapper[4755]: I1210 15:24:19.288090 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:19 crc kubenswrapper[4755]: I1210 15:24:19.288109 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:19 crc kubenswrapper[4755]: I1210 15:24:19.288123 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:19Z","lastTransitionTime":"2025-12-10T15:24:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:19 crc kubenswrapper[4755]: I1210 15:24:19.391177 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:19 crc kubenswrapper[4755]: I1210 15:24:19.391241 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:19 crc kubenswrapper[4755]: I1210 15:24:19.391260 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:19 crc kubenswrapper[4755]: I1210 15:24:19.391286 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:19 crc kubenswrapper[4755]: I1210 15:24:19.391304 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:19Z","lastTransitionTime":"2025-12-10T15:24:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:19 crc kubenswrapper[4755]: I1210 15:24:19.494413 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:19 crc kubenswrapper[4755]: I1210 15:24:19.495328 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:19 crc kubenswrapper[4755]: I1210 15:24:19.495524 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:19 crc kubenswrapper[4755]: I1210 15:24:19.495682 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:19 crc kubenswrapper[4755]: I1210 15:24:19.495822 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:19Z","lastTransitionTime":"2025-12-10T15:24:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:19 crc kubenswrapper[4755]: I1210 15:24:19.597915 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:19 crc kubenswrapper[4755]: I1210 15:24:19.597963 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:19 crc kubenswrapper[4755]: I1210 15:24:19.597976 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:19 crc kubenswrapper[4755]: I1210 15:24:19.597995 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:19 crc kubenswrapper[4755]: I1210 15:24:19.598009 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:19Z","lastTransitionTime":"2025-12-10T15:24:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:19 crc kubenswrapper[4755]: I1210 15:24:19.700701 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:19 crc kubenswrapper[4755]: I1210 15:24:19.700733 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:19 crc kubenswrapper[4755]: I1210 15:24:19.700740 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:19 crc kubenswrapper[4755]: I1210 15:24:19.700756 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:19 crc kubenswrapper[4755]: I1210 15:24:19.700766 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:19Z","lastTransitionTime":"2025-12-10T15:24:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:19 crc kubenswrapper[4755]: I1210 15:24:19.757685 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 15:24:19 crc kubenswrapper[4755]: I1210 15:24:19.757778 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 15:24:19 crc kubenswrapper[4755]: E1210 15:24:19.757839 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 15:24:19 crc kubenswrapper[4755]: I1210 15:24:19.757867 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5ctz" Dec 10 15:24:19 crc kubenswrapper[4755]: E1210 15:24:19.757994 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 15:24:19 crc kubenswrapper[4755]: E1210 15:24:19.758075 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5ctz" podUID="17673130-8212-4f8f-8859-92774f0ee202" Dec 10 15:24:19 crc kubenswrapper[4755]: I1210 15:24:19.758211 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 15:24:19 crc kubenswrapper[4755]: E1210 15:24:19.758342 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 15:24:19 crc kubenswrapper[4755]: I1210 15:24:19.803386 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:19 crc kubenswrapper[4755]: I1210 15:24:19.803415 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:19 crc kubenswrapper[4755]: I1210 15:24:19.803423 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:19 crc kubenswrapper[4755]: I1210 15:24:19.803437 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:19 crc kubenswrapper[4755]: I1210 15:24:19.803445 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:19Z","lastTransitionTime":"2025-12-10T15:24:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:19 crc kubenswrapper[4755]: I1210 15:24:19.905462 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:19 crc kubenswrapper[4755]: I1210 15:24:19.905544 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:19 crc kubenswrapper[4755]: I1210 15:24:19.905555 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:19 crc kubenswrapper[4755]: I1210 15:24:19.905572 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:19 crc kubenswrapper[4755]: I1210 15:24:19.905584 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:19Z","lastTransitionTime":"2025-12-10T15:24:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:20 crc kubenswrapper[4755]: I1210 15:24:20.007806 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:20 crc kubenswrapper[4755]: I1210 15:24:20.007838 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:20 crc kubenswrapper[4755]: I1210 15:24:20.007847 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:20 crc kubenswrapper[4755]: I1210 15:24:20.007861 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:20 crc kubenswrapper[4755]: I1210 15:24:20.007869 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:20Z","lastTransitionTime":"2025-12-10T15:24:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:20 crc kubenswrapper[4755]: I1210 15:24:20.111382 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:20 crc kubenswrapper[4755]: I1210 15:24:20.111438 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:20 crc kubenswrapper[4755]: I1210 15:24:20.111449 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:20 crc kubenswrapper[4755]: I1210 15:24:20.111481 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:20 crc kubenswrapper[4755]: I1210 15:24:20.111495 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:20Z","lastTransitionTime":"2025-12-10T15:24:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:20 crc kubenswrapper[4755]: I1210 15:24:20.213969 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:20 crc kubenswrapper[4755]: I1210 15:24:20.214202 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:20 crc kubenswrapper[4755]: I1210 15:24:20.214305 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:20 crc kubenswrapper[4755]: I1210 15:24:20.214383 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:20 crc kubenswrapper[4755]: I1210 15:24:20.214447 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:20Z","lastTransitionTime":"2025-12-10T15:24:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:20 crc kubenswrapper[4755]: I1210 15:24:20.317234 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:20 crc kubenswrapper[4755]: I1210 15:24:20.317828 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:20 crc kubenswrapper[4755]: I1210 15:24:20.317914 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:20 crc kubenswrapper[4755]: I1210 15:24:20.318002 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:20 crc kubenswrapper[4755]: I1210 15:24:20.318092 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:20Z","lastTransitionTime":"2025-12-10T15:24:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:20 crc kubenswrapper[4755]: I1210 15:24:20.421071 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:20 crc kubenswrapper[4755]: I1210 15:24:20.421127 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:20 crc kubenswrapper[4755]: I1210 15:24:20.421144 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:20 crc kubenswrapper[4755]: I1210 15:24:20.421167 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:20 crc kubenswrapper[4755]: I1210 15:24:20.421182 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:20Z","lastTransitionTime":"2025-12-10T15:24:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:20 crc kubenswrapper[4755]: I1210 15:24:20.524347 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:20 crc kubenswrapper[4755]: I1210 15:24:20.524412 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:20 crc kubenswrapper[4755]: I1210 15:24:20.524425 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:20 crc kubenswrapper[4755]: I1210 15:24:20.524446 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:20 crc kubenswrapper[4755]: I1210 15:24:20.524481 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:20Z","lastTransitionTime":"2025-12-10T15:24:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:20 crc kubenswrapper[4755]: I1210 15:24:20.626574 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:20 crc kubenswrapper[4755]: I1210 15:24:20.626816 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:20 crc kubenswrapper[4755]: I1210 15:24:20.626883 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:20 crc kubenswrapper[4755]: I1210 15:24:20.626969 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:20 crc kubenswrapper[4755]: I1210 15:24:20.627048 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:20Z","lastTransitionTime":"2025-12-10T15:24:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:20 crc kubenswrapper[4755]: I1210 15:24:20.729783 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:20 crc kubenswrapper[4755]: I1210 15:24:20.730027 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:20 crc kubenswrapper[4755]: I1210 15:24:20.730126 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:20 crc kubenswrapper[4755]: I1210 15:24:20.730201 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:20 crc kubenswrapper[4755]: I1210 15:24:20.730271 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:20Z","lastTransitionTime":"2025-12-10T15:24:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:20 crc kubenswrapper[4755]: I1210 15:24:20.832971 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:20 crc kubenswrapper[4755]: I1210 15:24:20.833017 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:20 crc kubenswrapper[4755]: I1210 15:24:20.833031 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:20 crc kubenswrapper[4755]: I1210 15:24:20.833047 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:20 crc kubenswrapper[4755]: I1210 15:24:20.833057 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:20Z","lastTransitionTime":"2025-12-10T15:24:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:20 crc kubenswrapper[4755]: I1210 15:24:20.935711 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:20 crc kubenswrapper[4755]: I1210 15:24:20.935777 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:20 crc kubenswrapper[4755]: I1210 15:24:20.935798 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:20 crc kubenswrapper[4755]: I1210 15:24:20.935820 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:20 crc kubenswrapper[4755]: I1210 15:24:20.935835 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:20Z","lastTransitionTime":"2025-12-10T15:24:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:21 crc kubenswrapper[4755]: I1210 15:24:21.038087 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:21 crc kubenswrapper[4755]: I1210 15:24:21.038136 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:21 crc kubenswrapper[4755]: I1210 15:24:21.038148 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:21 crc kubenswrapper[4755]: I1210 15:24:21.038165 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:21 crc kubenswrapper[4755]: I1210 15:24:21.038179 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:21Z","lastTransitionTime":"2025-12-10T15:24:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:21 crc kubenswrapper[4755]: I1210 15:24:21.141150 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:21 crc kubenswrapper[4755]: I1210 15:24:21.141198 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:21 crc kubenswrapper[4755]: I1210 15:24:21.141209 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:21 crc kubenswrapper[4755]: I1210 15:24:21.141227 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:21 crc kubenswrapper[4755]: I1210 15:24:21.141239 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:21Z","lastTransitionTime":"2025-12-10T15:24:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:21 crc kubenswrapper[4755]: I1210 15:24:21.244042 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:21 crc kubenswrapper[4755]: I1210 15:24:21.244120 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:21 crc kubenswrapper[4755]: I1210 15:24:21.244157 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:21 crc kubenswrapper[4755]: I1210 15:24:21.244190 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:21 crc kubenswrapper[4755]: I1210 15:24:21.244211 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:21Z","lastTransitionTime":"2025-12-10T15:24:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:21 crc kubenswrapper[4755]: I1210 15:24:21.346805 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:21 crc kubenswrapper[4755]: I1210 15:24:21.346863 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:21 crc kubenswrapper[4755]: I1210 15:24:21.346880 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:21 crc kubenswrapper[4755]: I1210 15:24:21.346904 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:21 crc kubenswrapper[4755]: I1210 15:24:21.346918 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:21Z","lastTransitionTime":"2025-12-10T15:24:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:21 crc kubenswrapper[4755]: I1210 15:24:21.449798 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:21 crc kubenswrapper[4755]: I1210 15:24:21.449843 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:21 crc kubenswrapper[4755]: I1210 15:24:21.449860 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:21 crc kubenswrapper[4755]: I1210 15:24:21.449881 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:21 crc kubenswrapper[4755]: I1210 15:24:21.449897 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:21Z","lastTransitionTime":"2025-12-10T15:24:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:21 crc kubenswrapper[4755]: I1210 15:24:21.552049 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:21 crc kubenswrapper[4755]: I1210 15:24:21.552123 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:21 crc kubenswrapper[4755]: I1210 15:24:21.552146 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:21 crc kubenswrapper[4755]: I1210 15:24:21.552174 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:21 crc kubenswrapper[4755]: I1210 15:24:21.552194 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:21Z","lastTransitionTime":"2025-12-10T15:24:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:21 crc kubenswrapper[4755]: I1210 15:24:21.655082 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:21 crc kubenswrapper[4755]: I1210 15:24:21.655124 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:21 crc kubenswrapper[4755]: I1210 15:24:21.655135 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:21 crc kubenswrapper[4755]: I1210 15:24:21.655150 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:21 crc kubenswrapper[4755]: I1210 15:24:21.655160 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:21Z","lastTransitionTime":"2025-12-10T15:24:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:21 crc kubenswrapper[4755]: I1210 15:24:21.756640 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 15:24:21 crc kubenswrapper[4755]: I1210 15:24:21.756729 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 15:24:21 crc kubenswrapper[4755]: E1210 15:24:21.756829 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 15:24:21 crc kubenswrapper[4755]: I1210 15:24:21.756662 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5ctz" Dec 10 15:24:21 crc kubenswrapper[4755]: I1210 15:24:21.756880 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 15:24:21 crc kubenswrapper[4755]: E1210 15:24:21.756975 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 15:24:21 crc kubenswrapper[4755]: E1210 15:24:21.757055 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5ctz" podUID="17673130-8212-4f8f-8859-92774f0ee202" Dec 10 15:24:21 crc kubenswrapper[4755]: E1210 15:24:21.757121 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 15:24:21 crc kubenswrapper[4755]: I1210 15:24:21.758390 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:21 crc kubenswrapper[4755]: I1210 15:24:21.758440 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:21 crc kubenswrapper[4755]: I1210 15:24:21.758453 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:21 crc kubenswrapper[4755]: I1210 15:24:21.758500 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:21 crc kubenswrapper[4755]: I1210 15:24:21.758513 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:21Z","lastTransitionTime":"2025-12-10T15:24:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:21 crc kubenswrapper[4755]: I1210 15:24:21.861518 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:21 crc kubenswrapper[4755]: I1210 15:24:21.861589 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:21 crc kubenswrapper[4755]: I1210 15:24:21.861611 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:21 crc kubenswrapper[4755]: I1210 15:24:21.861639 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:21 crc kubenswrapper[4755]: I1210 15:24:21.861659 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:21Z","lastTransitionTime":"2025-12-10T15:24:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:21 crc kubenswrapper[4755]: I1210 15:24:21.964720 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:21 crc kubenswrapper[4755]: I1210 15:24:21.964768 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:21 crc kubenswrapper[4755]: I1210 15:24:21.964782 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:21 crc kubenswrapper[4755]: I1210 15:24:21.964802 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:21 crc kubenswrapper[4755]: I1210 15:24:21.964817 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:21Z","lastTransitionTime":"2025-12-10T15:24:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:22 crc kubenswrapper[4755]: I1210 15:24:22.067541 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:22 crc kubenswrapper[4755]: I1210 15:24:22.067615 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:22 crc kubenswrapper[4755]: I1210 15:24:22.067645 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:22 crc kubenswrapper[4755]: I1210 15:24:22.067681 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:22 crc kubenswrapper[4755]: I1210 15:24:22.067702 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:22Z","lastTransitionTime":"2025-12-10T15:24:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:22 crc kubenswrapper[4755]: I1210 15:24:22.171615 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:22 crc kubenswrapper[4755]: I1210 15:24:22.171657 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:22 crc kubenswrapper[4755]: I1210 15:24:22.171669 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:22 crc kubenswrapper[4755]: I1210 15:24:22.171692 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:22 crc kubenswrapper[4755]: I1210 15:24:22.171709 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:22Z","lastTransitionTime":"2025-12-10T15:24:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:22 crc kubenswrapper[4755]: I1210 15:24:22.275113 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:22 crc kubenswrapper[4755]: I1210 15:24:22.275164 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:22 crc kubenswrapper[4755]: I1210 15:24:22.275174 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:22 crc kubenswrapper[4755]: I1210 15:24:22.275211 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:22 crc kubenswrapper[4755]: I1210 15:24:22.275222 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:22Z","lastTransitionTime":"2025-12-10T15:24:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:22 crc kubenswrapper[4755]: I1210 15:24:22.378253 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:22 crc kubenswrapper[4755]: I1210 15:24:22.378331 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:22 crc kubenswrapper[4755]: I1210 15:24:22.378345 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:22 crc kubenswrapper[4755]: I1210 15:24:22.378376 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:22 crc kubenswrapper[4755]: I1210 15:24:22.378406 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:22Z","lastTransitionTime":"2025-12-10T15:24:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:22 crc kubenswrapper[4755]: I1210 15:24:22.481868 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:22 crc kubenswrapper[4755]: I1210 15:24:22.481939 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:22 crc kubenswrapper[4755]: I1210 15:24:22.481961 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:22 crc kubenswrapper[4755]: I1210 15:24:22.481990 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:22 crc kubenswrapper[4755]: I1210 15:24:22.482013 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:22Z","lastTransitionTime":"2025-12-10T15:24:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:22 crc kubenswrapper[4755]: I1210 15:24:22.585578 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:22 crc kubenswrapper[4755]: I1210 15:24:22.585649 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:22 crc kubenswrapper[4755]: I1210 15:24:22.585671 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:22 crc kubenswrapper[4755]: I1210 15:24:22.585701 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:22 crc kubenswrapper[4755]: I1210 15:24:22.585723 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:22Z","lastTransitionTime":"2025-12-10T15:24:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:22 crc kubenswrapper[4755]: I1210 15:24:22.689007 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:22 crc kubenswrapper[4755]: I1210 15:24:22.689073 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:22 crc kubenswrapper[4755]: I1210 15:24:22.689097 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:22 crc kubenswrapper[4755]: I1210 15:24:22.689126 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:22 crc kubenswrapper[4755]: I1210 15:24:22.689147 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:22Z","lastTransitionTime":"2025-12-10T15:24:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:22 crc kubenswrapper[4755]: I1210 15:24:22.792899 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:22 crc kubenswrapper[4755]: I1210 15:24:22.792948 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:22 crc kubenswrapper[4755]: I1210 15:24:22.792964 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:22 crc kubenswrapper[4755]: I1210 15:24:22.792987 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:22 crc kubenswrapper[4755]: I1210 15:24:22.793003 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:22Z","lastTransitionTime":"2025-12-10T15:24:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:22 crc kubenswrapper[4755]: I1210 15:24:22.895795 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:22 crc kubenswrapper[4755]: I1210 15:24:22.895846 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:22 crc kubenswrapper[4755]: I1210 15:24:22.895855 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:22 crc kubenswrapper[4755]: I1210 15:24:22.895869 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:22 crc kubenswrapper[4755]: I1210 15:24:22.895879 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:22Z","lastTransitionTime":"2025-12-10T15:24:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:22 crc kubenswrapper[4755]: I1210 15:24:22.999455 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:22 crc kubenswrapper[4755]: I1210 15:24:22.999554 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:22 crc kubenswrapper[4755]: I1210 15:24:22.999572 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:22 crc kubenswrapper[4755]: I1210 15:24:22.999598 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:22 crc kubenswrapper[4755]: I1210 15:24:22.999615 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:22Z","lastTransitionTime":"2025-12-10T15:24:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:23 crc kubenswrapper[4755]: I1210 15:24:23.102808 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:23 crc kubenswrapper[4755]: I1210 15:24:23.102867 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:23 crc kubenswrapper[4755]: I1210 15:24:23.102880 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:23 crc kubenswrapper[4755]: I1210 15:24:23.102903 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:23 crc kubenswrapper[4755]: I1210 15:24:23.102917 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:23Z","lastTransitionTime":"2025-12-10T15:24:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:23 crc kubenswrapper[4755]: I1210 15:24:23.206383 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:23 crc kubenswrapper[4755]: I1210 15:24:23.206439 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:23 crc kubenswrapper[4755]: I1210 15:24:23.206450 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:23 crc kubenswrapper[4755]: I1210 15:24:23.206498 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:23 crc kubenswrapper[4755]: I1210 15:24:23.206516 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:23Z","lastTransitionTime":"2025-12-10T15:24:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:23 crc kubenswrapper[4755]: I1210 15:24:23.308645 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:23 crc kubenswrapper[4755]: I1210 15:24:23.308694 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:23 crc kubenswrapper[4755]: I1210 15:24:23.308710 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:23 crc kubenswrapper[4755]: I1210 15:24:23.308728 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:23 crc kubenswrapper[4755]: I1210 15:24:23.308738 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:23Z","lastTransitionTime":"2025-12-10T15:24:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:23 crc kubenswrapper[4755]: I1210 15:24:23.411507 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:23 crc kubenswrapper[4755]: I1210 15:24:23.411547 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:23 crc kubenswrapper[4755]: I1210 15:24:23.411558 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:23 crc kubenswrapper[4755]: I1210 15:24:23.411573 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:23 crc kubenswrapper[4755]: I1210 15:24:23.411604 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:23Z","lastTransitionTime":"2025-12-10T15:24:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:23 crc kubenswrapper[4755]: I1210 15:24:23.514393 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:23 crc kubenswrapper[4755]: I1210 15:24:23.514448 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:23 crc kubenswrapper[4755]: I1210 15:24:23.514508 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:23 crc kubenswrapper[4755]: I1210 15:24:23.514537 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:23 crc kubenswrapper[4755]: I1210 15:24:23.514568 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:23Z","lastTransitionTime":"2025-12-10T15:24:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:23 crc kubenswrapper[4755]: I1210 15:24:23.617427 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:23 crc kubenswrapper[4755]: I1210 15:24:23.617456 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:23 crc kubenswrapper[4755]: I1210 15:24:23.617488 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:23 crc kubenswrapper[4755]: I1210 15:24:23.617502 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:23 crc kubenswrapper[4755]: I1210 15:24:23.617511 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:23Z","lastTransitionTime":"2025-12-10T15:24:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:23 crc kubenswrapper[4755]: I1210 15:24:23.720160 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:23 crc kubenswrapper[4755]: I1210 15:24:23.720218 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:23 crc kubenswrapper[4755]: I1210 15:24:23.720235 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:23 crc kubenswrapper[4755]: I1210 15:24:23.720257 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:23 crc kubenswrapper[4755]: I1210 15:24:23.720273 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:23Z","lastTransitionTime":"2025-12-10T15:24:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:23 crc kubenswrapper[4755]: I1210 15:24:23.757007 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 15:24:23 crc kubenswrapper[4755]: I1210 15:24:23.757053 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 15:24:23 crc kubenswrapper[4755]: I1210 15:24:23.757007 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 15:24:23 crc kubenswrapper[4755]: I1210 15:24:23.757326 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5ctz" Dec 10 15:24:23 crc kubenswrapper[4755]: E1210 15:24:23.757326 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 15:24:23 crc kubenswrapper[4755]: E1210 15:24:23.757543 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 15:24:23 crc kubenswrapper[4755]: E1210 15:24:23.757698 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 15:24:23 crc kubenswrapper[4755]: E1210 15:24:23.757828 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5ctz" podUID="17673130-8212-4f8f-8859-92774f0ee202" Dec 10 15:24:23 crc kubenswrapper[4755]: I1210 15:24:23.785974 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e6db97-7e22-4fec-924e-20e90f463887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef3871ddc16d2fc1b89a6f120390cf066424bd9ed4cca98f8dea4430c40c6c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08895c3ab20248c4bd70ccfae6442bdf8e76a602e605b043f0f00166624ada17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b2467645116856afa1e1e37501c7b453b03da193a96d1075886b85839a86ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1
ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16e58653dca08c9b33671279553975bca9507ffeeefea161119cec624813f167\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea18ef6c319336321d97c2a55449903b837b5a47da785da3ff5308244751943\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-10T15:24:23Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:23 crc kubenswrapper[4755]: I1210 15:24:23.807176 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:23Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:23 crc kubenswrapper[4755]: I1210 15:24:23.823440 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:23 crc kubenswrapper[4755]: I1210 15:24:23.823508 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:23 crc kubenswrapper[4755]: I1210 15:24:23.823520 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:23 crc kubenswrapper[4755]: I1210 15:24:23.823537 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:23 crc kubenswrapper[4755]: I1210 15:24:23.823550 4755 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:23Z","lastTransitionTime":"2025-12-10T15:24:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:23 crc kubenswrapper[4755]: I1210 15:24:23.839487 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b1da51a-99c9-4f8e-920d-ce0973af6370\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6eb065dc6c0cc8914cb95553eb2683d894fb9a4e78ce7fac73bcce8d7f6cced9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e547993b9f2fa37bf924f909c47b62eb0cc02b659596b1cad9bbc42fdde8f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\
":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ba47683cc23d5b531a45f0658b6a9378650400b35b5372642b0430a5ac503f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a75407e83508af9adebb09c6466a966dd791d29f690c539656f9bd3396d7031\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://602b4e49987fa2cc6b54b822110aececbdddaf2bce8f27cce4ed906768d45791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://335bcab3a79f09796e97560365e1211fb30ddf288f4773c05ab353197add4365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://807b21c925067bde1df3babdc35da6c8805e10be110c0282376dbd6e8be69dcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://194ac31f80a7b5a0fde0c62cb794234cb84e5b12e4f3db214ec3d16b4250797f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T15:23:59Z\\\",\\\"message\\\":\\\"ss event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1210 15:23:58.955006 6153 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1210 15:23:58.954962 6153 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI1210 15:23:58.955022 6153 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI1210 15:23:58.955028 6153 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI1210 15:23:58.955034 6153 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nF1210 15:23:58.955079 6153 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call 
webhook: Post \\\\\\\"ht\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://807b21c925067bde1df3babdc35da6c8805e10be110c0282376dbd6e8be69dcb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T15:24:13Z\\\",\\\"message\\\":\\\"s, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI1210 15:24:12.549599 6361 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1210 15:24:12.549769 6361 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1210 15:24:12.549565 6361 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-zl2tx in node crc\\\\nI1210 15:24:12.549780 6361 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-zl2tx after 0 failed attempt(s)\\\\nI1210 15:24:12.549784 6361 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-zl2tx\\\\nI1210 15:24:12.549311 6361 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-apiserver/apiserver_TCP_cluster\\\\\\\", UUID:\\\\\\\"d71b38eb-32af-4c0f-9490-7c317c111e3a\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-apiserver/apiserver\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, 
Routers:[]st\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T15:24:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59bf59d1b7fbc365a916fbbceca7ae30b7ebc754b34f2f7a34c2e21e1e1d2166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://375efb27cf6bef06a6a0ffc8f6b2963e1b9ffbca18b3c8e7b402bc552a411f6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd
47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://375efb27cf6bef06a6a0ffc8f6b2963e1b9ffbca18b3c8e7b402bc552a411f6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6lfvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:23Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:23 crc kubenswrapper[4755]: I1210 15:24:23.859622 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81a321f6-db4b-46c6-bf24-1ad62da5992b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e68926ec899dad3c070810697fb078955e202f270724638719a7ea21d2debcb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://728ff8c02e5b0bea5514375b60c7a66f025e3c2a7163f0753c65d914b7873531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f1
1f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://865537b06aba1b617a1a16ef38c6b5be072501d1bbbd69e14eaaf2ce76c6b1da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54e0e014ecb31c499da20621a8b9e2ad6fdbdc2170a93636c828082b5b63b683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e0e014ecb31c499da20621a8b9e2ad6fdbdc2170a93636c828082b5b63b683\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:23Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:23 crc kubenswrapper[4755]: I1210 15:24:23.880827 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d089a8bddfa1f80d29011c7a6bab0a300f7dd44bdb2864f86951ebbb9ebea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:23Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:23 crc kubenswrapper[4755]: I1210 15:24:23.895415 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01ddc4056319db1f69268dcae192c3cb9db6c25284305803ae7588e59f77c346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae6bf174cdc9bde18d7c959e976454e73c1e67642f0158d365b79582f63f3cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:23Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:23 crc kubenswrapper[4755]: I1210 15:24:23.908062 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:23Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:23 crc kubenswrapper[4755]: I1210 15:24:23.921606 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zl2tx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"796da6d5-6ccd-4786-a03e-9a8e47a55031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de63a123c46563bd8cd07e669d192bc8b019a889b9bdb7af1b988872c8f1fc48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzg4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zl2tx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:23Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:23 crc kubenswrapper[4755]: I1210 15:24:23.925817 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:23 crc kubenswrapper[4755]: I1210 15:24:23.925847 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:23 crc kubenswrapper[4755]: I1210 15:24:23.925858 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:23 crc kubenswrapper[4755]: I1210 15:24:23.925875 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:23 crc kubenswrapper[4755]: I1210 15:24:23.925889 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:23Z","lastTransitionTime":"2025-12-10T15:24:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:23 crc kubenswrapper[4755]: I1210 15:24:23.932349 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b132a8b9-1c99-414d-8773-229bf36b305d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5a5a59e9f156fb791ec822c2d5efe3fc6ec0e84bfcb2b6f5da81396951984c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2mdm\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40c4bdaa23a60a665b8f565720d79b68cac62d40246be94fc6cd314b1bb3656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2mdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ggt8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:23Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:23 crc kubenswrapper[4755]: I1210 15:24:23.944916 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aed6bb9-78c2-410b-9c58-b60ab22a7bd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70d014e7227746c46b30f8f5a1f307a422d2fa0d4b98d98bfe5ba6217223489e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://104d730ea666ffd2d639ba7fc8d1ac5b444586788a62656fd260cb249f82b1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09dbec85547c7170bb9551e5657876814d48528e3047daf3547711a563d2b6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8bf1de0d8bdc0a20bc42ba5097d849ae0f507e5fbc18fb17b4ff3650e46ff0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:23Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:23 crc kubenswrapper[4755]: I1210 15:24:23.957127 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n8c6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b56ae78-835a-45da-bc46-5adff2bdf9fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8957db8ccef7b3c449920471d345aa81ce9a7ab9be36b2350ec428aebec7ac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b9a1485f6d0c0e53fd3505623b2788476bc00abea2f11650386
eac40e449bcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9a1485f6d0c0e53fd3505623b2788476bc00abea2f11650386eac40e449bcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a532015d196c588ad3dd6c43d9fa3772ba2a42bd1b2231f830b60adb0addab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a532015d196c588ad3dd6c43d9fa3772ba2a42bd1b2231f830b60adb0addab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://417e326f541e04df27d540c14ec572bb8f1d749a9e3dc88f483241d69f7d0679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://417e326f541e04df27d540c14ec572bb8f1d749a9e3dc88f483241d69f7d0679\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42867f7a1f6aceaf708ad650b7634bdafb3f850872c540d424916440014e7941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42867f7a1f6aceaf708ad650b7634bdafb3f850872c540d424916440014e7941\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f2d22bfc26958a3aad99de39f5c831d36c1ba4f563851baafc73735e80d5c91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f2d22bfc26958a3aad99de39f5c831d36c1ba4f563851baafc73735e80d5c91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://548d326fc5c14c16176d4fff3446d93c17322c11d2a34e9e42376b665de3848d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://548d326fc5c14c16176d4fff3446d93c17322c11d2a34e9e42376b665de3848d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n8c6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:23Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:23 crc kubenswrapper[4755]: I1210 15:24:23.965689 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qnmst" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1693fcf1-bef4-4f82-8dd8-f1797b03f5e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8fd6a2ab2b9557574951c0ebbccd663fe576262b0de2c3c655427c977f62d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hzdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:47Z\\\"}}\" for pod 
\"openshift-image-registry\"/\"node-ca-qnmst\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:23Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:23 crc kubenswrapper[4755]: I1210 15:24:23.975637 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-82wnw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0a2f42c-a60e-4350-9ebb-c28d3cbfdad7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7af643111a7a5d1d78b0412b1621b5ffac6389760ea9190e26e9e5d1704eed4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xbzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://915c918562f8a69a28cbcf29e427bfdd94477193b98d5a2da9ad45cb4b44fa03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xbzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126
.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-82wnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:23Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:23 crc kubenswrapper[4755]: I1210 15:24:23.993087 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fa03060-a6e0-4aad-9aa1-43b1a0d00c85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a823333ed5cb5de3988d25e50e4b7a0f9071c76fb39c22760a4f2acd5eb455d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d828f3f3b90a2dcb1c1908e6a686368af5b0d715b3251e4b8fcf3c8818ec75a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://172ab3fce08c8ba4095ff4095c89364a778644752bd7bb6c1
78d6e3ebcface69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfbb8e350ad18b78a6bcf6cfa4eb8f2fad90f970dba08a8b1b2026af6f255e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4b2bf0c88a16fd6b4b45a20730a92292895dc9f29ce756d347c302b25a8612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55da309288bc2c7ff1f22da0569d18155796700c200fe3ed508f6f8179756865\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55da309288bc2c7ff1f22da0569d18155796700c200fe3ed508f6f8179756865\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219c531368bb5379f3726d94ab0afc0f33eedbf91b0a4dbbe0757c6e232f79ff\\\",\\\"image\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://219c531368bb5379f3726d94ab0afc0f33eedbf91b0a4dbbe0757c6e232f79ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f9da8f3d4f85ec83c16b71ecd983ff4f48049eb8e73461a2b46c4f66d71ed06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9da8f3d4f85ec83c16b71ecd983ff4f48049eb8e73461a2b46c4f66d71ed06e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:23Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:24 crc kubenswrapper[4755]: I1210 15:24:24.002877 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:24Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:24 crc kubenswrapper[4755]: I1210 15:24:24.012053 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c371b199c3643a68ea5eba935eb76a1b7e8a4027c9292a1324116ccfc14742ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:24Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:24 crc kubenswrapper[4755]: I1210 15:24:24.022216 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wv8fh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d150a22e-c59a-4376-a5c8-db4085ea0be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58d106c5d1b9525ec821d009ca556449cd8d7f0e1b9c8ec7dd969df996e73625\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfw9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wv8fh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:24Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:24 crc kubenswrapper[4755]: I1210 15:24:24.028124 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:24 crc kubenswrapper[4755]: I1210 15:24:24.028169 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:24 crc kubenswrapper[4755]: I1210 15:24:24.028177 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:24 crc kubenswrapper[4755]: I1210 15:24:24.028194 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:24 crc 
kubenswrapper[4755]: I1210 15:24:24.028207 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:24Z","lastTransitionTime":"2025-12-10T15:24:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:24 crc kubenswrapper[4755]: I1210 15:24:24.031731 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q5ctz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17673130-8212-4f8f-8859-92774f0ee202\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcscn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcscn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q5ctz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:24Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:24 crc kubenswrapper[4755]: I1210 15:24:24.131365 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:24 crc kubenswrapper[4755]: I1210 15:24:24.131413 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:24 crc kubenswrapper[4755]: I1210 15:24:24.131425 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:24 crc kubenswrapper[4755]: I1210 15:24:24.131446 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:24 crc kubenswrapper[4755]: I1210 15:24:24.131460 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:24Z","lastTransitionTime":"2025-12-10T15:24:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:24 crc kubenswrapper[4755]: I1210 15:24:24.234082 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:24 crc kubenswrapper[4755]: I1210 15:24:24.234170 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:24 crc kubenswrapper[4755]: I1210 15:24:24.234181 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:24 crc kubenswrapper[4755]: I1210 15:24:24.234200 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:24 crc kubenswrapper[4755]: I1210 15:24:24.234243 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:24Z","lastTransitionTime":"2025-12-10T15:24:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:24 crc kubenswrapper[4755]: I1210 15:24:24.336972 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:24 crc kubenswrapper[4755]: I1210 15:24:24.337324 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:24 crc kubenswrapper[4755]: I1210 15:24:24.337398 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:24 crc kubenswrapper[4755]: I1210 15:24:24.337425 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:24 crc kubenswrapper[4755]: I1210 15:24:24.337440 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:24Z","lastTransitionTime":"2025-12-10T15:24:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:24 crc kubenswrapper[4755]: I1210 15:24:24.440695 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:24 crc kubenswrapper[4755]: I1210 15:24:24.440738 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:24 crc kubenswrapper[4755]: I1210 15:24:24.440750 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:24 crc kubenswrapper[4755]: I1210 15:24:24.440765 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:24 crc kubenswrapper[4755]: I1210 15:24:24.440775 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:24Z","lastTransitionTime":"2025-12-10T15:24:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:24 crc kubenswrapper[4755]: I1210 15:24:24.544754 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:24 crc kubenswrapper[4755]: I1210 15:24:24.544843 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:24 crc kubenswrapper[4755]: I1210 15:24:24.544868 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:24 crc kubenswrapper[4755]: I1210 15:24:24.544899 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:24 crc kubenswrapper[4755]: I1210 15:24:24.544924 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:24Z","lastTransitionTime":"2025-12-10T15:24:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:24 crc kubenswrapper[4755]: I1210 15:24:24.647640 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:24 crc kubenswrapper[4755]: I1210 15:24:24.647683 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:24 crc kubenswrapper[4755]: I1210 15:24:24.647875 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:24 crc kubenswrapper[4755]: I1210 15:24:24.647917 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:24 crc kubenswrapper[4755]: I1210 15:24:24.647937 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:24Z","lastTransitionTime":"2025-12-10T15:24:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:24 crc kubenswrapper[4755]: I1210 15:24:24.749799 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:24 crc kubenswrapper[4755]: I1210 15:24:24.749846 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:24 crc kubenswrapper[4755]: I1210 15:24:24.749857 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:24 crc kubenswrapper[4755]: I1210 15:24:24.749873 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:24 crc kubenswrapper[4755]: I1210 15:24:24.749886 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:24Z","lastTransitionTime":"2025-12-10T15:24:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:24 crc kubenswrapper[4755]: I1210 15:24:24.853332 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:24 crc kubenswrapper[4755]: I1210 15:24:24.853380 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:24 crc kubenswrapper[4755]: I1210 15:24:24.853400 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:24 crc kubenswrapper[4755]: I1210 15:24:24.853427 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:24 crc kubenswrapper[4755]: I1210 15:24:24.853448 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:24Z","lastTransitionTime":"2025-12-10T15:24:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:24 crc kubenswrapper[4755]: I1210 15:24:24.957181 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:24 crc kubenswrapper[4755]: I1210 15:24:24.957218 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:24 crc kubenswrapper[4755]: I1210 15:24:24.957228 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:24 crc kubenswrapper[4755]: I1210 15:24:24.957245 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:24 crc kubenswrapper[4755]: I1210 15:24:24.957256 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:24Z","lastTransitionTime":"2025-12-10T15:24:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:25 crc kubenswrapper[4755]: I1210 15:24:25.059946 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:25 crc kubenswrapper[4755]: I1210 15:24:25.059980 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:25 crc kubenswrapper[4755]: I1210 15:24:25.059988 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:25 crc kubenswrapper[4755]: I1210 15:24:25.060002 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:25 crc kubenswrapper[4755]: I1210 15:24:25.060011 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:25Z","lastTransitionTime":"2025-12-10T15:24:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:25 crc kubenswrapper[4755]: I1210 15:24:25.162100 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:25 crc kubenswrapper[4755]: I1210 15:24:25.162157 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:25 crc kubenswrapper[4755]: I1210 15:24:25.162170 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:25 crc kubenswrapper[4755]: I1210 15:24:25.162189 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:25 crc kubenswrapper[4755]: I1210 15:24:25.162201 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:25Z","lastTransitionTime":"2025-12-10T15:24:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:25 crc kubenswrapper[4755]: I1210 15:24:25.265216 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:25 crc kubenswrapper[4755]: I1210 15:24:25.265292 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:25 crc kubenswrapper[4755]: I1210 15:24:25.265313 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:25 crc kubenswrapper[4755]: I1210 15:24:25.265339 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:25 crc kubenswrapper[4755]: I1210 15:24:25.265357 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:25Z","lastTransitionTime":"2025-12-10T15:24:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:25 crc kubenswrapper[4755]: I1210 15:24:25.368193 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:25 crc kubenswrapper[4755]: I1210 15:24:25.368256 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:25 crc kubenswrapper[4755]: I1210 15:24:25.368273 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:25 crc kubenswrapper[4755]: I1210 15:24:25.368298 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:25 crc kubenswrapper[4755]: I1210 15:24:25.368318 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:25Z","lastTransitionTime":"2025-12-10T15:24:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:25 crc kubenswrapper[4755]: I1210 15:24:25.471335 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:25 crc kubenswrapper[4755]: I1210 15:24:25.471404 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:25 crc kubenswrapper[4755]: I1210 15:24:25.471421 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:25 crc kubenswrapper[4755]: I1210 15:24:25.471445 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:25 crc kubenswrapper[4755]: I1210 15:24:25.471486 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:25Z","lastTransitionTime":"2025-12-10T15:24:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:25 crc kubenswrapper[4755]: I1210 15:24:25.573541 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:25 crc kubenswrapper[4755]: I1210 15:24:25.573585 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:25 crc kubenswrapper[4755]: I1210 15:24:25.573595 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:25 crc kubenswrapper[4755]: I1210 15:24:25.573610 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:25 crc kubenswrapper[4755]: I1210 15:24:25.573621 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:25Z","lastTransitionTime":"2025-12-10T15:24:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:25 crc kubenswrapper[4755]: I1210 15:24:25.676531 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:25 crc kubenswrapper[4755]: I1210 15:24:25.676574 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:25 crc kubenswrapper[4755]: I1210 15:24:25.676587 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:25 crc kubenswrapper[4755]: I1210 15:24:25.676603 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:25 crc kubenswrapper[4755]: I1210 15:24:25.676615 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:25Z","lastTransitionTime":"2025-12-10T15:24:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:25 crc kubenswrapper[4755]: I1210 15:24:25.757623 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 15:24:25 crc kubenswrapper[4755]: I1210 15:24:25.757679 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 15:24:25 crc kubenswrapper[4755]: I1210 15:24:25.757687 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5ctz" Dec 10 15:24:25 crc kubenswrapper[4755]: E1210 15:24:25.757771 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 15:24:25 crc kubenswrapper[4755]: I1210 15:24:25.757782 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 15:24:25 crc kubenswrapper[4755]: E1210 15:24:25.757862 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 15:24:25 crc kubenswrapper[4755]: E1210 15:24:25.757925 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 15:24:25 crc kubenswrapper[4755]: E1210 15:24:25.757974 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5ctz" podUID="17673130-8212-4f8f-8859-92774f0ee202" Dec 10 15:24:25 crc kubenswrapper[4755]: I1210 15:24:25.778861 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:25 crc kubenswrapper[4755]: I1210 15:24:25.778932 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:25 crc kubenswrapper[4755]: I1210 15:24:25.778945 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:25 crc kubenswrapper[4755]: I1210 15:24:25.778966 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:25 crc kubenswrapper[4755]: I1210 15:24:25.778978 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:25Z","lastTransitionTime":"2025-12-10T15:24:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:25 crc kubenswrapper[4755]: I1210 15:24:25.918846 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:25 crc kubenswrapper[4755]: I1210 15:24:25.918887 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:25 crc kubenswrapper[4755]: I1210 15:24:25.918896 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:25 crc kubenswrapper[4755]: I1210 15:24:25.918926 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:25 crc kubenswrapper[4755]: I1210 15:24:25.918935 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:25Z","lastTransitionTime":"2025-12-10T15:24:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:26 crc kubenswrapper[4755]: I1210 15:24:26.021298 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:26 crc kubenswrapper[4755]: I1210 15:24:26.021358 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:26 crc kubenswrapper[4755]: I1210 15:24:26.021374 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:26 crc kubenswrapper[4755]: I1210 15:24:26.021395 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:26 crc kubenswrapper[4755]: I1210 15:24:26.021416 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:26Z","lastTransitionTime":"2025-12-10T15:24:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:26 crc kubenswrapper[4755]: I1210 15:24:26.123748 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:26 crc kubenswrapper[4755]: I1210 15:24:26.123782 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:26 crc kubenswrapper[4755]: I1210 15:24:26.123793 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:26 crc kubenswrapper[4755]: I1210 15:24:26.123807 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:26 crc kubenswrapper[4755]: I1210 15:24:26.123817 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:26Z","lastTransitionTime":"2025-12-10T15:24:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:26 crc kubenswrapper[4755]: I1210 15:24:26.226083 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:26 crc kubenswrapper[4755]: I1210 15:24:26.226141 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:26 crc kubenswrapper[4755]: I1210 15:24:26.226155 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:26 crc kubenswrapper[4755]: I1210 15:24:26.226172 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:26 crc kubenswrapper[4755]: I1210 15:24:26.226184 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:26Z","lastTransitionTime":"2025-12-10T15:24:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:26 crc kubenswrapper[4755]: I1210 15:24:26.328352 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:26 crc kubenswrapper[4755]: I1210 15:24:26.328384 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:26 crc kubenswrapper[4755]: I1210 15:24:26.328392 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:26 crc kubenswrapper[4755]: I1210 15:24:26.328405 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:26 crc kubenswrapper[4755]: I1210 15:24:26.328416 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:26Z","lastTransitionTime":"2025-12-10T15:24:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:26 crc kubenswrapper[4755]: I1210 15:24:26.431400 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:26 crc kubenswrapper[4755]: I1210 15:24:26.431442 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:26 crc kubenswrapper[4755]: I1210 15:24:26.431454 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:26 crc kubenswrapper[4755]: I1210 15:24:26.431485 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:26 crc kubenswrapper[4755]: I1210 15:24:26.431498 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:26Z","lastTransitionTime":"2025-12-10T15:24:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:26 crc kubenswrapper[4755]: I1210 15:24:26.534035 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:26 crc kubenswrapper[4755]: I1210 15:24:26.534066 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:26 crc kubenswrapper[4755]: I1210 15:24:26.534074 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:26 crc kubenswrapper[4755]: I1210 15:24:26.534107 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:26 crc kubenswrapper[4755]: I1210 15:24:26.534117 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:26Z","lastTransitionTime":"2025-12-10T15:24:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:26 crc kubenswrapper[4755]: I1210 15:24:26.637322 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:26 crc kubenswrapper[4755]: I1210 15:24:26.637402 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:26 crc kubenswrapper[4755]: I1210 15:24:26.637416 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:26 crc kubenswrapper[4755]: I1210 15:24:26.637438 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:26 crc kubenswrapper[4755]: I1210 15:24:26.637450 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:26Z","lastTransitionTime":"2025-12-10T15:24:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:26 crc kubenswrapper[4755]: I1210 15:24:26.740813 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:26 crc kubenswrapper[4755]: I1210 15:24:26.740853 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:26 crc kubenswrapper[4755]: I1210 15:24:26.740863 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:26 crc kubenswrapper[4755]: I1210 15:24:26.740883 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:26 crc kubenswrapper[4755]: I1210 15:24:26.740894 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:26Z","lastTransitionTime":"2025-12-10T15:24:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:26 crc kubenswrapper[4755]: I1210 15:24:26.843753 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:26 crc kubenswrapper[4755]: I1210 15:24:26.843834 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:26 crc kubenswrapper[4755]: I1210 15:24:26.843844 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:26 crc kubenswrapper[4755]: I1210 15:24:26.843859 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:26 crc kubenswrapper[4755]: I1210 15:24:26.843868 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:26Z","lastTransitionTime":"2025-12-10T15:24:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:26 crc kubenswrapper[4755]: I1210 15:24:26.894203 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:26 crc kubenswrapper[4755]: I1210 15:24:26.894236 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:26 crc kubenswrapper[4755]: I1210 15:24:26.894244 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:26 crc kubenswrapper[4755]: I1210 15:24:26.894257 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:26 crc kubenswrapper[4755]: I1210 15:24:26.894290 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:26Z","lastTransitionTime":"2025-12-10T15:24:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:26 crc kubenswrapper[4755]: E1210 15:24:26.905461 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ba232303-88d5-4931-b82e-34d9a0e5c06a\\\",\\\"systemUUID\\\":\\\"ebd59de0-c6b0-47c1-bc17-6f665dcf344d\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:26Z is after 
2025-08-24T17:21:41Z" Dec 10 15:24:26 crc kubenswrapper[4755]: I1210 15:24:26.908333 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:26 crc kubenswrapper[4755]: I1210 15:24:26.908360 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:26 crc kubenswrapper[4755]: I1210 15:24:26.908371 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:26 crc kubenswrapper[4755]: I1210 15:24:26.908386 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:26 crc kubenswrapper[4755]: I1210 15:24:26.908395 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:26Z","lastTransitionTime":"2025-12-10T15:24:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:26 crc kubenswrapper[4755]: E1210 15:24:26.919805 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ba232303-88d5-4931-b82e-34d9a0e5c06a\\\",\\\"systemUUID\\\":\\\"ebd59de0-c6b0-47c1-bc17-6f665dcf344d\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:26Z is after 
2025-08-24T17:21:41Z" Dec 10 15:24:26 crc kubenswrapper[4755]: I1210 15:24:26.923397 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:26 crc kubenswrapper[4755]: I1210 15:24:26.923435 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:26 crc kubenswrapper[4755]: I1210 15:24:26.923444 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:26 crc kubenswrapper[4755]: I1210 15:24:26.923459 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:26 crc kubenswrapper[4755]: I1210 15:24:26.923492 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:26Z","lastTransitionTime":"2025-12-10T15:24:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:26 crc kubenswrapper[4755]: E1210 15:24:26.934125 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ba232303-88d5-4931-b82e-34d9a0e5c06a\\\",\\\"systemUUID\\\":\\\"ebd59de0-c6b0-47c1-bc17-6f665dcf344d\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:26Z is after 
2025-08-24T17:21:41Z" Dec 10 15:24:26 crc kubenswrapper[4755]: I1210 15:24:26.937193 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:26 crc kubenswrapper[4755]: I1210 15:24:26.937220 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:26 crc kubenswrapper[4755]: I1210 15:24:26.937228 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:26 crc kubenswrapper[4755]: I1210 15:24:26.937241 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:26 crc kubenswrapper[4755]: I1210 15:24:26.937250 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:26Z","lastTransitionTime":"2025-12-10T15:24:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:26 crc kubenswrapper[4755]: E1210 15:24:26.947689 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ba232303-88d5-4931-b82e-34d9a0e5c06a\\\",\\\"systemUUID\\\":\\\"ebd59de0-c6b0-47c1-bc17-6f665dcf344d\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:26Z is after 
2025-08-24T17:21:41Z" Dec 10 15:24:26 crc kubenswrapper[4755]: I1210 15:24:26.950978 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:26 crc kubenswrapper[4755]: I1210 15:24:26.951008 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:26 crc kubenswrapper[4755]: I1210 15:24:26.951022 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:26 crc kubenswrapper[4755]: I1210 15:24:26.951039 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:26 crc kubenswrapper[4755]: I1210 15:24:26.951051 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:26Z","lastTransitionTime":"2025-12-10T15:24:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:26 crc kubenswrapper[4755]: E1210 15:24:26.962954 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ba232303-88d5-4931-b82e-34d9a0e5c06a\\\",\\\"systemUUID\\\":\\\"ebd59de0-c6b0-47c1-bc17-6f665dcf344d\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:26Z is after 
2025-08-24T17:21:41Z" Dec 10 15:24:26 crc kubenswrapper[4755]: E1210 15:24:26.963073 4755 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 10 15:24:26 crc kubenswrapper[4755]: I1210 15:24:26.964344 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:26 crc kubenswrapper[4755]: I1210 15:24:26.964382 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:26 crc kubenswrapper[4755]: I1210 15:24:26.964395 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:26 crc kubenswrapper[4755]: I1210 15:24:26.964409 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:26 crc kubenswrapper[4755]: I1210 15:24:26.964420 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:26Z","lastTransitionTime":"2025-12-10T15:24:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:27 crc kubenswrapper[4755]: I1210 15:24:27.066750 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:27 crc kubenswrapper[4755]: I1210 15:24:27.066822 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:27 crc kubenswrapper[4755]: I1210 15:24:27.066840 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:27 crc kubenswrapper[4755]: I1210 15:24:27.066866 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:27 crc kubenswrapper[4755]: I1210 15:24:27.066883 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:27Z","lastTransitionTime":"2025-12-10T15:24:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:27 crc kubenswrapper[4755]: I1210 15:24:27.169935 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:27 crc kubenswrapper[4755]: I1210 15:24:27.169972 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:27 crc kubenswrapper[4755]: I1210 15:24:27.169980 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:27 crc kubenswrapper[4755]: I1210 15:24:27.169995 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:27 crc kubenswrapper[4755]: I1210 15:24:27.170004 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:27Z","lastTransitionTime":"2025-12-10T15:24:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:27 crc kubenswrapper[4755]: I1210 15:24:27.272127 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:27 crc kubenswrapper[4755]: I1210 15:24:27.272178 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:27 crc kubenswrapper[4755]: I1210 15:24:27.272191 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:27 crc kubenswrapper[4755]: I1210 15:24:27.272209 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:27 crc kubenswrapper[4755]: I1210 15:24:27.272238 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:27Z","lastTransitionTime":"2025-12-10T15:24:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:27 crc kubenswrapper[4755]: I1210 15:24:27.374518 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:27 crc kubenswrapper[4755]: I1210 15:24:27.374595 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:27 crc kubenswrapper[4755]: I1210 15:24:27.374608 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:27 crc kubenswrapper[4755]: I1210 15:24:27.374628 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:27 crc kubenswrapper[4755]: I1210 15:24:27.374670 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:27Z","lastTransitionTime":"2025-12-10T15:24:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:27 crc kubenswrapper[4755]: I1210 15:24:27.477131 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:27 crc kubenswrapper[4755]: I1210 15:24:27.477170 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:27 crc kubenswrapper[4755]: I1210 15:24:27.477181 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:27 crc kubenswrapper[4755]: I1210 15:24:27.477197 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:27 crc kubenswrapper[4755]: I1210 15:24:27.477206 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:27Z","lastTransitionTime":"2025-12-10T15:24:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:27 crc kubenswrapper[4755]: I1210 15:24:27.579575 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:27 crc kubenswrapper[4755]: I1210 15:24:27.579623 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:27 crc kubenswrapper[4755]: I1210 15:24:27.579639 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:27 crc kubenswrapper[4755]: I1210 15:24:27.579662 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:27 crc kubenswrapper[4755]: I1210 15:24:27.579679 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:27Z","lastTransitionTime":"2025-12-10T15:24:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:27 crc kubenswrapper[4755]: I1210 15:24:27.682357 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:27 crc kubenswrapper[4755]: I1210 15:24:27.682420 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:27 crc kubenswrapper[4755]: I1210 15:24:27.682431 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:27 crc kubenswrapper[4755]: I1210 15:24:27.682450 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:27 crc kubenswrapper[4755]: I1210 15:24:27.682462 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:27Z","lastTransitionTime":"2025-12-10T15:24:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:27 crc kubenswrapper[4755]: I1210 15:24:27.757508 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 15:24:27 crc kubenswrapper[4755]: I1210 15:24:27.757575 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5ctz" Dec 10 15:24:27 crc kubenswrapper[4755]: I1210 15:24:27.757635 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 15:24:27 crc kubenswrapper[4755]: E1210 15:24:27.757763 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 15:24:27 crc kubenswrapper[4755]: I1210 15:24:27.757810 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 15:24:27 crc kubenswrapper[4755]: E1210 15:24:27.758024 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 15:24:27 crc kubenswrapper[4755]: E1210 15:24:27.758110 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 15:24:27 crc kubenswrapper[4755]: E1210 15:24:27.758309 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-q5ctz" podUID="17673130-8212-4f8f-8859-92774f0ee202" Dec 10 15:24:27 crc kubenswrapper[4755]: I1210 15:24:27.758592 4755 scope.go:117] "RemoveContainer" containerID="807b21c925067bde1df3babdc35da6c8805e10be110c0282376dbd6e8be69dcb" Dec 10 15:24:27 crc kubenswrapper[4755]: E1210 15:24:27.758851 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-6lfvk_openshift-ovn-kubernetes(4b1da51a-99c9-4f8e-920d-ce0973af6370)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" podUID="4b1da51a-99c9-4f8e-920d-ce0973af6370" Dec 10 15:24:27 crc kubenswrapper[4755]: I1210 15:24:27.771077 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e6db97-7e22-4fec-924e-20e90f463887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef3871ddc16d2fc1b89a6f120390cf066424bd9ed4cca98f8dea4430c40c6c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08895c3ab20248c4bd70ccfae6442bdf8e76a602e605b043f0f00166624ada17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b2467645116856afa1e1e37501c7b453b03da193a96d1075886b8
5839a86ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16e58653dca08c9b33671279553975bca9507ffeeefea161119cec624813f167\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea18ef6c319336321d97c2a55449903b837b5a47da785da3ff5308244751943\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:27Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:27 crc kubenswrapper[4755]: I1210 15:24:27.784504 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:27Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:27 crc kubenswrapper[4755]: I1210 15:24:27.784582 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:27 crc kubenswrapper[4755]: I1210 15:24:27.784615 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:27 crc kubenswrapper[4755]: I1210 15:24:27.784624 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:27 crc kubenswrapper[4755]: I1210 15:24:27.784640 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:27 crc kubenswrapper[4755]: I1210 15:24:27.784648 4755 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:27Z","lastTransitionTime":"2025-12-10T15:24:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:27 crc kubenswrapper[4755]: I1210 15:24:27.800305 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b1da51a-99c9-4f8e-920d-ce0973af6370\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6eb065dc6c0cc8914cb95553eb2683d894fb9a4e78ce7fac73bcce8d7f6cced9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e547993b9f2fa37bf924f909c47b62eb0cc02b659596b1cad9bbc42fdde8f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"nam
e\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ba47683cc23d5b531a45f0658b6a9378650400b35b5372642b0430a5ac503f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a75407e83508af9adebb09c6466a966dd791d29f690c539656f9bd3396d7031\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://602b4e49987fa2cc6b54b822110aececbdddaf2bce8f27cce4ed906768d45791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/
serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://335bcab3a79f09796e97560365e1211fb30ddf288f4773c05ab353197add4365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://807b21c925067bde1df3babdc35da6c8805e10be110c0282376dbd6e8be69dcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://807b21c925067bde1df3babdc35da6c8805e10be110c0282376dbd6e8be69dcb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T15:24:13Z\\\",\\\"message\\\":\\\"s, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI1210 15:24:12.549599 6361 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1210 15:24:12.549769 6361 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1210 15:24:12.549565 6361 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-zl2tx in node crc\\\\nI1210 15:24:12.549780 6361 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-zl2tx after 0 failed attempt(s)\\\\nI1210 15:24:12.549784 6361 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-zl2tx\\\\nI1210 15:24:12.549311 6361 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-apiserver/apiserver_TCP_cluster\\\\\\\", UUID:\\\\\\\"d71b38eb-32af-4c0f-9490-7c317c111e3a\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-apiserver/apiserver\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, 
SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]st\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T15:24:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-6lfvk_openshift-ovn-kubernetes(4b1da51a-99c9-4f8e-920d-ce0973af6370)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59bf59d1b7fbc365a916fbbceca7ae30b7ebc754b34f2f7a34c2e21e1e1d2166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\
\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://375efb27cf6bef06a6a0ffc8f6b2963e1b9ffbca18b3c8e7b402bc552a411f6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://375efb27cf6bef06a6a0ffc8f6b2963e1b9ffbca18b3c8e7b402bc552a411f6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6lfvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:27Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:27 crc kubenswrapper[4755]: I1210 15:24:27.810973 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b132a8b9-1c99-414d-8773-229bf36b305d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5a5a59e9f156fb791ec822c2d5efe3fc6ec0e84bfcb2b6f5da81396951984c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2mdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40c4bdaa23a60a665b8f565720d79b68cac62d40246be94fc6cd314b1bb3656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2mdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ggt8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:27Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:27 crc kubenswrapper[4755]: I1210 15:24:27.820392 4755 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81a321f6-db4b-46c6-bf24-1ad62da5992b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e68926ec899dad3c070810697fb078955e202f270724638719a7ea21d2debcb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://728ff8c02e5b0bea5514375b60c7a66f025e3c2a7163f0753c65d914b7873531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://865537b06aba1b617a1a16ef38c6b5be072501d1bbbd69e14eaaf2ce76c6b1da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://54e0e014ecb31c499da20621a8b9e2ad6fdbdc2170a93636c828082b5b63b683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e0e014ecb31c499da20621a8b9e2ad6fdbdc2170a93636c828082b5b63b683\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:27Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:27 crc kubenswrapper[4755]: I1210 15:24:27.833436 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d089a8bddfa1f80d29011c7a6bab0a300f7dd44bdb2864f86951ebbb9ebea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:27Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:27 crc kubenswrapper[4755]: I1210 15:24:27.847187 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01ddc4056319db1f69268dcae192c3cb9db6c25284305803ae7588e59f77c346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae6bf174cdc9bde18d7c959e976454e73c1e67642f0158d365b79582f63f3cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:27Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:27 crc kubenswrapper[4755]: I1210 15:24:27.860633 4755 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:27Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:27 crc kubenswrapper[4755]: I1210 15:24:27.871559 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zl2tx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"796da6d5-6ccd-4786-a03e-9a8e47a55031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de63a123c46563bd8cd07e669d192bc8b019a889b9bdb7af1b988872c8f1fc48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzg4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zl2tx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:27Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:27 crc kubenswrapper[4755]: I1210 15:24:27.887564 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:27 crc kubenswrapper[4755]: I1210 15:24:27.887623 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:27 crc kubenswrapper[4755]: I1210 15:24:27.887679 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:27 crc kubenswrapper[4755]: I1210 15:24:27.887704 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:27 crc kubenswrapper[4755]: I1210 15:24:27.887751 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:27Z","lastTransitionTime":"2025-12-10T15:24:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:27 crc kubenswrapper[4755]: I1210 15:24:27.889128 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aed6bb9-78c2-410b-9c58-b60ab22a7bd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70d014e7227746c46b30f8f5a1f307a422d2fa0d4b98d98bfe5ba6217223489e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://104d730ea666ffd2d639ba7fc8d1ac5b444586788a62656fd260cb249f82b1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc358257
71aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09dbec85547c7170bb9551e5657876814d48528e3047daf3547711a563d2b6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8bf1de0d8bdc0a20bc42ba5097d849ae0f507e5fbc18fb17b4ff3650e46ff0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:27Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:27 crc kubenswrapper[4755]: I1210 15:24:27.906594 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n8c6s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b56ae78-835a-45da-bc46-5adff2bdf9fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8957db8ccef7b3c449920471d345aa81ce9a7ab9be36b2350ec428aebec7ac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b9a1485f6d0c0e53fd3505623b2788476bc00abea2f11650386eac40e449bcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9a1485f6d0c0e53fd3505623b2788476bc00abea2f11650386eac40e449bcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a532015d196c588ad3dd6c43d9fa3772ba2a42bd1b2231f830b60adb0addab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a532015d196c588ad3dd6c43d9fa3772ba2a42bd1b2231f830b60adb0addab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://417e326f541e04df27d540c14ec572bb8f1d749a9e3dc88f483241d69f7d0679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://417e326f541e04df27d540c14ec572bb8f1d749a9e3dc88f483241d69f7d0679\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42867f7a1f6aceaf708ad650b7634bdafb3f850872c540d424916440014e7941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42867f7a1f6aceaf708ad650b7634bdafb3f850872c540d424916440014e7941\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f2d22bfc26958a3aad99de39f5c831d36c1ba4f563851baafc73735e80d5c91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f2d22bfc26958a3aad99de39f5c831d36c1ba4f563851baafc73735e80d5c91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://548d326fc5c14c16176d4fff3446d93c17322c11d2a34e9e42376b665de3848d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://548d326fc5c14c16176d4fff3446d93c17322c11d2a34e9e42376b665de3848d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n8c6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:27Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:27 crc kubenswrapper[4755]: I1210 15:24:27.917021 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qnmst" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1693fcf1-bef4-4f82-8dd8-f1797b03f5e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8fd6a2ab2b9557574951c0ebbccd663fe576262b0de2c3c655427c977f62d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hzdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qnmst\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:27Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:27 crc kubenswrapper[4755]: I1210 15:24:27.927228 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-82wnw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0a2f42c-a60e-4350-9ebb-c28d3cbfdad7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7af643111a7a5d1d78b0412b1621b5ffac6389760ea9190e26e9e5d1704eed4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xbzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://915c918562f8a69a28cbcf29e427bfdd94477193b98d5a2da9ad45cb4b44fa03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xbzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-82wnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:27Z is after 2025-08-24T17:21:41Z" Dec 10 
15:24:27 crc kubenswrapper[4755]: I1210 15:24:27.943426 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fa03060-a6e0-4aad-9aa1-43b1a0d00c85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a823333ed5cb5de3988d25e50e4b7a0f9071c76fb39c22760a4f2acd5eb455d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d828f3f3b90a2dcb1c1908e6a686368af5b0d715b3251e4b8fcf3c8818ec75a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://172ab3fce08c8ba4095ff4095c89364a778644752bd7bb6c178d6e3ebcface69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfbb8e350ad18b78a6bcf6cfa4eb8f2fad90f970dba08a8b1b2026af6f255e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4b2bf0c88a16fd6b4b45a20730a92292895dc9f29ce756d347c302b25a8612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55da309288bc2c7ff1f22da0569d18155796700c200fe3ed508f6f8179756865\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55da309288bc2c7ff1f22da0569d18155796700c200fe3ed508f6f8179756865\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219c531368bb5379f3726d94ab0afc0f33eedbf91b0a4dbbe0757c6e232f79ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://219c531368bb5379f3726d94ab0afc0f33eedbf91b0a4dbbe0757c6e232f79ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f9da8f3d4f85ec83c16b71ecd983ff4f48049eb8e73461a2b46c4f66d71ed06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9da8f3d4f85ec83c16b71ecd983ff4f48049eb8e73461a2b46c4f66d71ed06e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:27Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:27 crc kubenswrapper[4755]: I1210 15:24:27.954587 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:27Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:27 crc kubenswrapper[4755]: I1210 15:24:27.964562 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c371b199c3643a68ea5eba935eb76a1b7e8a4027c9292a1324116ccfc14742ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:27Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:27 crc kubenswrapper[4755]: I1210 15:24:27.973819 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wv8fh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d150a22e-c59a-4376-a5c8-db4085ea0be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58d106c5d1b9525ec821d009ca556449cd8d7f0e1b9c8ec7dd969df996e73625\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfw9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wv8fh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:27Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:27 crc kubenswrapper[4755]: I1210 15:24:27.983494 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q5ctz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17673130-8212-4f8f-8859-92774f0ee202\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcscn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcscn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q5ctz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:27Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:27 crc kubenswrapper[4755]: I1210 15:24:27.989832 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:27 crc kubenswrapper[4755]: I1210 15:24:27.989881 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:27 crc kubenswrapper[4755]: I1210 15:24:27.989895 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:27 crc kubenswrapper[4755]: I1210 15:24:27.989912 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:27 crc kubenswrapper[4755]: I1210 15:24:27.989925 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:27Z","lastTransitionTime":"2025-12-10T15:24:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:28 crc kubenswrapper[4755]: I1210 15:24:28.091893 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:28 crc kubenswrapper[4755]: I1210 15:24:28.091948 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:28 crc kubenswrapper[4755]: I1210 15:24:28.091963 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:28 crc kubenswrapper[4755]: I1210 15:24:28.091980 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:28 crc kubenswrapper[4755]: I1210 15:24:28.091992 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:28Z","lastTransitionTime":"2025-12-10T15:24:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:28 crc kubenswrapper[4755]: I1210 15:24:28.193157 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:28 crc kubenswrapper[4755]: I1210 15:24:28.193193 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:28 crc kubenswrapper[4755]: I1210 15:24:28.193201 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:28 crc kubenswrapper[4755]: I1210 15:24:28.193216 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:28 crc kubenswrapper[4755]: I1210 15:24:28.193226 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:28Z","lastTransitionTime":"2025-12-10T15:24:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:28 crc kubenswrapper[4755]: I1210 15:24:28.295153 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:28 crc kubenswrapper[4755]: I1210 15:24:28.295193 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:28 crc kubenswrapper[4755]: I1210 15:24:28.295201 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:28 crc kubenswrapper[4755]: I1210 15:24:28.295216 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:28 crc kubenswrapper[4755]: I1210 15:24:28.295225 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:28Z","lastTransitionTime":"2025-12-10T15:24:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:28 crc kubenswrapper[4755]: I1210 15:24:28.396944 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:28 crc kubenswrapper[4755]: I1210 15:24:28.396980 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:28 crc kubenswrapper[4755]: I1210 15:24:28.396991 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:28 crc kubenswrapper[4755]: I1210 15:24:28.397007 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:28 crc kubenswrapper[4755]: I1210 15:24:28.397018 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:28Z","lastTransitionTime":"2025-12-10T15:24:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:28 crc kubenswrapper[4755]: I1210 15:24:28.499306 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:28 crc kubenswrapper[4755]: I1210 15:24:28.499685 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:28 crc kubenswrapper[4755]: I1210 15:24:28.499821 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:28 crc kubenswrapper[4755]: I1210 15:24:28.499973 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:28 crc kubenswrapper[4755]: I1210 15:24:28.500091 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:28Z","lastTransitionTime":"2025-12-10T15:24:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:28 crc kubenswrapper[4755]: I1210 15:24:28.602747 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:28 crc kubenswrapper[4755]: I1210 15:24:28.602787 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:28 crc kubenswrapper[4755]: I1210 15:24:28.602796 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:28 crc kubenswrapper[4755]: I1210 15:24:28.602813 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:28 crc kubenswrapper[4755]: I1210 15:24:28.602822 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:28Z","lastTransitionTime":"2025-12-10T15:24:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:28 crc kubenswrapper[4755]: I1210 15:24:28.705046 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:28 crc kubenswrapper[4755]: I1210 15:24:28.705092 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:28 crc kubenswrapper[4755]: I1210 15:24:28.705106 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:28 crc kubenswrapper[4755]: I1210 15:24:28.705125 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:28 crc kubenswrapper[4755]: I1210 15:24:28.705138 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:28Z","lastTransitionTime":"2025-12-10T15:24:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:28 crc kubenswrapper[4755]: I1210 15:24:28.807483 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:28 crc kubenswrapper[4755]: I1210 15:24:28.807525 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:28 crc kubenswrapper[4755]: I1210 15:24:28.807533 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:28 crc kubenswrapper[4755]: I1210 15:24:28.807548 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:28 crc kubenswrapper[4755]: I1210 15:24:28.807556 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:28Z","lastTransitionTime":"2025-12-10T15:24:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:28 crc kubenswrapper[4755]: I1210 15:24:28.909926 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:28 crc kubenswrapper[4755]: I1210 15:24:28.909966 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:28 crc kubenswrapper[4755]: I1210 15:24:28.909980 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:28 crc kubenswrapper[4755]: I1210 15:24:28.909995 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:28 crc kubenswrapper[4755]: I1210 15:24:28.910006 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:28Z","lastTransitionTime":"2025-12-10T15:24:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:29 crc kubenswrapper[4755]: I1210 15:24:29.012512 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:29 crc kubenswrapper[4755]: I1210 15:24:29.012554 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:29 crc kubenswrapper[4755]: I1210 15:24:29.012566 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:29 crc kubenswrapper[4755]: I1210 15:24:29.012586 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:29 crc kubenswrapper[4755]: I1210 15:24:29.012600 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:29Z","lastTransitionTime":"2025-12-10T15:24:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:29 crc kubenswrapper[4755]: I1210 15:24:29.115292 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:29 crc kubenswrapper[4755]: I1210 15:24:29.115326 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:29 crc kubenswrapper[4755]: I1210 15:24:29.115335 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:29 crc kubenswrapper[4755]: I1210 15:24:29.115350 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:29 crc kubenswrapper[4755]: I1210 15:24:29.115360 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:29Z","lastTransitionTime":"2025-12-10T15:24:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:29 crc kubenswrapper[4755]: I1210 15:24:29.217084 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:29 crc kubenswrapper[4755]: I1210 15:24:29.217118 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:29 crc kubenswrapper[4755]: I1210 15:24:29.217129 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:29 crc kubenswrapper[4755]: I1210 15:24:29.217145 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:29 crc kubenswrapper[4755]: I1210 15:24:29.217155 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:29Z","lastTransitionTime":"2025-12-10T15:24:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:29 crc kubenswrapper[4755]: I1210 15:24:29.320016 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:29 crc kubenswrapper[4755]: I1210 15:24:29.320052 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:29 crc kubenswrapper[4755]: I1210 15:24:29.320061 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:29 crc kubenswrapper[4755]: I1210 15:24:29.320077 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:29 crc kubenswrapper[4755]: I1210 15:24:29.320086 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:29Z","lastTransitionTime":"2025-12-10T15:24:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:29 crc kubenswrapper[4755]: I1210 15:24:29.422975 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:29 crc kubenswrapper[4755]: I1210 15:24:29.423059 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:29 crc kubenswrapper[4755]: I1210 15:24:29.423081 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:29 crc kubenswrapper[4755]: I1210 15:24:29.423106 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:29 crc kubenswrapper[4755]: I1210 15:24:29.423123 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:29Z","lastTransitionTime":"2025-12-10T15:24:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:29 crc kubenswrapper[4755]: I1210 15:24:29.525685 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:29 crc kubenswrapper[4755]: I1210 15:24:29.525736 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:29 crc kubenswrapper[4755]: I1210 15:24:29.525752 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:29 crc kubenswrapper[4755]: I1210 15:24:29.525772 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:29 crc kubenswrapper[4755]: I1210 15:24:29.525790 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:29Z","lastTransitionTime":"2025-12-10T15:24:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:29 crc kubenswrapper[4755]: I1210 15:24:29.628512 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:29 crc kubenswrapper[4755]: I1210 15:24:29.628564 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:29 crc kubenswrapper[4755]: I1210 15:24:29.628576 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:29 crc kubenswrapper[4755]: I1210 15:24:29.628594 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:29 crc kubenswrapper[4755]: I1210 15:24:29.628606 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:29Z","lastTransitionTime":"2025-12-10T15:24:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:29 crc kubenswrapper[4755]: I1210 15:24:29.730800 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:29 crc kubenswrapper[4755]: I1210 15:24:29.730847 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:29 crc kubenswrapper[4755]: I1210 15:24:29.730860 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:29 crc kubenswrapper[4755]: I1210 15:24:29.730878 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:29 crc kubenswrapper[4755]: I1210 15:24:29.730890 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:29Z","lastTransitionTime":"2025-12-10T15:24:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:29 crc kubenswrapper[4755]: I1210 15:24:29.757203 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 15:24:29 crc kubenswrapper[4755]: I1210 15:24:29.757238 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 15:24:29 crc kubenswrapper[4755]: I1210 15:24:29.757220 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 15:24:29 crc kubenswrapper[4755]: I1210 15:24:29.757226 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5ctz" Dec 10 15:24:29 crc kubenswrapper[4755]: E1210 15:24:29.757338 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 15:24:29 crc kubenswrapper[4755]: E1210 15:24:29.757507 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 15:24:29 crc kubenswrapper[4755]: E1210 15:24:29.757550 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5ctz" podUID="17673130-8212-4f8f-8859-92774f0ee202" Dec 10 15:24:29 crc kubenswrapper[4755]: E1210 15:24:29.757595 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 15:24:29 crc kubenswrapper[4755]: I1210 15:24:29.833686 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:29 crc kubenswrapper[4755]: I1210 15:24:29.833724 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:29 crc kubenswrapper[4755]: I1210 15:24:29.833733 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:29 crc kubenswrapper[4755]: I1210 15:24:29.833746 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:29 crc kubenswrapper[4755]: I1210 15:24:29.833755 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:29Z","lastTransitionTime":"2025-12-10T15:24:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:29 crc kubenswrapper[4755]: I1210 15:24:29.936788 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:29 crc kubenswrapper[4755]: I1210 15:24:29.936865 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:29 crc kubenswrapper[4755]: I1210 15:24:29.936883 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:29 crc kubenswrapper[4755]: I1210 15:24:29.936910 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:29 crc kubenswrapper[4755]: I1210 15:24:29.936927 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:29Z","lastTransitionTime":"2025-12-10T15:24:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:30 crc kubenswrapper[4755]: I1210 15:24:30.013538 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/17673130-8212-4f8f-8859-92774f0ee202-metrics-certs\") pod \"network-metrics-daemon-q5ctz\" (UID: \"17673130-8212-4f8f-8859-92774f0ee202\") " pod="openshift-multus/network-metrics-daemon-q5ctz" Dec 10 15:24:30 crc kubenswrapper[4755]: E1210 15:24:30.013752 4755 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 10 15:24:30 crc kubenswrapper[4755]: E1210 15:24:30.013866 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/17673130-8212-4f8f-8859-92774f0ee202-metrics-certs podName:17673130-8212-4f8f-8859-92774f0ee202 nodeName:}" failed. No retries permitted until 2025-12-10 15:25:02.013840566 +0000 UTC m=+98.614724228 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/17673130-8212-4f8f-8859-92774f0ee202-metrics-certs") pod "network-metrics-daemon-q5ctz" (UID: "17673130-8212-4f8f-8859-92774f0ee202") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 10 15:24:30 crc kubenswrapper[4755]: I1210 15:24:30.039583 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:30 crc kubenswrapper[4755]: I1210 15:24:30.039634 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:30 crc kubenswrapper[4755]: I1210 15:24:30.039645 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:30 crc kubenswrapper[4755]: I1210 15:24:30.039663 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:30 crc kubenswrapper[4755]: I1210 15:24:30.039674 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:30Z","lastTransitionTime":"2025-12-10T15:24:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:30 crc kubenswrapper[4755]: I1210 15:24:30.142067 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:30 crc kubenswrapper[4755]: I1210 15:24:30.142129 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:30 crc kubenswrapper[4755]: I1210 15:24:30.142147 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:30 crc kubenswrapper[4755]: I1210 15:24:30.142171 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:30 crc kubenswrapper[4755]: I1210 15:24:30.142188 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:30Z","lastTransitionTime":"2025-12-10T15:24:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:30 crc kubenswrapper[4755]: I1210 15:24:30.244899 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:30 crc kubenswrapper[4755]: I1210 15:24:30.244926 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:30 crc kubenswrapper[4755]: I1210 15:24:30.244935 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:30 crc kubenswrapper[4755]: I1210 15:24:30.244949 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:30 crc kubenswrapper[4755]: I1210 15:24:30.244959 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:30Z","lastTransitionTime":"2025-12-10T15:24:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:30 crc kubenswrapper[4755]: I1210 15:24:30.347656 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:30 crc kubenswrapper[4755]: I1210 15:24:30.347724 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:30 crc kubenswrapper[4755]: I1210 15:24:30.347732 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:30 crc kubenswrapper[4755]: I1210 15:24:30.347747 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:30 crc kubenswrapper[4755]: I1210 15:24:30.347756 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:30Z","lastTransitionTime":"2025-12-10T15:24:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:30 crc kubenswrapper[4755]: I1210 15:24:30.449771 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:30 crc kubenswrapper[4755]: I1210 15:24:30.449806 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:30 crc kubenswrapper[4755]: I1210 15:24:30.449818 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:30 crc kubenswrapper[4755]: I1210 15:24:30.449834 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:30 crc kubenswrapper[4755]: I1210 15:24:30.449846 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:30Z","lastTransitionTime":"2025-12-10T15:24:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:30 crc kubenswrapper[4755]: I1210 15:24:30.552612 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:30 crc kubenswrapper[4755]: I1210 15:24:30.552654 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:30 crc kubenswrapper[4755]: I1210 15:24:30.552662 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:30 crc kubenswrapper[4755]: I1210 15:24:30.552677 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:30 crc kubenswrapper[4755]: I1210 15:24:30.552686 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:30Z","lastTransitionTime":"2025-12-10T15:24:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:30 crc kubenswrapper[4755]: I1210 15:24:30.655640 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:30 crc kubenswrapper[4755]: I1210 15:24:30.655700 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:30 crc kubenswrapper[4755]: I1210 15:24:30.655712 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:30 crc kubenswrapper[4755]: I1210 15:24:30.655733 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:30 crc kubenswrapper[4755]: I1210 15:24:30.655745 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:30Z","lastTransitionTime":"2025-12-10T15:24:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:30 crc kubenswrapper[4755]: I1210 15:24:30.758142 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:30 crc kubenswrapper[4755]: I1210 15:24:30.758191 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:30 crc kubenswrapper[4755]: I1210 15:24:30.758202 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:30 crc kubenswrapper[4755]: I1210 15:24:30.758221 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:30 crc kubenswrapper[4755]: I1210 15:24:30.758232 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:30Z","lastTransitionTime":"2025-12-10T15:24:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:30 crc kubenswrapper[4755]: I1210 15:24:30.860727 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:30 crc kubenswrapper[4755]: I1210 15:24:30.860766 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:30 crc kubenswrapper[4755]: I1210 15:24:30.860774 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:30 crc kubenswrapper[4755]: I1210 15:24:30.860793 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:30 crc kubenswrapper[4755]: I1210 15:24:30.860802 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:30Z","lastTransitionTime":"2025-12-10T15:24:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:30 crc kubenswrapper[4755]: I1210 15:24:30.963626 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:30 crc kubenswrapper[4755]: I1210 15:24:30.963684 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:30 crc kubenswrapper[4755]: I1210 15:24:30.963703 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:30 crc kubenswrapper[4755]: I1210 15:24:30.963727 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:30 crc kubenswrapper[4755]: I1210 15:24:30.963744 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:30Z","lastTransitionTime":"2025-12-10T15:24:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:31 crc kubenswrapper[4755]: I1210 15:24:31.065971 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:31 crc kubenswrapper[4755]: I1210 15:24:31.066001 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:31 crc kubenswrapper[4755]: I1210 15:24:31.066010 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:31 crc kubenswrapper[4755]: I1210 15:24:31.066025 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:31 crc kubenswrapper[4755]: I1210 15:24:31.066034 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:31Z","lastTransitionTime":"2025-12-10T15:24:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:31 crc kubenswrapper[4755]: I1210 15:24:31.168281 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:31 crc kubenswrapper[4755]: I1210 15:24:31.168311 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:31 crc kubenswrapper[4755]: I1210 15:24:31.168320 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:31 crc kubenswrapper[4755]: I1210 15:24:31.168333 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:31 crc kubenswrapper[4755]: I1210 15:24:31.168343 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:31Z","lastTransitionTime":"2025-12-10T15:24:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:31 crc kubenswrapper[4755]: I1210 15:24:31.195056 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zl2tx_796da6d5-6ccd-4786-a03e-9a8e47a55031/kube-multus/0.log" Dec 10 15:24:31 crc kubenswrapper[4755]: I1210 15:24:31.195098 4755 generic.go:334] "Generic (PLEG): container finished" podID="796da6d5-6ccd-4786-a03e-9a8e47a55031" containerID="de63a123c46563bd8cd07e669d192bc8b019a889b9bdb7af1b988872c8f1fc48" exitCode=1 Dec 10 15:24:31 crc kubenswrapper[4755]: I1210 15:24:31.195123 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zl2tx" event={"ID":"796da6d5-6ccd-4786-a03e-9a8e47a55031","Type":"ContainerDied","Data":"de63a123c46563bd8cd07e669d192bc8b019a889b9bdb7af1b988872c8f1fc48"} Dec 10 15:24:31 crc kubenswrapper[4755]: I1210 15:24:31.195445 4755 scope.go:117] "RemoveContainer" containerID="de63a123c46563bd8cd07e669d192bc8b019a889b9bdb7af1b988872c8f1fc48" Dec 10 15:24:31 crc kubenswrapper[4755]: I1210 15:24:31.206962 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81a321f6-db4b-46c6-bf24-1ad62da5992b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e68926ec899dad3c070810697fb078955e202f270724638719a7ea21d2debcb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://728ff8c02e5b0bea5514375b60c7a66f025e3c2a7163f0753c65d914b7873531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://865537b06aba1b617a1a16ef38c6b5be072501d1bbbd69e14eaaf2ce76c6b1da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54e0e014ecb31c499da20621a8b9e2ad6fdbdc2170a93636c828082b5b63b683\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e0e014ecb31c499da20621a8b9e2ad6fdbdc2170a93636c828082b5b63b683\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:31Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:31 crc kubenswrapper[4755]: I1210 15:24:31.219433 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d089a8bddfa1f80d29011c7a6bab0a300f7dd44bdb2864f86951ebbb9ebea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:31Z is after 
2025-08-24T17:21:41Z" Dec 10 15:24:31 crc kubenswrapper[4755]: I1210 15:24:31.237716 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01ddc4056319db1f69268dcae192c3cb9db6c25284305803ae7588e59f77c346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae6bf174cdc9bde18d7c959e976454e73c1e67642f0158d365b79582f63f3cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:31Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:31 crc kubenswrapper[4755]: I1210 15:24:31.252526 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:31Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:31 crc kubenswrapper[4755]: I1210 15:24:31.267614 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zl2tx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"796da6d5-6ccd-4786-a03e-9a8e47a55031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de63a123c46563bd8cd07e669d192bc8b019a889b9bdb7af1b988872c8f1fc48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de63a123c46563bd8cd07e669d192bc8b019a889b9bdb7af1b988872c8f1fc48\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T15:24:30Z\\\",\\\"message\\\":\\\"2025-12-10T15:23:45+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5295941e-ea46-4d40-a35d-bd2fd673e118\\\\n2025-12-10T15:23:45+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5295941e-ea46-4d40-a35d-bd2fd673e118 to /host/opt/cni/bin/\\\\n2025-12-10T15:23:45Z [verbose] multus-daemon started\\\\n2025-12-10T15:23:45Z [verbose] Readiness Indicator file check\\\\n2025-12-10T15:24:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzg4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-zl2tx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:31Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:31 crc kubenswrapper[4755]: I1210 15:24:31.270374 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:31 crc kubenswrapper[4755]: I1210 15:24:31.270403 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:31 crc kubenswrapper[4755]: I1210 15:24:31.270412 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:31 crc kubenswrapper[4755]: I1210 15:24:31.270428 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:31 crc kubenswrapper[4755]: I1210 15:24:31.270437 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:31Z","lastTransitionTime":"2025-12-10T15:24:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:31 crc kubenswrapper[4755]: I1210 15:24:31.280722 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b132a8b9-1c99-414d-8773-229bf36b305d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5a5a59e9f156fb791ec822c2d5efe3fc6ec0e84bfcb2b6f5da81396951984c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.i
o/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2mdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40c4bdaa23a60a665b8f565720d79b68cac62d40246be94fc6cd314b1bb3656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2mdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ggt8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:31Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:31 crc kubenswrapper[4755]: I1210 15:24:31.294843 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aed6bb9-78c2-410b-9c58-b60ab22a7bd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70d014e7227746c46b30f8f5a1f307a422d2fa0d4b98d98bfe5ba6217223489e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://104d730ea666ffd2d639ba7fc8d1ac5b444586788a62656fd260cb249f82b1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09dbec85547c7170bb9551e5657876814d48528e3047daf3547711a563d2b6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8bf1de0d8bdc0a20bc42ba5097d849ae0f507e5fbc18fb17b4ff3650e46ff0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:31Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:31 crc kubenswrapper[4755]: I1210 15:24:31.312007 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n8c6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b56ae78-835a-45da-bc46-5adff2bdf9fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8957db8ccef7b3c449920471d345aa81ce9a7ab9be36b2350ec428aebec7ac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b9a1485f6d0c0e53fd3505623b2788476bc00abea2f11650386
eac40e449bcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9a1485f6d0c0e53fd3505623b2788476bc00abea2f11650386eac40e449bcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a532015d196c588ad3dd6c43d9fa3772ba2a42bd1b2231f830b60adb0addab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a532015d196c588ad3dd6c43d9fa3772ba2a42bd1b2231f830b60adb0addab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://417e326f541e04df27d540c14ec572bb8f1d749a9e3dc88f483241d69f7d0679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://417e326f541e04df27d540c14ec572bb8f1d749a9e3dc88f483241d69f7d0679\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42867f7a1f6aceaf708ad650b7634bdafb3f850872c540d424916440014e7941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42867f7a1f6aceaf708ad650b7634bdafb3f850872c540d424916440014e7941\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f2d22bfc26958a3aad99de39f5c831d36c1ba4f563851baafc73735e80d5c91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f2d22bfc26958a3aad99de39f5c831d36c1ba4f563851baafc73735e80d5c91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://548d326fc5c14c16176d4fff3446d93c17322c11d2a34e9e42376b665de3848d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://548d326fc5c14c16176d4fff3446d93c17322c11d2a34e9e42376b665de3848d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n8c6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:31Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:31 crc kubenswrapper[4755]: I1210 15:24:31.324239 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qnmst" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1693fcf1-bef4-4f82-8dd8-f1797b03f5e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8fd6a2ab2b9557574951c0ebbccd663fe576262b0de2c3c655427c977f62d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hzdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:47Z\\\"}}\" for pod 
\"openshift-image-registry\"/\"node-ca-qnmst\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:31Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:31 crc kubenswrapper[4755]: I1210 15:24:31.335373 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-82wnw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0a2f42c-a60e-4350-9ebb-c28d3cbfdad7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7af643111a7a5d1d78b0412b1621b5ffac6389760ea9190e26e9e5d1704eed4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xbzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://915c918562f8a69a28cbcf29e427bfdd94477193b98d5a2da9ad45cb4b44fa03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xbzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126
.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-82wnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:31Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:31 crc kubenswrapper[4755]: I1210 15:24:31.358736 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fa03060-a6e0-4aad-9aa1-43b1a0d00c85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a823333ed5cb5de3988d25e50e4b7a0f9071c76fb39c22760a4f2acd5eb455d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d828f3f3b90a2dcb1c1908e6a686368af5b0d715b3251e4b8fcf3c8818ec75a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://172ab3fce08c8ba4095ff4095c89364a778644752bd7bb6c1
78d6e3ebcface69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfbb8e350ad18b78a6bcf6cfa4eb8f2fad90f970dba08a8b1b2026af6f255e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4b2bf0c88a16fd6b4b45a20730a92292895dc9f29ce756d347c302b25a8612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55da309288bc2c7ff1f22da0569d18155796700c200fe3ed508f6f8179756865\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55da309288bc2c7ff1f22da0569d18155796700c200fe3ed508f6f8179756865\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219c531368bb5379f3726d94ab0afc0f33eedbf91b0a4dbbe0757c6e232f79ff\\\",\\\"image\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://219c531368bb5379f3726d94ab0afc0f33eedbf91b0a4dbbe0757c6e232f79ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f9da8f3d4f85ec83c16b71ecd983ff4f48049eb8e73461a2b46c4f66d71ed06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9da8f3d4f85ec83c16b71ecd983ff4f48049eb8e73461a2b46c4f66d71ed06e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:31Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:31 crc kubenswrapper[4755]: I1210 15:24:31.371781 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:31Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:31 crc kubenswrapper[4755]: I1210 15:24:31.372872 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:31 crc kubenswrapper[4755]: I1210 15:24:31.372900 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:31 crc kubenswrapper[4755]: I1210 15:24:31.372909 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:31 crc kubenswrapper[4755]: I1210 15:24:31.372925 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:31 crc kubenswrapper[4755]: I1210 15:24:31.372936 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:31Z","lastTransitionTime":"2025-12-10T15:24:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:31 crc kubenswrapper[4755]: I1210 15:24:31.383653 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c371b199c3643a68ea5eba935eb76a1b7e8a4027c9292a1324116ccfc14742ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:31Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:31 crc kubenswrapper[4755]: I1210 15:24:31.392710 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wv8fh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d150a22e-c59a-4376-a5c8-db4085ea0be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58d106c5d1b9525ec821d009ca556449cd8d7f0e1b9c8ec7dd969df996e73625\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfw9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wv8fh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:31Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:31 crc kubenswrapper[4755]: I1210 15:24:31.404537 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q5ctz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17673130-8212-4f8f-8859-92774f0ee202\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcscn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcscn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q5ctz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:31Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:31 crc kubenswrapper[4755]: I1210 15:24:31.421982 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e6db97-7e22-4fec-924e-20e90f463887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef3871ddc16d2fc1b89a6f120390cf066424bd9ed4cca98f8dea4430c40c6c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08895c3ab20248c4bd70ccfae6442bdf8e76a602e605b043f0f00166624ada17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b2467645116856afa1e1e37501c7b453b03da193a96d1075886b85839a86ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16e58653dca08c9b33671279553975bca9507ffeeefea161119cec624813f167\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea18ef6c319336321d97c2a55449903b837b5a47da785da3ff5308244751943\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:31Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:31 crc kubenswrapper[4755]: I1210 15:24:31.434287 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:31Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:31 crc kubenswrapper[4755]: I1210 15:24:31.451872 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b1da51a-99c9-4f8e-920d-ce0973af6370\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6eb065dc6c0cc8914cb95553eb2683d894fb9a4e78ce7fac73bcce8d7f6cced9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e547993b9f2fa37bf924f909c47b62eb0cc02b659596b1cad9bbc42fdde8f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ba47683cc23d5b531a45f0658b6a9378650400b35b5372642b0430a5ac503f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a75407e83508af9adebb09c6466a966dd791d29f690c539656f9bd3396d7031\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://602b4e49987fa2cc6b54b822110aececbdddaf2bce8f27cce4ed906768d45791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://335bcab3a79f09796e97560365e1211fb30ddf288f4773c05ab353197add4365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://807b21c925067bde1df3babdc35da6c8805e10be
110c0282376dbd6e8be69dcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://807b21c925067bde1df3babdc35da6c8805e10be110c0282376dbd6e8be69dcb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T15:24:13Z\\\",\\\"message\\\":\\\"s, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI1210 15:24:12.549599 6361 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1210 15:24:12.549769 6361 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1210 15:24:12.549565 6361 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-zl2tx in node crc\\\\nI1210 15:24:12.549780 6361 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-zl2tx after 0 failed attempt(s)\\\\nI1210 15:24:12.549784 6361 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-zl2tx\\\\nI1210 15:24:12.549311 6361 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-apiserver/apiserver_TCP_cluster\\\\\\\", UUID:\\\\\\\"d71b38eb-32af-4c0f-9490-7c317c111e3a\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-apiserver/apiserver\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]st\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T15:24:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6lfvk_openshift-ovn-kubernetes(4b1da51a-99c9-4f8e-920d-ce0973af6370)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59bf59d1b7fbc365a916fbbceca7ae30b7ebc754b34f2f7a34c2e21e1e1d2166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://375efb27cf6bef06a6a0ffc8f6b2963e1b9ffbca18b3c8e7b402bc552a411f6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://375efb27cf6bef06a6a0ffc8f6b2963e1b9ffbca18b3c8e7b402bc552a411f6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6lfvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:31Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:31 crc kubenswrapper[4755]: I1210 15:24:31.475630 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:31 crc kubenswrapper[4755]: I1210 15:24:31.475707 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:31 crc kubenswrapper[4755]: I1210 15:24:31.475725 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:31 crc kubenswrapper[4755]: I1210 15:24:31.475808 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:31 crc kubenswrapper[4755]: I1210 15:24:31.475835 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:31Z","lastTransitionTime":"2025-12-10T15:24:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:31 crc kubenswrapper[4755]: I1210 15:24:31.577676 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:31 crc kubenswrapper[4755]: I1210 15:24:31.577719 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:31 crc kubenswrapper[4755]: I1210 15:24:31.577734 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:31 crc kubenswrapper[4755]: I1210 15:24:31.577753 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:31 crc kubenswrapper[4755]: I1210 15:24:31.577769 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:31Z","lastTransitionTime":"2025-12-10T15:24:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:31 crc kubenswrapper[4755]: I1210 15:24:31.680073 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:31 crc kubenswrapper[4755]: I1210 15:24:31.680125 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:31 crc kubenswrapper[4755]: I1210 15:24:31.680135 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:31 crc kubenswrapper[4755]: I1210 15:24:31.680151 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:31 crc kubenswrapper[4755]: I1210 15:24:31.680164 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:31Z","lastTransitionTime":"2025-12-10T15:24:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:31 crc kubenswrapper[4755]: I1210 15:24:31.757527 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 15:24:31 crc kubenswrapper[4755]: I1210 15:24:31.757575 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5ctz" Dec 10 15:24:31 crc kubenswrapper[4755]: I1210 15:24:31.757611 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 15:24:31 crc kubenswrapper[4755]: I1210 15:24:31.757590 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 15:24:31 crc kubenswrapper[4755]: E1210 15:24:31.757676 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 15:24:31 crc kubenswrapper[4755]: E1210 15:24:31.757778 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 15:24:31 crc kubenswrapper[4755]: E1210 15:24:31.758090 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5ctz" podUID="17673130-8212-4f8f-8859-92774f0ee202" Dec 10 15:24:31 crc kubenswrapper[4755]: E1210 15:24:31.758211 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 15:24:31 crc kubenswrapper[4755]: I1210 15:24:31.782583 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:31 crc kubenswrapper[4755]: I1210 15:24:31.782969 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:31 crc kubenswrapper[4755]: I1210 15:24:31.783067 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:31 crc kubenswrapper[4755]: I1210 15:24:31.783172 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:31 crc kubenswrapper[4755]: I1210 15:24:31.783256 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:31Z","lastTransitionTime":"2025-12-10T15:24:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:31 crc kubenswrapper[4755]: I1210 15:24:31.885342 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:31 crc kubenswrapper[4755]: I1210 15:24:31.885395 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:31 crc kubenswrapper[4755]: I1210 15:24:31.885413 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:31 crc kubenswrapper[4755]: I1210 15:24:31.885436 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:31 crc kubenswrapper[4755]: I1210 15:24:31.885454 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:31Z","lastTransitionTime":"2025-12-10T15:24:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:31 crc kubenswrapper[4755]: I1210 15:24:31.987720 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:31 crc kubenswrapper[4755]: I1210 15:24:31.987756 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:31 crc kubenswrapper[4755]: I1210 15:24:31.987767 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:31 crc kubenswrapper[4755]: I1210 15:24:31.987782 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:31 crc kubenswrapper[4755]: I1210 15:24:31.987791 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:31Z","lastTransitionTime":"2025-12-10T15:24:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:32 crc kubenswrapper[4755]: I1210 15:24:32.090233 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:32 crc kubenswrapper[4755]: I1210 15:24:32.090264 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:32 crc kubenswrapper[4755]: I1210 15:24:32.090276 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:32 crc kubenswrapper[4755]: I1210 15:24:32.090291 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:32 crc kubenswrapper[4755]: I1210 15:24:32.090301 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:32Z","lastTransitionTime":"2025-12-10T15:24:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:32 crc kubenswrapper[4755]: I1210 15:24:32.193178 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:32 crc kubenswrapper[4755]: I1210 15:24:32.193218 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:32 crc kubenswrapper[4755]: I1210 15:24:32.193228 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:32 crc kubenswrapper[4755]: I1210 15:24:32.193244 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:32 crc kubenswrapper[4755]: I1210 15:24:32.193255 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:32Z","lastTransitionTime":"2025-12-10T15:24:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:32 crc kubenswrapper[4755]: I1210 15:24:32.199610 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zl2tx_796da6d5-6ccd-4786-a03e-9a8e47a55031/kube-multus/0.log" Dec 10 15:24:32 crc kubenswrapper[4755]: I1210 15:24:32.199656 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zl2tx" event={"ID":"796da6d5-6ccd-4786-a03e-9a8e47a55031","Type":"ContainerStarted","Data":"2e0f974f9ba614dcaef08cf7168b77eeee007dfe65cc4e32df9b8e45005ff4ed"} Dec 10 15:24:32 crc kubenswrapper[4755]: I1210 15:24:32.213500 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:32Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:32 crc kubenswrapper[4755]: I1210 15:24:32.224364 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zl2tx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"796da6d5-6ccd-4786-a03e-9a8e47a55031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e0f974f9ba614dcaef08cf7168b77eeee007dfe65cc4e32df9b8e45005ff4ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de63a123c46563bd8cd07e669d192bc8b019a889b9bdb7af1b988872c8f1fc48\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T15:24:30Z\\\",\\\"message\\\":\\\"2025-12-10T15:23:45+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5295941e-ea46-4d40-a35d-bd2fd673e118\\\\n2025-12-10T15:23:45+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5295941e-ea46-4d40-a35d-bd2fd673e118 to /host/opt/cni/bin/\\\\n2025-12-10T15:23:45Z [verbose] multus-daemon started\\\\n2025-12-10T15:23:45Z [verbose] Readiness Indicator file check\\\\n2025-12-10T15:24:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:24:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzg4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zl2tx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:32Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:32 crc kubenswrapper[4755]: I1210 15:24:32.233400 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b132a8b9-1c99-414d-8773-229bf36b305d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5a5a59e9f156fb791ec822c2d5efe3fc6ec0e84bfcb2b6f5da81396951984c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2mdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40c4bdaa23a60a665b8f565720d79b68cac62d40246be94fc6cd314b1bb3656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2mdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ggt8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:32Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:32 crc kubenswrapper[4755]: I1210 15:24:32.245056 4755 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81a321f6-db4b-46c6-bf24-1ad62da5992b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e68926ec899dad3c070810697fb078955e202f270724638719a7ea21d2debcb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://728ff8c02e5b0bea5514375b60c7a66f025e3c2a7163f0753c65d914b7873531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://865537b06aba1b617a1a16ef38c6b5be072501d1bbbd69e14eaaf2ce76c6b1da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://54e0e014ecb31c499da20621a8b9e2ad6fdbdc2170a93636c828082b5b63b683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e0e014ecb31c499da20621a8b9e2ad6fdbdc2170a93636c828082b5b63b683\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:32Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:32 crc kubenswrapper[4755]: I1210 15:24:32.255982 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d089a8bddfa1f80d29011c7a6bab0a300f7dd44bdb2864f86951ebbb9ebea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:32Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:32 crc kubenswrapper[4755]: I1210 15:24:32.266718 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01ddc4056319db1f69268dcae192c3cb9db6c25284305803ae7588e59f77c346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae6bf174cdc9bde18d7c959e976454e73c1e67642f0158d365b79582f63f3cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:32Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:32 crc kubenswrapper[4755]: I1210 15:24:32.277314 4755 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-82wnw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0a2f42c-a60e-4350-9ebb-c28d3cbfdad7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7af643111a7a5d1d78b0412b1621b5ffac6389760ea9190e26e9e5d1704eed4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xbzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://915c918562f8a69a28cbcf29e427bfdd94477193b98d5a2da9ad45cb4b44fa03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xbzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-82wnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-10T15:24:32Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:32 crc kubenswrapper[4755]: I1210 15:24:32.291925 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aed6bb9-78c2-410b-9c58-b60ab22a7bd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70d014e7227746c46b30f8f5a1f307a422d2fa0d4b98d98bfe5ba6217223489e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://104d730ea666ffd2d639ba7fc8d1ac5b444586788a62656fd260cb249f82b1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09dbec85547c7170bb9551e5657876814d48528e3047daf3547711a563d2b6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resour
ces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8bf1de0d8bdc0a20bc42ba5097d849ae0f507e5fbc18fb17b4ff3650e46ff0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:32Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:32 crc kubenswrapper[4755]: I1210 15:24:32.296604 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:32 crc kubenswrapper[4755]: I1210 15:24:32.296647 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:32 crc kubenswrapper[4755]: I1210 15:24:32.296658 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:32 crc kubenswrapper[4755]: I1210 15:24:32.296674 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:32 crc kubenswrapper[4755]: I1210 15:24:32.296684 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:32Z","lastTransitionTime":"2025-12-10T15:24:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:32 crc kubenswrapper[4755]: I1210 15:24:32.310247 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n8c6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b56ae78-835a-45da-bc46-5adff2bdf9fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8957db8ccef7b3c449920471d345aa81ce9a7ab9be36b2350ec428aebec7ac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b9a1485f6d0c0e53fd3505623b2788476bc00abea2f11650386eac40e449bcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9a1485f6d0c0e53fd3505623b2788476bc00abea2f11650386eac40e449bcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a532015d196c588ad3dd6c43d9fa3772ba2a42bd1b2231f830b60adb0addab0\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a532015d196c588ad3dd6c43d9fa3772ba2a42bd1b2231f830b60adb0addab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://417e326f541e04df27d540c14ec572bb8f1d749a9e3dc88f483241d69f7d0679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://417e326f541e04df27d540c14ec572bb8f1d749a9e3dc88f483241d69f7d0679\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42867f7a1f6aceaf708ad650b7634bdafb3f850872c540d424916440014e7941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42867f7a1f6aceaf708ad650b7634bdafb3f850872c540d424916440014e7941\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f2d22bfc26958a3aad99de39f5c831d36c1ba4f563851baafc73735e80d5c91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f2d22bfc26958a3aad99de39f5c831d36c1ba4f563851baafc73735e80d5c91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://548d326fc5c14c16176d4fff3446d93c17322c11d2a34e9e42376b665de3848d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://548d326fc5c14c16176d4fff3446d93c17322c11d2a34e9e42376b665de3848d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n8c6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:32Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:32 crc kubenswrapper[4755]: I1210 15:24:32.322095 4755 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-qnmst" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1693fcf1-bef4-4f82-8dd8-f1797b03f5e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8fd6a2ab2b9557574951c0ebbccd663fe576262b0de2c3c655427c977f62d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hzdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qnmst\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:32Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:32 crc kubenswrapper[4755]: I1210 15:24:32.331182 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wv8fh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d150a22e-c59a-4376-a5c8-db4085ea0be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58d106c5d1b9525ec821d009ca556449cd8d7f0e1b9c8ec7dd969df996e73625\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfw9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wv8fh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:32Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:32 crc kubenswrapper[4755]: I1210 15:24:32.340073 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q5ctz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17673130-8212-4f8f-8859-92774f0ee202\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcscn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcscn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q5ctz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:32Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:32 crc kubenswrapper[4755]: I1210 15:24:32.360539 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fa03060-a6e0-4aad-9aa1-43b1a0d00c85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a823333ed5cb5de3988d25e50e4b7a0f9071c76fb39c22760a4f2acd5eb455d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d828f3f3b90a2dcb1c1908e6a686368af5b0d715b3251e4b8fcf3c8818ec75a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://172ab3fce08c8ba4095ff4095c89364a778644752bd7bb6c178d6e3ebcface69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfbb8e350ad18b78a6bcf6cfa4eb8f2fad90f97
0dba08a8b1b2026af6f255e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4b2bf0c88a16fd6b4b45a20730a92292895dc9f29ce756d347c302b25a8612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55da309288bc2c7ff1f22da0569d18155796700c200fe3ed508f6f8179756865\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55da309288bc2c7ff1f22da0569d18155796700c200fe3ed508f6f8179756865\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219c531368bb5379f3726d94ab0afc0f33eedbf91b0a4dbbe0757c6e232f79ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://219c531368bb5379f3726d94ab0afc0f33eedbf91b0a4dbbe0757c6e232f79ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f9da8f3d4f85ec83c16b71ecd983ff4f48049eb8e73461a2b46c4f66d71ed06e\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9da8f3d4f85ec83c16b71ecd983ff4f48049eb8e73461a2b46c4f66d71ed06e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:32Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:32 crc kubenswrapper[4755]: I1210 15:24:32.373206 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:32Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:32 crc kubenswrapper[4755]: I1210 15:24:32.383725 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c371b199c3643a68ea5eba935eb76a1b7e8a4027c9292a1324116ccfc14742ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:32Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:32 crc kubenswrapper[4755]: I1210 15:24:32.398446 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:32 crc kubenswrapper[4755]: I1210 15:24:32.398499 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 10 15:24:32 crc kubenswrapper[4755]: I1210 15:24:32.398509 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:32 crc kubenswrapper[4755]: I1210 15:24:32.398523 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:32 crc kubenswrapper[4755]: I1210 15:24:32.398536 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:32Z","lastTransitionTime":"2025-12-10T15:24:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:32 crc kubenswrapper[4755]: I1210 15:24:32.405052 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b1da51a-99c9-4f8e-920d-ce0973af6370\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6eb065dc6c0cc8914cb95553eb2683d894fb9a4e78ce7fac73bcce8d7f6cced9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e547993b9f2fa37bf924f909c47b62eb0cc02b659596b1cad9bbc42fdde8f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ba47683cc23d5b531a45f0658b6a9378650400b35b5372642b0430a5ac503f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a75407e83508af9adebb09c6466a966dd791d29f690c539656f9bd3396d7031\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://602b4e49987fa2cc6b54b822110aececbdddaf2bce8f27cce4ed906768d45791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://335bcab3a79f09796e97560365e1211fb30ddf288f4773c05ab353197add4365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://807b21c925067bde1df3babdc35da6c8805e10be
110c0282376dbd6e8be69dcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://807b21c925067bde1df3babdc35da6c8805e10be110c0282376dbd6e8be69dcb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T15:24:13Z\\\",\\\"message\\\":\\\"s, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI1210 15:24:12.549599 6361 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1210 15:24:12.549769 6361 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1210 15:24:12.549565 6361 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-zl2tx in node crc\\\\nI1210 15:24:12.549780 6361 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-zl2tx after 0 failed attempt(s)\\\\nI1210 15:24:12.549784 6361 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-zl2tx\\\\nI1210 15:24:12.549311 6361 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-apiserver/apiserver_TCP_cluster\\\\\\\", UUID:\\\\\\\"d71b38eb-32af-4c0f-9490-7c317c111e3a\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-apiserver/apiserver\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]st\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T15:24:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6lfvk_openshift-ovn-kubernetes(4b1da51a-99c9-4f8e-920d-ce0973af6370)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59bf59d1b7fbc365a916fbbceca7ae30b7ebc754b34f2f7a34c2e21e1e1d2166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://375efb27cf6bef06a6a0ffc8f6b2963e1b9ffbca18b3c8e7b402bc552a411f6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://375efb27cf6bef06a6a0ffc8f6b2963e1b9ffbca18b3c8e7b402bc552a411f6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6lfvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:32Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:32 crc kubenswrapper[4755]: I1210 15:24:32.419102 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e6db97-7e22-4fec-924e-20e90f463887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef3871ddc16d2fc1b89a6f120390cf066424bd9ed4cca98f8dea4430c40c6c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08895c3ab20248c4bd70ccfae6442bdf8e76a602e605b043f0f00166624ada17\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b2467645116856afa1e1e37501c7b453b03da193a96d1075886b85839a86ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16e58653dca08c9b33671279553975bca9507ffeeefea161119cec624813f167\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea18ef6c319336321d97c2a55449903b837b5a47da785da3ff5308244751943\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,
\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:32Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:32 crc kubenswrapper[4755]: I1210 15:24:32.430102 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:32Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:32 crc kubenswrapper[4755]: I1210 15:24:32.501145 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:32 crc kubenswrapper[4755]: I1210 15:24:32.501207 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:32 crc kubenswrapper[4755]: I1210 15:24:32.501216 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:32 crc kubenswrapper[4755]: I1210 15:24:32.501234 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:32 crc kubenswrapper[4755]: I1210 15:24:32.501245 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:32Z","lastTransitionTime":"2025-12-10T15:24:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:32 crc kubenswrapper[4755]: I1210 15:24:32.603971 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:32 crc kubenswrapper[4755]: I1210 15:24:32.604014 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:32 crc kubenswrapper[4755]: I1210 15:24:32.604027 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:32 crc kubenswrapper[4755]: I1210 15:24:32.604043 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:32 crc kubenswrapper[4755]: I1210 15:24:32.604055 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:32Z","lastTransitionTime":"2025-12-10T15:24:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:32 crc kubenswrapper[4755]: I1210 15:24:32.706382 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:32 crc kubenswrapper[4755]: I1210 15:24:32.706428 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:32 crc kubenswrapper[4755]: I1210 15:24:32.706438 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:32 crc kubenswrapper[4755]: I1210 15:24:32.706457 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:32 crc kubenswrapper[4755]: I1210 15:24:32.706486 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:32Z","lastTransitionTime":"2025-12-10T15:24:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:32 crc kubenswrapper[4755]: I1210 15:24:32.809174 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:32 crc kubenswrapper[4755]: I1210 15:24:32.809220 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:32 crc kubenswrapper[4755]: I1210 15:24:32.809231 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:32 crc kubenswrapper[4755]: I1210 15:24:32.809248 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:32 crc kubenswrapper[4755]: I1210 15:24:32.809259 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:32Z","lastTransitionTime":"2025-12-10T15:24:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:32 crc kubenswrapper[4755]: I1210 15:24:32.912030 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:32 crc kubenswrapper[4755]: I1210 15:24:32.912063 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:32 crc kubenswrapper[4755]: I1210 15:24:32.912074 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:32 crc kubenswrapper[4755]: I1210 15:24:32.912090 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:32 crc kubenswrapper[4755]: I1210 15:24:32.912101 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:32Z","lastTransitionTime":"2025-12-10T15:24:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:33 crc kubenswrapper[4755]: I1210 15:24:33.013893 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:33 crc kubenswrapper[4755]: I1210 15:24:33.013929 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:33 crc kubenswrapper[4755]: I1210 15:24:33.013938 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:33 crc kubenswrapper[4755]: I1210 15:24:33.013951 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:33 crc kubenswrapper[4755]: I1210 15:24:33.013960 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:33Z","lastTransitionTime":"2025-12-10T15:24:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:33 crc kubenswrapper[4755]: I1210 15:24:33.116370 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:33 crc kubenswrapper[4755]: I1210 15:24:33.116413 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:33 crc kubenswrapper[4755]: I1210 15:24:33.116423 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:33 crc kubenswrapper[4755]: I1210 15:24:33.116440 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:33 crc kubenswrapper[4755]: I1210 15:24:33.116453 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:33Z","lastTransitionTime":"2025-12-10T15:24:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:33 crc kubenswrapper[4755]: I1210 15:24:33.218951 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:33 crc kubenswrapper[4755]: I1210 15:24:33.218991 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:33 crc kubenswrapper[4755]: I1210 15:24:33.219002 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:33 crc kubenswrapper[4755]: I1210 15:24:33.219017 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:33 crc kubenswrapper[4755]: I1210 15:24:33.219027 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:33Z","lastTransitionTime":"2025-12-10T15:24:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:33 crc kubenswrapper[4755]: I1210 15:24:33.321726 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:33 crc kubenswrapper[4755]: I1210 15:24:33.321769 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:33 crc kubenswrapper[4755]: I1210 15:24:33.321777 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:33 crc kubenswrapper[4755]: I1210 15:24:33.321791 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:33 crc kubenswrapper[4755]: I1210 15:24:33.321801 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:33Z","lastTransitionTime":"2025-12-10T15:24:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:33 crc kubenswrapper[4755]: I1210 15:24:33.424072 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:33 crc kubenswrapper[4755]: I1210 15:24:33.424111 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:33 crc kubenswrapper[4755]: I1210 15:24:33.424119 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:33 crc kubenswrapper[4755]: I1210 15:24:33.424134 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:33 crc kubenswrapper[4755]: I1210 15:24:33.424144 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:33Z","lastTransitionTime":"2025-12-10T15:24:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:33 crc kubenswrapper[4755]: I1210 15:24:33.526227 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:33 crc kubenswrapper[4755]: I1210 15:24:33.526263 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:33 crc kubenswrapper[4755]: I1210 15:24:33.526272 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:33 crc kubenswrapper[4755]: I1210 15:24:33.526286 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:33 crc kubenswrapper[4755]: I1210 15:24:33.526295 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:33Z","lastTransitionTime":"2025-12-10T15:24:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:33 crc kubenswrapper[4755]: I1210 15:24:33.628330 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:33 crc kubenswrapper[4755]: I1210 15:24:33.628366 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:33 crc kubenswrapper[4755]: I1210 15:24:33.628375 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:33 crc kubenswrapper[4755]: I1210 15:24:33.628391 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:33 crc kubenswrapper[4755]: I1210 15:24:33.628399 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:33Z","lastTransitionTime":"2025-12-10T15:24:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:33 crc kubenswrapper[4755]: I1210 15:24:33.730531 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:33 crc kubenswrapper[4755]: I1210 15:24:33.730572 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:33 crc kubenswrapper[4755]: I1210 15:24:33.730581 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:33 crc kubenswrapper[4755]: I1210 15:24:33.730597 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:33 crc kubenswrapper[4755]: I1210 15:24:33.730607 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:33Z","lastTransitionTime":"2025-12-10T15:24:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:33 crc kubenswrapper[4755]: I1210 15:24:33.757236 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 15:24:33 crc kubenswrapper[4755]: I1210 15:24:33.757521 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 15:24:33 crc kubenswrapper[4755]: E1210 15:24:33.757584 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 15:24:33 crc kubenswrapper[4755]: I1210 15:24:33.757615 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5ctz" Dec 10 15:24:33 crc kubenswrapper[4755]: E1210 15:24:33.757831 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 15:24:33 crc kubenswrapper[4755]: I1210 15:24:33.757669 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 15:24:33 crc kubenswrapper[4755]: E1210 15:24:33.758091 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5ctz" podUID="17673130-8212-4f8f-8859-92774f0ee202" Dec 10 15:24:33 crc kubenswrapper[4755]: E1210 15:24:33.758130 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 15:24:33 crc kubenswrapper[4755]: I1210 15:24:33.778708 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fa03060-a6e0-4aad-9aa1-43b1a0d00c85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a823333ed5cb5de3988d25e50e4b7a0f9071c76fb39c22760a4f2acd5eb455d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d828f3f3b90a2dcb1c1908e6a686368af5b0d715b3251e4b8fcf3c8818ec75a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://172ab3fce08c8ba4095ff4095c89364a778644752bd7bb6c178d6e3ebcface69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfbb8e350ad18b78a6bcf6cfa4eb8f2fad90f97
0dba08a8b1b2026af6f255e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4b2bf0c88a16fd6b4b45a20730a92292895dc9f29ce756d347c302b25a8612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55da309288bc2c7ff1f22da0569d18155796700c200fe3ed508f6f8179756865\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55da309288bc2c7ff1f22da0569d18155796700c200fe3ed508f6f8179756865\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219c531368bb5379f3726d94ab0afc0f33eedbf91b0a4dbbe0757c6e232f79ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://219c531368bb5379f3726d94ab0afc0f33eedbf91b0a4dbbe0757c6e232f79ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f9da8f3d4f85ec83c16b71ecd983ff4f48049eb8e73461a2b46c4f66d71ed06e\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9da8f3d4f85ec83c16b71ecd983ff4f48049eb8e73461a2b46c4f66d71ed06e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:33Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:33 crc kubenswrapper[4755]: I1210 15:24:33.790221 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:33Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:33 crc kubenswrapper[4755]: I1210 15:24:33.800885 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c371b199c3643a68ea5eba935eb76a1b7e8a4027c9292a1324116ccfc14742ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:33Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:33 crc kubenswrapper[4755]: I1210 15:24:33.808976 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wv8fh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d150a22e-c59a-4376-a5c8-db4085ea0be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58d106c5d1b9525ec821d009ca556449cd8d7f0e1b9c8ec7dd969df996e73625\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfw9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wv8fh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:33Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:33 crc kubenswrapper[4755]: I1210 15:24:33.818177 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q5ctz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17673130-8212-4f8f-8859-92774f0ee202\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcscn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcscn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q5ctz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:33Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:33 crc kubenswrapper[4755]: I1210 15:24:33.829621 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e6db97-7e22-4fec-924e-20e90f463887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef3871ddc16d2fc1b89a6f120390cf066424bd9ed4cca98f8dea4430c40c6c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08895c3ab20248c4bd70ccfae6442bdf8e76a602e605b043f0f00166624ada17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b2467645116856afa1e1e37501c7b453b03da193a96d1075886b85839a86ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16e58653dca08c9b33671279553975bca9507ffeeefea161119cec624813f167\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea18ef6c319336321d97c2a55449903b837b5a47da785da3ff5308244751943\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:33Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:33 crc kubenswrapper[4755]: I1210 15:24:33.832777 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:33 crc kubenswrapper[4755]: I1210 15:24:33.832835 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:33 crc kubenswrapper[4755]: I1210 15:24:33.832849 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:33 crc kubenswrapper[4755]: I1210 15:24:33.832864 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 
15:24:33 crc kubenswrapper[4755]: I1210 15:24:33.832875 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:33Z","lastTransitionTime":"2025-12-10T15:24:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:33 crc kubenswrapper[4755]: I1210 15:24:33.839694 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:33Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:33 crc kubenswrapper[4755]: I1210 15:24:33.856942 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b1da51a-99c9-4f8e-920d-ce0973af6370\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6eb065dc6c0cc8914cb95553eb2683d894fb9a4e78ce7fac73bcce8d7f6cced9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e547993b9f2fa37bf924f909c47b62eb0cc02b659596b1cad9bbc42fdde8f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ba47683cc23d5b531a45f0658b6a9378650400b35b5372642b0430a5ac503f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a75407e83508af9adebb09c6466a966dd791d29f690c539656f9bd3396d7031\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://602b4e49987fa2cc6b54b822110aececbdddaf2bce8f27cce4ed906768d45791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://335bcab3a79f09796e97560365e1211fb30ddf288f4773c05ab353197add4365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://807b21c925067bde1df3babdc35da6c8805e10be
110c0282376dbd6e8be69dcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://807b21c925067bde1df3babdc35da6c8805e10be110c0282376dbd6e8be69dcb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T15:24:13Z\\\",\\\"message\\\":\\\"s, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI1210 15:24:12.549599 6361 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1210 15:24:12.549769 6361 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1210 15:24:12.549565 6361 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-zl2tx in node crc\\\\nI1210 15:24:12.549780 6361 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-zl2tx after 0 failed attempt(s)\\\\nI1210 15:24:12.549784 6361 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-zl2tx\\\\nI1210 15:24:12.549311 6361 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-apiserver/apiserver_TCP_cluster\\\\\\\", UUID:\\\\\\\"d71b38eb-32af-4c0f-9490-7c317c111e3a\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-apiserver/apiserver\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]st\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T15:24:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6lfvk_openshift-ovn-kubernetes(4b1da51a-99c9-4f8e-920d-ce0973af6370)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59bf59d1b7fbc365a916fbbceca7ae30b7ebc754b34f2f7a34c2e21e1e1d2166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://375efb27cf6bef06a6a0ffc8f6b2963e1b9ffbca18b3c8e7b402bc552a411f6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://375efb27cf6bef06a6a0ffc8f6b2963e1b9ffbca18b3c8e7b402bc552a411f6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6lfvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:33Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:33 crc kubenswrapper[4755]: I1210 15:24:33.870454 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81a321f6-db4b-46c6-bf24-1ad62da5992b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e68926ec899dad3c070810697fb078955e202f270724638719a7ea21d2debcb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://728ff8c02e5b0bea5514375b60c7a66f025e3c2a7163f0753c65d914b7873531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c
97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://865537b06aba1b617a1a16ef38c6b5be072501d1bbbd69e14eaaf2ce76c6b1da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54e0e014ecb31c499da20621a8b9e2ad6fdbdc2170a93636c828082b5b63b683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e0e014ecb31c499da20621a8b9e2ad6fdbdc2170a93636c828082b5b63b683\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:33Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:33 crc kubenswrapper[4755]: I1210 15:24:33.885051 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d089a8bddfa1f80d29011c7a6bab0a300f7dd44bdb2864f86951ebbb9ebea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:33Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:33 crc kubenswrapper[4755]: I1210 15:24:33.898718 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01ddc4056319db1f69268dcae192c3cb9db6c25284305803ae7588e59f77c346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae6bf174cdc9bde18d7c959e976454e73c1e67642f0158d365b79582f63f3cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:33Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:33 crc kubenswrapper[4755]: I1210 15:24:33.910137 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:33Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:33 crc kubenswrapper[4755]: I1210 15:24:33.922157 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zl2tx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"796da6d5-6ccd-4786-a03e-9a8e47a55031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e0f974f9ba614dcaef08cf7168b77eeee007dfe65cc4e32df9b8e45005ff4ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de63a123c46563bd8cd07e669d192bc8b019a889b9bdb7af1b988872c8f1fc48\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T15:24:30Z\\\",\\\"message\\\":\\\"2025-12-10T15:23:45+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5295941e-ea46-4d40-a35d-bd2fd673e118\\\\n2025-12-10T15:23:45+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5295941e-ea46-4d40-a35d-bd2fd673e118 to /host/opt/cni/bin/\\\\n2025-12-10T15:23:45Z [verbose] multus-daemon started\\\\n2025-12-10T15:23:45Z [verbose] Readiness Indicator file check\\\\n2025-12-10T15:24:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:24:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzg4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zl2tx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:33Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:33 crc kubenswrapper[4755]: I1210 15:24:33.932044 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b132a8b9-1c99-414d-8773-229bf36b305d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5a5a59e9f156fb791ec822c2d5efe3fc6ec0e84bfcb2b6f5da81396951984c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2mdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40c4bdaa23a60a665b8f565720d79b68cac62d40246be94fc6cd314b1bb3656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2mdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ggt8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:33Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:33 crc kubenswrapper[4755]: I1210 15:24:33.934443 4755 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:33 crc kubenswrapper[4755]: I1210 15:24:33.934494 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:33 crc kubenswrapper[4755]: I1210 15:24:33.934507 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:33 crc kubenswrapper[4755]: I1210 15:24:33.934524 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:33 crc kubenswrapper[4755]: I1210 15:24:33.934540 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:33Z","lastTransitionTime":"2025-12-10T15:24:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:33 crc kubenswrapper[4755]: I1210 15:24:33.944324 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aed6bb9-78c2-410b-9c58-b60ab22a7bd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70d014e7227746c46b30f8f5a1f307a422d2fa0d4b98d98bfe5ba6217223489e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://104d730ea666ffd2d639ba7fc8d1ac5b444586788a62656fd260cb249f82b1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\
\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09dbec85547c7170bb9551e5657876814d48528e3047daf3547711a563d2b6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8bf1de0d8bdc0a20bc42ba5097d849ae0f507e5fbc18fb17b4ff3650e46ff0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:33Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:33 crc kubenswrapper[4755]: I1210 15:24:33.956980 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n8c6s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b56ae78-835a-45da-bc46-5adff2bdf9fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8957db8ccef7b3c449920471d345aa81ce9a7ab9be36b2350ec428aebec7ac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b9a1485f6d0c0e53fd3505623b2788476bc00abea2f11650386eac40e449bcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9a1485f6d0c0e53fd3505623b2788476bc00abea2f11650386eac40e449bcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a532015d196c588ad3dd6c43d9fa3772ba2a42bd1b2231f830b60adb0addab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a532015d196c588ad3dd6c43d9fa3772ba2a42bd1b2231f830b60adb0addab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://417e326f541e04df27d540c14ec572bb8f1d749a9e3dc88f483241d69f7d0679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://417e326f541e04df27d540c14ec572bb8f1d749a9e3dc88f483241d69f7d0679\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42867f7a1f6aceaf708ad650b7634bdafb3f850872c540d424916440014e7941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42867f7a1f6aceaf708ad650b7634bdafb3f850872c540d424916440014e7941\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f2d22bfc26958a3aad99de39f5c831d36c1ba4f563851baafc73735e80d5c91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f2d22bfc26958a3aad99de39f5c831d36c1ba4f563851baafc73735e80d5c91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://548d326fc5c14c16176d4fff3446d93c17322c11d2a34e9e42376b665de3848d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://548d326fc5c14c16176d4fff3446d93c17322c11d2a34e9e42376b665de3848d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n8c6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:33Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:33 crc kubenswrapper[4755]: I1210 15:24:33.965297 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qnmst" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1693fcf1-bef4-4f82-8dd8-f1797b03f5e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8fd6a2ab2b9557574951c0ebbccd663fe576262b0de2c3c655427c977f62d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hzdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qnmst\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:33Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:33 crc kubenswrapper[4755]: I1210 15:24:33.975992 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-82wnw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0a2f42c-a60e-4350-9ebb-c28d3cbfdad7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7af643111a7a5d1d78b0412b1621b5ffac6389760ea9190e26e9e5d1704eed4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xbzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://915c918562f8a69a28cbcf29e427bfdd94477193b98d5a2da9ad45cb4b44fa03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xbzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-82wnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:33Z is after 2025-08-24T17:21:41Z" Dec 10 
15:24:34 crc kubenswrapper[4755]: I1210 15:24:34.036519 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:34 crc kubenswrapper[4755]: I1210 15:24:34.036551 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:34 crc kubenswrapper[4755]: I1210 15:24:34.036561 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:34 crc kubenswrapper[4755]: I1210 15:24:34.036577 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:34 crc kubenswrapper[4755]: I1210 15:24:34.036592 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:34Z","lastTransitionTime":"2025-12-10T15:24:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:34 crc kubenswrapper[4755]: I1210 15:24:34.138612 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:34 crc kubenswrapper[4755]: I1210 15:24:34.138834 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:34 crc kubenswrapper[4755]: I1210 15:24:34.138950 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:34 crc kubenswrapper[4755]: I1210 15:24:34.139043 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:34 crc kubenswrapper[4755]: I1210 15:24:34.139116 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:34Z","lastTransitionTime":"2025-12-10T15:24:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:34 crc kubenswrapper[4755]: I1210 15:24:34.241537 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:34 crc kubenswrapper[4755]: I1210 15:24:34.241592 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:34 crc kubenswrapper[4755]: I1210 15:24:34.241603 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:34 crc kubenswrapper[4755]: I1210 15:24:34.241626 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:34 crc kubenswrapper[4755]: I1210 15:24:34.241641 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:34Z","lastTransitionTime":"2025-12-10T15:24:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:34 crc kubenswrapper[4755]: I1210 15:24:34.344171 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:34 crc kubenswrapper[4755]: I1210 15:24:34.344211 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:34 crc kubenswrapper[4755]: I1210 15:24:34.344223 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:34 crc kubenswrapper[4755]: I1210 15:24:34.344240 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:34 crc kubenswrapper[4755]: I1210 15:24:34.344250 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:34Z","lastTransitionTime":"2025-12-10T15:24:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:34 crc kubenswrapper[4755]: I1210 15:24:34.446834 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:34 crc kubenswrapper[4755]: I1210 15:24:34.446881 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:34 crc kubenswrapper[4755]: I1210 15:24:34.446895 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:34 crc kubenswrapper[4755]: I1210 15:24:34.446915 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:34 crc kubenswrapper[4755]: I1210 15:24:34.446927 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:34Z","lastTransitionTime":"2025-12-10T15:24:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:34 crc kubenswrapper[4755]: I1210 15:24:34.549113 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:34 crc kubenswrapper[4755]: I1210 15:24:34.549150 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:34 crc kubenswrapper[4755]: I1210 15:24:34.549158 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:34 crc kubenswrapper[4755]: I1210 15:24:34.549175 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:34 crc kubenswrapper[4755]: I1210 15:24:34.549185 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:34Z","lastTransitionTime":"2025-12-10T15:24:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:34 crc kubenswrapper[4755]: I1210 15:24:34.651183 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:34 crc kubenswrapper[4755]: I1210 15:24:34.651225 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:34 crc kubenswrapper[4755]: I1210 15:24:34.651237 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:34 crc kubenswrapper[4755]: I1210 15:24:34.651256 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:34 crc kubenswrapper[4755]: I1210 15:24:34.651268 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:34Z","lastTransitionTime":"2025-12-10T15:24:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:34 crc kubenswrapper[4755]: I1210 15:24:34.753322 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:34 crc kubenswrapper[4755]: I1210 15:24:34.753353 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:34 crc kubenswrapper[4755]: I1210 15:24:34.753363 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:34 crc kubenswrapper[4755]: I1210 15:24:34.753387 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:34 crc kubenswrapper[4755]: I1210 15:24:34.753398 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:34Z","lastTransitionTime":"2025-12-10T15:24:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:34 crc kubenswrapper[4755]: I1210 15:24:34.855631 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:34 crc kubenswrapper[4755]: I1210 15:24:34.855710 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:34 crc kubenswrapper[4755]: I1210 15:24:34.855722 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:34 crc kubenswrapper[4755]: I1210 15:24:34.855772 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:34 crc kubenswrapper[4755]: I1210 15:24:34.855785 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:34Z","lastTransitionTime":"2025-12-10T15:24:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:34 crc kubenswrapper[4755]: I1210 15:24:34.958693 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:34 crc kubenswrapper[4755]: I1210 15:24:34.958729 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:34 crc kubenswrapper[4755]: I1210 15:24:34.958739 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:34 crc kubenswrapper[4755]: I1210 15:24:34.958755 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:34 crc kubenswrapper[4755]: I1210 15:24:34.958766 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:34Z","lastTransitionTime":"2025-12-10T15:24:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:35 crc kubenswrapper[4755]: I1210 15:24:35.060944 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:35 crc kubenswrapper[4755]: I1210 15:24:35.060984 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:35 crc kubenswrapper[4755]: I1210 15:24:35.060996 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:35 crc kubenswrapper[4755]: I1210 15:24:35.061013 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:35 crc kubenswrapper[4755]: I1210 15:24:35.061024 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:35Z","lastTransitionTime":"2025-12-10T15:24:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:35 crc kubenswrapper[4755]: I1210 15:24:35.171188 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:35 crc kubenswrapper[4755]: I1210 15:24:35.171257 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:35 crc kubenswrapper[4755]: I1210 15:24:35.171317 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:35 crc kubenswrapper[4755]: I1210 15:24:35.171346 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:35 crc kubenswrapper[4755]: I1210 15:24:35.171997 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:35Z","lastTransitionTime":"2025-12-10T15:24:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:35 crc kubenswrapper[4755]: I1210 15:24:35.274736 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:35 crc kubenswrapper[4755]: I1210 15:24:35.274767 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:35 crc kubenswrapper[4755]: I1210 15:24:35.274777 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:35 crc kubenswrapper[4755]: I1210 15:24:35.274791 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:35 crc kubenswrapper[4755]: I1210 15:24:35.274801 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:35Z","lastTransitionTime":"2025-12-10T15:24:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:35 crc kubenswrapper[4755]: I1210 15:24:35.377223 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:35 crc kubenswrapper[4755]: I1210 15:24:35.377286 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:35 crc kubenswrapper[4755]: I1210 15:24:35.377298 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:35 crc kubenswrapper[4755]: I1210 15:24:35.377315 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:35 crc kubenswrapper[4755]: I1210 15:24:35.377326 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:35Z","lastTransitionTime":"2025-12-10T15:24:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:35 crc kubenswrapper[4755]: I1210 15:24:35.479734 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:35 crc kubenswrapper[4755]: I1210 15:24:35.479771 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:35 crc kubenswrapper[4755]: I1210 15:24:35.479780 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:35 crc kubenswrapper[4755]: I1210 15:24:35.479794 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:35 crc kubenswrapper[4755]: I1210 15:24:35.479803 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:35Z","lastTransitionTime":"2025-12-10T15:24:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:35 crc kubenswrapper[4755]: I1210 15:24:35.582271 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:35 crc kubenswrapper[4755]: I1210 15:24:35.582300 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:35 crc kubenswrapper[4755]: I1210 15:24:35.582309 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:35 crc kubenswrapper[4755]: I1210 15:24:35.582322 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:35 crc kubenswrapper[4755]: I1210 15:24:35.582332 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:35Z","lastTransitionTime":"2025-12-10T15:24:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:35 crc kubenswrapper[4755]: I1210 15:24:35.684897 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:35 crc kubenswrapper[4755]: I1210 15:24:35.684941 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:35 crc kubenswrapper[4755]: I1210 15:24:35.684954 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:35 crc kubenswrapper[4755]: I1210 15:24:35.684973 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:35 crc kubenswrapper[4755]: I1210 15:24:35.684986 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:35Z","lastTransitionTime":"2025-12-10T15:24:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:35 crc kubenswrapper[4755]: I1210 15:24:35.757082 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 15:24:35 crc kubenswrapper[4755]: I1210 15:24:35.757155 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5ctz" Dec 10 15:24:35 crc kubenswrapper[4755]: I1210 15:24:35.757168 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 15:24:35 crc kubenswrapper[4755]: E1210 15:24:35.757222 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 15:24:35 crc kubenswrapper[4755]: I1210 15:24:35.757342 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 15:24:35 crc kubenswrapper[4755]: E1210 15:24:35.757335 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5ctz" podUID="17673130-8212-4f8f-8859-92774f0ee202" Dec 10 15:24:35 crc kubenswrapper[4755]: E1210 15:24:35.757478 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 15:24:35 crc kubenswrapper[4755]: E1210 15:24:35.757561 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 15:24:35 crc kubenswrapper[4755]: I1210 15:24:35.787195 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:35 crc kubenswrapper[4755]: I1210 15:24:35.787480 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:35 crc kubenswrapper[4755]: I1210 15:24:35.787544 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:35 crc kubenswrapper[4755]: I1210 15:24:35.787614 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:35 crc kubenswrapper[4755]: I1210 15:24:35.787672 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:35Z","lastTransitionTime":"2025-12-10T15:24:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:35 crc kubenswrapper[4755]: I1210 15:24:35.889893 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:35 crc kubenswrapper[4755]: I1210 15:24:35.889935 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:35 crc kubenswrapper[4755]: I1210 15:24:35.889946 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:35 crc kubenswrapper[4755]: I1210 15:24:35.889966 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:35 crc kubenswrapper[4755]: I1210 15:24:35.889991 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:35Z","lastTransitionTime":"2025-12-10T15:24:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:35 crc kubenswrapper[4755]: I1210 15:24:35.991812 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:35 crc kubenswrapper[4755]: I1210 15:24:35.992425 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:35 crc kubenswrapper[4755]: I1210 15:24:35.992521 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:35 crc kubenswrapper[4755]: I1210 15:24:35.992587 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:35 crc kubenswrapper[4755]: I1210 15:24:35.992639 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:35Z","lastTransitionTime":"2025-12-10T15:24:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:36 crc kubenswrapper[4755]: I1210 15:24:36.094761 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:36 crc kubenswrapper[4755]: I1210 15:24:36.094805 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:36 crc kubenswrapper[4755]: I1210 15:24:36.094817 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:36 crc kubenswrapper[4755]: I1210 15:24:36.094842 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:36 crc kubenswrapper[4755]: I1210 15:24:36.094856 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:36Z","lastTransitionTime":"2025-12-10T15:24:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:36 crc kubenswrapper[4755]: I1210 15:24:36.197626 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:36 crc kubenswrapper[4755]: I1210 15:24:36.197676 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:36 crc kubenswrapper[4755]: I1210 15:24:36.197688 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:36 crc kubenswrapper[4755]: I1210 15:24:36.197708 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:36 crc kubenswrapper[4755]: I1210 15:24:36.197718 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:36Z","lastTransitionTime":"2025-12-10T15:24:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:36 crc kubenswrapper[4755]: I1210 15:24:36.300530 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:36 crc kubenswrapper[4755]: I1210 15:24:36.300579 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:36 crc kubenswrapper[4755]: I1210 15:24:36.300594 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:36 crc kubenswrapper[4755]: I1210 15:24:36.300613 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:36 crc kubenswrapper[4755]: I1210 15:24:36.300625 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:36Z","lastTransitionTime":"2025-12-10T15:24:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:36 crc kubenswrapper[4755]: I1210 15:24:36.402839 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:36 crc kubenswrapper[4755]: I1210 15:24:36.402871 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:36 crc kubenswrapper[4755]: I1210 15:24:36.402879 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:36 crc kubenswrapper[4755]: I1210 15:24:36.402910 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:36 crc kubenswrapper[4755]: I1210 15:24:36.402921 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:36Z","lastTransitionTime":"2025-12-10T15:24:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:36 crc kubenswrapper[4755]: I1210 15:24:36.511087 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:36 crc kubenswrapper[4755]: I1210 15:24:36.511131 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:36 crc kubenswrapper[4755]: I1210 15:24:36.511143 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:36 crc kubenswrapper[4755]: I1210 15:24:36.511160 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:36 crc kubenswrapper[4755]: I1210 15:24:36.511173 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:36Z","lastTransitionTime":"2025-12-10T15:24:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:36 crc kubenswrapper[4755]: I1210 15:24:36.613707 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:36 crc kubenswrapper[4755]: I1210 15:24:36.613732 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:36 crc kubenswrapper[4755]: I1210 15:24:36.613740 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:36 crc kubenswrapper[4755]: I1210 15:24:36.613754 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:36 crc kubenswrapper[4755]: I1210 15:24:36.613762 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:36Z","lastTransitionTime":"2025-12-10T15:24:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:36 crc kubenswrapper[4755]: I1210 15:24:36.715666 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:36 crc kubenswrapper[4755]: I1210 15:24:36.715697 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:36 crc kubenswrapper[4755]: I1210 15:24:36.715707 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:36 crc kubenswrapper[4755]: I1210 15:24:36.715720 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:36 crc kubenswrapper[4755]: I1210 15:24:36.715731 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:36Z","lastTransitionTime":"2025-12-10T15:24:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:36 crc kubenswrapper[4755]: I1210 15:24:36.818311 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:36 crc kubenswrapper[4755]: I1210 15:24:36.818357 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:36 crc kubenswrapper[4755]: I1210 15:24:36.818368 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:36 crc kubenswrapper[4755]: I1210 15:24:36.818384 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:36 crc kubenswrapper[4755]: I1210 15:24:36.818395 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:36Z","lastTransitionTime":"2025-12-10T15:24:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:36 crc kubenswrapper[4755]: I1210 15:24:36.921387 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:36 crc kubenswrapper[4755]: I1210 15:24:36.921436 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:36 crc kubenswrapper[4755]: I1210 15:24:36.921448 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:36 crc kubenswrapper[4755]: I1210 15:24:36.921490 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:36 crc kubenswrapper[4755]: I1210 15:24:36.921506 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:36Z","lastTransitionTime":"2025-12-10T15:24:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:37 crc kubenswrapper[4755]: I1210 15:24:37.023853 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:37 crc kubenswrapper[4755]: I1210 15:24:37.023888 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:37 crc kubenswrapper[4755]: I1210 15:24:37.023897 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:37 crc kubenswrapper[4755]: I1210 15:24:37.023911 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:37 crc kubenswrapper[4755]: I1210 15:24:37.023920 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:37Z","lastTransitionTime":"2025-12-10T15:24:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:37 crc kubenswrapper[4755]: I1210 15:24:37.126682 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:37 crc kubenswrapper[4755]: I1210 15:24:37.126733 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:37 crc kubenswrapper[4755]: I1210 15:24:37.126745 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:37 crc kubenswrapper[4755]: I1210 15:24:37.126763 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:37 crc kubenswrapper[4755]: I1210 15:24:37.126775 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:37Z","lastTransitionTime":"2025-12-10T15:24:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:37 crc kubenswrapper[4755]: I1210 15:24:37.229571 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:37 crc kubenswrapper[4755]: I1210 15:24:37.229606 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:37 crc kubenswrapper[4755]: I1210 15:24:37.229614 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:37 crc kubenswrapper[4755]: I1210 15:24:37.229631 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:37 crc kubenswrapper[4755]: I1210 15:24:37.229641 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:37Z","lastTransitionTime":"2025-12-10T15:24:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:37 crc kubenswrapper[4755]: I1210 15:24:37.293582 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:37 crc kubenswrapper[4755]: I1210 15:24:37.293619 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:37 crc kubenswrapper[4755]: I1210 15:24:37.293633 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:37 crc kubenswrapper[4755]: I1210 15:24:37.293649 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:37 crc kubenswrapper[4755]: I1210 15:24:37.293662 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:37Z","lastTransitionTime":"2025-12-10T15:24:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:37 crc kubenswrapper[4755]: E1210 15:24:37.308271 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ba232303-88d5-4931-b82e-34d9a0e5c06a\\\",\\\"systemUUID\\\":\\\"ebd59de0-c6b0-47c1-bc17-6f665dcf344d\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:37Z is after 
2025-08-24T17:21:41Z" Dec 10 15:24:37 crc kubenswrapper[4755]: I1210 15:24:37.311510 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:37 crc kubenswrapper[4755]: I1210 15:24:37.311547 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:37 crc kubenswrapper[4755]: I1210 15:24:37.311556 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:37 crc kubenswrapper[4755]: I1210 15:24:37.311571 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:37 crc kubenswrapper[4755]: I1210 15:24:37.311580 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:37Z","lastTransitionTime":"2025-12-10T15:24:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:37 crc kubenswrapper[4755]: E1210 15:24:37.322187 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ba232303-88d5-4931-b82e-34d9a0e5c06a\\\",\\\"systemUUID\\\":\\\"ebd59de0-c6b0-47c1-bc17-6f665dcf344d\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:37Z is after 
2025-08-24T17:21:41Z" Dec 10 15:24:37 crc kubenswrapper[4755]: I1210 15:24:37.325579 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:37 crc kubenswrapper[4755]: I1210 15:24:37.325623 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:37 crc kubenswrapper[4755]: I1210 15:24:37.325636 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:37 crc kubenswrapper[4755]: I1210 15:24:37.325656 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:37 crc kubenswrapper[4755]: I1210 15:24:37.325668 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:37Z","lastTransitionTime":"2025-12-10T15:24:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:37 crc kubenswrapper[4755]: E1210 15:24:37.337592 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ba232303-88d5-4931-b82e-34d9a0e5c06a\\\",\\\"systemUUID\\\":\\\"ebd59de0-c6b0-47c1-bc17-6f665dcf344d\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:37Z is after 
2025-08-24T17:21:41Z" Dec 10 15:24:37 crc kubenswrapper[4755]: I1210 15:24:37.340954 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:37 crc kubenswrapper[4755]: I1210 15:24:37.340993 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:37 crc kubenswrapper[4755]: I1210 15:24:37.341004 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:37 crc kubenswrapper[4755]: I1210 15:24:37.341019 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:37 crc kubenswrapper[4755]: I1210 15:24:37.341032 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:37Z","lastTransitionTime":"2025-12-10T15:24:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:37 crc kubenswrapper[4755]: E1210 15:24:37.351205 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ba232303-88d5-4931-b82e-34d9a0e5c06a\\\",\\\"systemUUID\\\":\\\"ebd59de0-c6b0-47c1-bc17-6f665dcf344d\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:37Z is after 
2025-08-24T17:21:41Z" Dec 10 15:24:37 crc kubenswrapper[4755]: I1210 15:24:37.353936 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:37 crc kubenswrapper[4755]: I1210 15:24:37.354095 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:37 crc kubenswrapper[4755]: I1210 15:24:37.354198 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:37 crc kubenswrapper[4755]: I1210 15:24:37.354293 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:37 crc kubenswrapper[4755]: I1210 15:24:37.354384 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:37Z","lastTransitionTime":"2025-12-10T15:24:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:37 crc kubenswrapper[4755]: E1210 15:24:37.365756 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ba232303-88d5-4931-b82e-34d9a0e5c06a\\\",\\\"systemUUID\\\":\\\"ebd59de0-c6b0-47c1-bc17-6f665dcf344d\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:37Z is after 
2025-08-24T17:21:41Z" Dec 10 15:24:37 crc kubenswrapper[4755]: E1210 15:24:37.365900 4755 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 10 15:24:37 crc kubenswrapper[4755]: I1210 15:24:37.367374 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:37 crc kubenswrapper[4755]: I1210 15:24:37.367398 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:37 crc kubenswrapper[4755]: I1210 15:24:37.367407 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:37 crc kubenswrapper[4755]: I1210 15:24:37.367419 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:37 crc kubenswrapper[4755]: I1210 15:24:37.367428 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:37Z","lastTransitionTime":"2025-12-10T15:24:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:37 crc kubenswrapper[4755]: I1210 15:24:37.470243 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:37 crc kubenswrapper[4755]: I1210 15:24:37.470323 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:37 crc kubenswrapper[4755]: I1210 15:24:37.470339 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:37 crc kubenswrapper[4755]: I1210 15:24:37.470357 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:37 crc kubenswrapper[4755]: I1210 15:24:37.470370 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:37Z","lastTransitionTime":"2025-12-10T15:24:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:37 crc kubenswrapper[4755]: I1210 15:24:37.572351 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:37 crc kubenswrapper[4755]: I1210 15:24:37.572386 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:37 crc kubenswrapper[4755]: I1210 15:24:37.572394 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:37 crc kubenswrapper[4755]: I1210 15:24:37.572408 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:37 crc kubenswrapper[4755]: I1210 15:24:37.572418 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:37Z","lastTransitionTime":"2025-12-10T15:24:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:37 crc kubenswrapper[4755]: I1210 15:24:37.674363 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:37 crc kubenswrapper[4755]: I1210 15:24:37.674415 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:37 crc kubenswrapper[4755]: I1210 15:24:37.674429 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:37 crc kubenswrapper[4755]: I1210 15:24:37.674444 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:37 crc kubenswrapper[4755]: I1210 15:24:37.674457 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:37Z","lastTransitionTime":"2025-12-10T15:24:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:37 crc kubenswrapper[4755]: I1210 15:24:37.756732 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 15:24:37 crc kubenswrapper[4755]: I1210 15:24:37.756740 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 15:24:37 crc kubenswrapper[4755]: I1210 15:24:37.756804 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 15:24:37 crc kubenswrapper[4755]: I1210 15:24:37.756879 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5ctz" Dec 10 15:24:37 crc kubenswrapper[4755]: E1210 15:24:37.757085 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 15:24:37 crc kubenswrapper[4755]: E1210 15:24:37.757274 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 15:24:37 crc kubenswrapper[4755]: E1210 15:24:37.757455 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 15:24:37 crc kubenswrapper[4755]: E1210 15:24:37.757664 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5ctz" podUID="17673130-8212-4f8f-8859-92774f0ee202" Dec 10 15:24:37 crc kubenswrapper[4755]: I1210 15:24:37.777152 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:37 crc kubenswrapper[4755]: I1210 15:24:37.777179 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:37 crc kubenswrapper[4755]: I1210 15:24:37.777188 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:37 crc kubenswrapper[4755]: I1210 15:24:37.777202 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:37 crc kubenswrapper[4755]: I1210 15:24:37.777212 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:37Z","lastTransitionTime":"2025-12-10T15:24:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:37 crc kubenswrapper[4755]: I1210 15:24:37.879234 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:37 crc kubenswrapper[4755]: I1210 15:24:37.879260 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:37 crc kubenswrapper[4755]: I1210 15:24:37.879267 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:37 crc kubenswrapper[4755]: I1210 15:24:37.879280 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:37 crc kubenswrapper[4755]: I1210 15:24:37.879289 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:37Z","lastTransitionTime":"2025-12-10T15:24:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:37 crc kubenswrapper[4755]: I1210 15:24:37.981663 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:37 crc kubenswrapper[4755]: I1210 15:24:37.981697 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:37 crc kubenswrapper[4755]: I1210 15:24:37.981706 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:37 crc kubenswrapper[4755]: I1210 15:24:37.981721 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:37 crc kubenswrapper[4755]: I1210 15:24:37.981760 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:37Z","lastTransitionTime":"2025-12-10T15:24:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:38 crc kubenswrapper[4755]: I1210 15:24:38.084276 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:38 crc kubenswrapper[4755]: I1210 15:24:38.084308 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:38 crc kubenswrapper[4755]: I1210 15:24:38.084319 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:38 crc kubenswrapper[4755]: I1210 15:24:38.084334 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:38 crc kubenswrapper[4755]: I1210 15:24:38.084344 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:38Z","lastTransitionTime":"2025-12-10T15:24:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:38 crc kubenswrapper[4755]: I1210 15:24:38.186954 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:38 crc kubenswrapper[4755]: I1210 15:24:38.186984 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:38 crc kubenswrapper[4755]: I1210 15:24:38.186993 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:38 crc kubenswrapper[4755]: I1210 15:24:38.187008 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:38 crc kubenswrapper[4755]: I1210 15:24:38.187018 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:38Z","lastTransitionTime":"2025-12-10T15:24:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:38 crc kubenswrapper[4755]: I1210 15:24:38.290102 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:38 crc kubenswrapper[4755]: I1210 15:24:38.290138 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:38 crc kubenswrapper[4755]: I1210 15:24:38.290150 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:38 crc kubenswrapper[4755]: I1210 15:24:38.290166 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:38 crc kubenswrapper[4755]: I1210 15:24:38.290177 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:38Z","lastTransitionTime":"2025-12-10T15:24:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:38 crc kubenswrapper[4755]: I1210 15:24:38.392738 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:38 crc kubenswrapper[4755]: I1210 15:24:38.392792 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:38 crc kubenswrapper[4755]: I1210 15:24:38.392817 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:38 crc kubenswrapper[4755]: I1210 15:24:38.392845 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:38 crc kubenswrapper[4755]: I1210 15:24:38.392866 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:38Z","lastTransitionTime":"2025-12-10T15:24:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:38 crc kubenswrapper[4755]: I1210 15:24:38.495291 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:38 crc kubenswrapper[4755]: I1210 15:24:38.495329 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:38 crc kubenswrapper[4755]: I1210 15:24:38.495340 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:38 crc kubenswrapper[4755]: I1210 15:24:38.495357 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:38 crc kubenswrapper[4755]: I1210 15:24:38.495369 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:38Z","lastTransitionTime":"2025-12-10T15:24:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:38 crc kubenswrapper[4755]: I1210 15:24:38.597759 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:38 crc kubenswrapper[4755]: I1210 15:24:38.597806 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:38 crc kubenswrapper[4755]: I1210 15:24:38.597818 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:38 crc kubenswrapper[4755]: I1210 15:24:38.597833 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:38 crc kubenswrapper[4755]: I1210 15:24:38.597843 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:38Z","lastTransitionTime":"2025-12-10T15:24:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:38 crc kubenswrapper[4755]: I1210 15:24:38.700421 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:38 crc kubenswrapper[4755]: I1210 15:24:38.700494 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:38 crc kubenswrapper[4755]: I1210 15:24:38.700520 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:38 crc kubenswrapper[4755]: I1210 15:24:38.700542 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:38 crc kubenswrapper[4755]: I1210 15:24:38.700556 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:38Z","lastTransitionTime":"2025-12-10T15:24:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:38 crc kubenswrapper[4755]: I1210 15:24:38.804075 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:38 crc kubenswrapper[4755]: I1210 15:24:38.804332 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:38 crc kubenswrapper[4755]: I1210 15:24:38.804422 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:38 crc kubenswrapper[4755]: I1210 15:24:38.804546 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:38 crc kubenswrapper[4755]: I1210 15:24:38.804637 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:38Z","lastTransitionTime":"2025-12-10T15:24:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:38 crc kubenswrapper[4755]: I1210 15:24:38.908829 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:38 crc kubenswrapper[4755]: I1210 15:24:38.908877 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:38 crc kubenswrapper[4755]: I1210 15:24:38.908891 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:38 crc kubenswrapper[4755]: I1210 15:24:38.908907 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:38 crc kubenswrapper[4755]: I1210 15:24:38.908917 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:38Z","lastTransitionTime":"2025-12-10T15:24:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:39 crc kubenswrapper[4755]: I1210 15:24:39.011408 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:39 crc kubenswrapper[4755]: I1210 15:24:39.011448 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:39 crc kubenswrapper[4755]: I1210 15:24:39.011459 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:39 crc kubenswrapper[4755]: I1210 15:24:39.011492 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:39 crc kubenswrapper[4755]: I1210 15:24:39.011507 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:39Z","lastTransitionTime":"2025-12-10T15:24:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:39 crc kubenswrapper[4755]: I1210 15:24:39.114299 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:39 crc kubenswrapper[4755]: I1210 15:24:39.114341 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:39 crc kubenswrapper[4755]: I1210 15:24:39.114352 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:39 crc kubenswrapper[4755]: I1210 15:24:39.114371 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:39 crc kubenswrapper[4755]: I1210 15:24:39.114385 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:39Z","lastTransitionTime":"2025-12-10T15:24:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:39 crc kubenswrapper[4755]: I1210 15:24:39.217131 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:39 crc kubenswrapper[4755]: I1210 15:24:39.217187 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:39 crc kubenswrapper[4755]: I1210 15:24:39.217195 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:39 crc kubenswrapper[4755]: I1210 15:24:39.217210 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:39 crc kubenswrapper[4755]: I1210 15:24:39.217220 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:39Z","lastTransitionTime":"2025-12-10T15:24:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:39 crc kubenswrapper[4755]: I1210 15:24:39.319342 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:39 crc kubenswrapper[4755]: I1210 15:24:39.319371 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:39 crc kubenswrapper[4755]: I1210 15:24:39.319380 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:39 crc kubenswrapper[4755]: I1210 15:24:39.319393 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:39 crc kubenswrapper[4755]: I1210 15:24:39.319402 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:39Z","lastTransitionTime":"2025-12-10T15:24:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:39 crc kubenswrapper[4755]: I1210 15:24:39.421486 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:39 crc kubenswrapper[4755]: I1210 15:24:39.421519 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:39 crc kubenswrapper[4755]: I1210 15:24:39.421527 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:39 crc kubenswrapper[4755]: I1210 15:24:39.421542 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:39 crc kubenswrapper[4755]: I1210 15:24:39.421551 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:39Z","lastTransitionTime":"2025-12-10T15:24:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:39 crc kubenswrapper[4755]: I1210 15:24:39.525496 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:39 crc kubenswrapper[4755]: I1210 15:24:39.525541 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:39 crc kubenswrapper[4755]: I1210 15:24:39.525596 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:39 crc kubenswrapper[4755]: I1210 15:24:39.525617 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:39 crc kubenswrapper[4755]: I1210 15:24:39.525631 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:39Z","lastTransitionTime":"2025-12-10T15:24:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:39 crc kubenswrapper[4755]: I1210 15:24:39.628873 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:39 crc kubenswrapper[4755]: I1210 15:24:39.628954 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:39 crc kubenswrapper[4755]: I1210 15:24:39.628970 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:39 crc kubenswrapper[4755]: I1210 15:24:39.629019 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:39 crc kubenswrapper[4755]: I1210 15:24:39.629036 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:39Z","lastTransitionTime":"2025-12-10T15:24:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:39 crc kubenswrapper[4755]: I1210 15:24:39.731769 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:39 crc kubenswrapper[4755]: I1210 15:24:39.731823 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:39 crc kubenswrapper[4755]: I1210 15:24:39.731838 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:39 crc kubenswrapper[4755]: I1210 15:24:39.731862 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:39 crc kubenswrapper[4755]: I1210 15:24:39.731878 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:39Z","lastTransitionTime":"2025-12-10T15:24:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:39 crc kubenswrapper[4755]: I1210 15:24:39.757536 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 15:24:39 crc kubenswrapper[4755]: I1210 15:24:39.757575 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 15:24:39 crc kubenswrapper[4755]: I1210 15:24:39.757594 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5ctz" Dec 10 15:24:39 crc kubenswrapper[4755]: I1210 15:24:39.757536 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 15:24:39 crc kubenswrapper[4755]: E1210 15:24:39.757697 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 15:24:39 crc kubenswrapper[4755]: E1210 15:24:39.757776 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 15:24:39 crc kubenswrapper[4755]: E1210 15:24:39.758038 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 15:24:39 crc kubenswrapper[4755]: E1210 15:24:39.758123 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5ctz" podUID="17673130-8212-4f8f-8859-92774f0ee202" Dec 10 15:24:39 crc kubenswrapper[4755]: I1210 15:24:39.771580 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Dec 10 15:24:39 crc kubenswrapper[4755]: I1210 15:24:39.834678 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:39 crc kubenswrapper[4755]: I1210 15:24:39.834706 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:39 crc kubenswrapper[4755]: I1210 15:24:39.834715 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:39 crc kubenswrapper[4755]: I1210 15:24:39.834728 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:39 crc kubenswrapper[4755]: I1210 15:24:39.834737 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:39Z","lastTransitionTime":"2025-12-10T15:24:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:39 crc kubenswrapper[4755]: I1210 15:24:39.937944 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:39 crc kubenswrapper[4755]: I1210 15:24:39.937980 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:39 crc kubenswrapper[4755]: I1210 15:24:39.937990 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:39 crc kubenswrapper[4755]: I1210 15:24:39.938005 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:39 crc kubenswrapper[4755]: I1210 15:24:39.938017 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:39Z","lastTransitionTime":"2025-12-10T15:24:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:40 crc kubenswrapper[4755]: I1210 15:24:40.041295 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:40 crc kubenswrapper[4755]: I1210 15:24:40.041360 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:40 crc kubenswrapper[4755]: I1210 15:24:40.041371 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:40 crc kubenswrapper[4755]: I1210 15:24:40.041388 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:40 crc kubenswrapper[4755]: I1210 15:24:40.041400 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:40Z","lastTransitionTime":"2025-12-10T15:24:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:40 crc kubenswrapper[4755]: I1210 15:24:40.143267 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:40 crc kubenswrapper[4755]: I1210 15:24:40.143301 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:40 crc kubenswrapper[4755]: I1210 15:24:40.143313 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:40 crc kubenswrapper[4755]: I1210 15:24:40.143329 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:40 crc kubenswrapper[4755]: I1210 15:24:40.143340 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:40Z","lastTransitionTime":"2025-12-10T15:24:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:40 crc kubenswrapper[4755]: I1210 15:24:40.245618 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:40 crc kubenswrapper[4755]: I1210 15:24:40.245667 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:40 crc kubenswrapper[4755]: I1210 15:24:40.245682 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:40 crc kubenswrapper[4755]: I1210 15:24:40.245702 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:40 crc kubenswrapper[4755]: I1210 15:24:40.245717 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:40Z","lastTransitionTime":"2025-12-10T15:24:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:40 crc kubenswrapper[4755]: I1210 15:24:40.348309 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:40 crc kubenswrapper[4755]: I1210 15:24:40.348346 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:40 crc kubenswrapper[4755]: I1210 15:24:40.348358 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:40 crc kubenswrapper[4755]: I1210 15:24:40.348373 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:40 crc kubenswrapper[4755]: I1210 15:24:40.348384 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:40Z","lastTransitionTime":"2025-12-10T15:24:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:40 crc kubenswrapper[4755]: I1210 15:24:40.450977 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:40 crc kubenswrapper[4755]: I1210 15:24:40.451027 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:40 crc kubenswrapper[4755]: I1210 15:24:40.451040 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:40 crc kubenswrapper[4755]: I1210 15:24:40.451057 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:40 crc kubenswrapper[4755]: I1210 15:24:40.451069 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:40Z","lastTransitionTime":"2025-12-10T15:24:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:40 crc kubenswrapper[4755]: I1210 15:24:40.553175 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:40 crc kubenswrapper[4755]: I1210 15:24:40.553235 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:40 crc kubenswrapper[4755]: I1210 15:24:40.553247 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:40 crc kubenswrapper[4755]: I1210 15:24:40.553266 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:40 crc kubenswrapper[4755]: I1210 15:24:40.553277 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:40Z","lastTransitionTime":"2025-12-10T15:24:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:40 crc kubenswrapper[4755]: I1210 15:24:40.655151 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:40 crc kubenswrapper[4755]: I1210 15:24:40.655203 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:40 crc kubenswrapper[4755]: I1210 15:24:40.655214 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:40 crc kubenswrapper[4755]: I1210 15:24:40.655231 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:40 crc kubenswrapper[4755]: I1210 15:24:40.655244 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:40Z","lastTransitionTime":"2025-12-10T15:24:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:40 crc kubenswrapper[4755]: I1210 15:24:40.758625 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:40 crc kubenswrapper[4755]: I1210 15:24:40.758663 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:40 crc kubenswrapper[4755]: I1210 15:24:40.758671 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:40 crc kubenswrapper[4755]: I1210 15:24:40.758708 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:40 crc kubenswrapper[4755]: I1210 15:24:40.758719 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:40Z","lastTransitionTime":"2025-12-10T15:24:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:40 crc kubenswrapper[4755]: I1210 15:24:40.860910 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:40 crc kubenswrapper[4755]: I1210 15:24:40.860949 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:40 crc kubenswrapper[4755]: I1210 15:24:40.860958 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:40 crc kubenswrapper[4755]: I1210 15:24:40.860973 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:40 crc kubenswrapper[4755]: I1210 15:24:40.860982 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:40Z","lastTransitionTime":"2025-12-10T15:24:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:40 crc kubenswrapper[4755]: I1210 15:24:40.963143 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:40 crc kubenswrapper[4755]: I1210 15:24:40.963179 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:40 crc kubenswrapper[4755]: I1210 15:24:40.963190 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:40 crc kubenswrapper[4755]: I1210 15:24:40.963205 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:40 crc kubenswrapper[4755]: I1210 15:24:40.963217 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:40Z","lastTransitionTime":"2025-12-10T15:24:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:41 crc kubenswrapper[4755]: I1210 15:24:41.065934 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:41 crc kubenswrapper[4755]: I1210 15:24:41.065988 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:41 crc kubenswrapper[4755]: I1210 15:24:41.065998 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:41 crc kubenswrapper[4755]: I1210 15:24:41.066013 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:41 crc kubenswrapper[4755]: I1210 15:24:41.066022 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:41Z","lastTransitionTime":"2025-12-10T15:24:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:41 crc kubenswrapper[4755]: I1210 15:24:41.169063 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:41 crc kubenswrapper[4755]: I1210 15:24:41.169105 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:41 crc kubenswrapper[4755]: I1210 15:24:41.169145 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:41 crc kubenswrapper[4755]: I1210 15:24:41.169162 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:41 crc kubenswrapper[4755]: I1210 15:24:41.169176 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:41Z","lastTransitionTime":"2025-12-10T15:24:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:41 crc kubenswrapper[4755]: I1210 15:24:41.272907 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:41 crc kubenswrapper[4755]: I1210 15:24:41.272972 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:41 crc kubenswrapper[4755]: I1210 15:24:41.272984 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:41 crc kubenswrapper[4755]: I1210 15:24:41.272998 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:41 crc kubenswrapper[4755]: I1210 15:24:41.273006 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:41Z","lastTransitionTime":"2025-12-10T15:24:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:41 crc kubenswrapper[4755]: I1210 15:24:41.376189 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:41 crc kubenswrapper[4755]: I1210 15:24:41.376256 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:41 crc kubenswrapper[4755]: I1210 15:24:41.376266 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:41 crc kubenswrapper[4755]: I1210 15:24:41.376281 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:41 crc kubenswrapper[4755]: I1210 15:24:41.376290 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:41Z","lastTransitionTime":"2025-12-10T15:24:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:41 crc kubenswrapper[4755]: I1210 15:24:41.478422 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:41 crc kubenswrapper[4755]: I1210 15:24:41.478450 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:41 crc kubenswrapper[4755]: I1210 15:24:41.478484 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:41 crc kubenswrapper[4755]: I1210 15:24:41.478507 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:41 crc kubenswrapper[4755]: I1210 15:24:41.478539 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:41Z","lastTransitionTime":"2025-12-10T15:24:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:41 crc kubenswrapper[4755]: I1210 15:24:41.580742 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:41 crc kubenswrapper[4755]: I1210 15:24:41.580897 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:41 crc kubenswrapper[4755]: I1210 15:24:41.580911 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:41 crc kubenswrapper[4755]: I1210 15:24:41.580927 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:41 crc kubenswrapper[4755]: I1210 15:24:41.580939 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:41Z","lastTransitionTime":"2025-12-10T15:24:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:41 crc kubenswrapper[4755]: I1210 15:24:41.683646 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:41 crc kubenswrapper[4755]: I1210 15:24:41.683679 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:41 crc kubenswrapper[4755]: I1210 15:24:41.683689 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:41 crc kubenswrapper[4755]: I1210 15:24:41.683705 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:41 crc kubenswrapper[4755]: I1210 15:24:41.683715 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:41Z","lastTransitionTime":"2025-12-10T15:24:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:41 crc kubenswrapper[4755]: I1210 15:24:41.757625 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 15:24:41 crc kubenswrapper[4755]: I1210 15:24:41.757673 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 15:24:41 crc kubenswrapper[4755]: I1210 15:24:41.757635 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 15:24:41 crc kubenswrapper[4755]: E1210 15:24:41.757745 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 15:24:41 crc kubenswrapper[4755]: E1210 15:24:41.757896 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 15:24:41 crc kubenswrapper[4755]: I1210 15:24:41.757974 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5ctz" Dec 10 15:24:41 crc kubenswrapper[4755]: E1210 15:24:41.758296 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 15:24:41 crc kubenswrapper[4755]: I1210 15:24:41.758509 4755 scope.go:117] "RemoveContainer" containerID="807b21c925067bde1df3babdc35da6c8805e10be110c0282376dbd6e8be69dcb" Dec 10 15:24:41 crc kubenswrapper[4755]: E1210 15:24:41.758502 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5ctz" podUID="17673130-8212-4f8f-8859-92774f0ee202" Dec 10 15:24:41 crc kubenswrapper[4755]: I1210 15:24:41.786323 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:41 crc kubenswrapper[4755]: I1210 15:24:41.786365 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:41 crc kubenswrapper[4755]: I1210 15:24:41.786380 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:41 crc kubenswrapper[4755]: I1210 15:24:41.786397 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:41 crc kubenswrapper[4755]: I1210 15:24:41.786408 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:41Z","lastTransitionTime":"2025-12-10T15:24:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:41 crc kubenswrapper[4755]: I1210 15:24:41.888251 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:41 crc kubenswrapper[4755]: I1210 15:24:41.888287 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:41 crc kubenswrapper[4755]: I1210 15:24:41.888295 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:41 crc kubenswrapper[4755]: I1210 15:24:41.888310 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:41 crc kubenswrapper[4755]: I1210 15:24:41.888319 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:41Z","lastTransitionTime":"2025-12-10T15:24:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:41 crc kubenswrapper[4755]: I1210 15:24:41.990954 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:41 crc kubenswrapper[4755]: I1210 15:24:41.991207 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:41 crc kubenswrapper[4755]: I1210 15:24:41.991274 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:41 crc kubenswrapper[4755]: I1210 15:24:41.991337 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:41 crc kubenswrapper[4755]: I1210 15:24:41.991392 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:41Z","lastTransitionTime":"2025-12-10T15:24:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:42 crc kubenswrapper[4755]: I1210 15:24:42.094448 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:42 crc kubenswrapper[4755]: I1210 15:24:42.094859 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:42 crc kubenswrapper[4755]: I1210 15:24:42.094978 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:42 crc kubenswrapper[4755]: I1210 15:24:42.095086 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:42 crc kubenswrapper[4755]: I1210 15:24:42.095180 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:42Z","lastTransitionTime":"2025-12-10T15:24:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:42 crc kubenswrapper[4755]: I1210 15:24:42.198334 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:42 crc kubenswrapper[4755]: I1210 15:24:42.198406 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:42 crc kubenswrapper[4755]: I1210 15:24:42.198425 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:42 crc kubenswrapper[4755]: I1210 15:24:42.198450 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:42 crc kubenswrapper[4755]: I1210 15:24:42.198493 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:42Z","lastTransitionTime":"2025-12-10T15:24:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:42 crc kubenswrapper[4755]: I1210 15:24:42.300565 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:42 crc kubenswrapper[4755]: I1210 15:24:42.300802 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:42 crc kubenswrapper[4755]: I1210 15:24:42.300832 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:42 crc kubenswrapper[4755]: I1210 15:24:42.300860 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:42 crc kubenswrapper[4755]: I1210 15:24:42.300878 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:42Z","lastTransitionTime":"2025-12-10T15:24:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:42 crc kubenswrapper[4755]: I1210 15:24:42.403448 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:42 crc kubenswrapper[4755]: I1210 15:24:42.403537 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:42 crc kubenswrapper[4755]: I1210 15:24:42.403548 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:42 crc kubenswrapper[4755]: I1210 15:24:42.403563 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:42 crc kubenswrapper[4755]: I1210 15:24:42.403574 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:42Z","lastTransitionTime":"2025-12-10T15:24:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:42 crc kubenswrapper[4755]: I1210 15:24:42.506806 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:42 crc kubenswrapper[4755]: I1210 15:24:42.507166 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:42 crc kubenswrapper[4755]: I1210 15:24:42.507583 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:42 crc kubenswrapper[4755]: I1210 15:24:42.507981 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:42 crc kubenswrapper[4755]: I1210 15:24:42.508148 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:42Z","lastTransitionTime":"2025-12-10T15:24:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:42 crc kubenswrapper[4755]: I1210 15:24:42.610987 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:42 crc kubenswrapper[4755]: I1210 15:24:42.611037 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:42 crc kubenswrapper[4755]: I1210 15:24:42.611052 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:42 crc kubenswrapper[4755]: I1210 15:24:42.611072 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:42 crc kubenswrapper[4755]: I1210 15:24:42.611086 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:42Z","lastTransitionTime":"2025-12-10T15:24:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:42 crc kubenswrapper[4755]: I1210 15:24:42.714302 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:42 crc kubenswrapper[4755]: I1210 15:24:42.714823 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:42 crc kubenswrapper[4755]: I1210 15:24:42.714897 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:42 crc kubenswrapper[4755]: I1210 15:24:42.714972 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:42 crc kubenswrapper[4755]: I1210 15:24:42.715036 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:42Z","lastTransitionTime":"2025-12-10T15:24:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:42 crc kubenswrapper[4755]: I1210 15:24:42.817936 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:42 crc kubenswrapper[4755]: I1210 15:24:42.817993 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:42 crc kubenswrapper[4755]: I1210 15:24:42.818011 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:42 crc kubenswrapper[4755]: I1210 15:24:42.818031 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:42 crc kubenswrapper[4755]: I1210 15:24:42.818046 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:42Z","lastTransitionTime":"2025-12-10T15:24:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:42 crc kubenswrapper[4755]: I1210 15:24:42.922235 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:42 crc kubenswrapper[4755]: I1210 15:24:42.922276 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:42 crc kubenswrapper[4755]: I1210 15:24:42.922289 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:42 crc kubenswrapper[4755]: I1210 15:24:42.922310 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:42 crc kubenswrapper[4755]: I1210 15:24:42.922324 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:42Z","lastTransitionTime":"2025-12-10T15:24:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:43 crc kubenswrapper[4755]: I1210 15:24:43.024839 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:43 crc kubenswrapper[4755]: I1210 15:24:43.024873 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:43 crc kubenswrapper[4755]: I1210 15:24:43.024882 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:43 crc kubenswrapper[4755]: I1210 15:24:43.024896 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:43 crc kubenswrapper[4755]: I1210 15:24:43.024906 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:43Z","lastTransitionTime":"2025-12-10T15:24:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:43 crc kubenswrapper[4755]: I1210 15:24:43.127552 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:43 crc kubenswrapper[4755]: I1210 15:24:43.127596 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:43 crc kubenswrapper[4755]: I1210 15:24:43.127606 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:43 crc kubenswrapper[4755]: I1210 15:24:43.127620 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:43 crc kubenswrapper[4755]: I1210 15:24:43.127631 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:43Z","lastTransitionTime":"2025-12-10T15:24:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:43 crc kubenswrapper[4755]: I1210 15:24:43.229795 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:43 crc kubenswrapper[4755]: I1210 15:24:43.229846 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:43 crc kubenswrapper[4755]: I1210 15:24:43.229857 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:43 crc kubenswrapper[4755]: I1210 15:24:43.229877 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:43 crc kubenswrapper[4755]: I1210 15:24:43.229891 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:43Z","lastTransitionTime":"2025-12-10T15:24:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:43 crc kubenswrapper[4755]: I1210 15:24:43.240605 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6lfvk_4b1da51a-99c9-4f8e-920d-ce0973af6370/ovnkube-controller/2.log" Dec 10 15:24:43 crc kubenswrapper[4755]: I1210 15:24:43.243002 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" event={"ID":"4b1da51a-99c9-4f8e-920d-ce0973af6370","Type":"ContainerStarted","Data":"0386a60f9d2d9c0cec943720b300e0cd71348b81b74234f19f1c51d34142b089"} Dec 10 15:24:43 crc kubenswrapper[4755]: I1210 15:24:43.332888 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:43 crc kubenswrapper[4755]: I1210 15:24:43.332935 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:43 crc kubenswrapper[4755]: I1210 15:24:43.332944 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:43 crc kubenswrapper[4755]: I1210 15:24:43.332959 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:43 crc kubenswrapper[4755]: I1210 15:24:43.332970 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:43Z","lastTransitionTime":"2025-12-10T15:24:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:43 crc kubenswrapper[4755]: I1210 15:24:43.435008 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:43 crc kubenswrapper[4755]: I1210 15:24:43.435044 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:43 crc kubenswrapper[4755]: I1210 15:24:43.435054 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:43 crc kubenswrapper[4755]: I1210 15:24:43.435068 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:43 crc kubenswrapper[4755]: I1210 15:24:43.435080 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:43Z","lastTransitionTime":"2025-12-10T15:24:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:43 crc kubenswrapper[4755]: I1210 15:24:43.537457 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:43 crc kubenswrapper[4755]: I1210 15:24:43.537495 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:43 crc kubenswrapper[4755]: I1210 15:24:43.537505 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:43 crc kubenswrapper[4755]: I1210 15:24:43.537519 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:43 crc kubenswrapper[4755]: I1210 15:24:43.537529 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:43Z","lastTransitionTime":"2025-12-10T15:24:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:43 crc kubenswrapper[4755]: I1210 15:24:43.640118 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:43 crc kubenswrapper[4755]: I1210 15:24:43.640158 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:43 crc kubenswrapper[4755]: I1210 15:24:43.640171 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:43 crc kubenswrapper[4755]: I1210 15:24:43.640189 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:43 crc kubenswrapper[4755]: I1210 15:24:43.640199 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:43Z","lastTransitionTime":"2025-12-10T15:24:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:43 crc kubenswrapper[4755]: I1210 15:24:43.743838 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:43 crc kubenswrapper[4755]: I1210 15:24:43.743891 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:43 crc kubenswrapper[4755]: I1210 15:24:43.743941 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:43 crc kubenswrapper[4755]: I1210 15:24:43.743990 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:43 crc kubenswrapper[4755]: I1210 15:24:43.744003 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:43Z","lastTransitionTime":"2025-12-10T15:24:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:43 crc kubenswrapper[4755]: I1210 15:24:43.757281 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 15:24:43 crc kubenswrapper[4755]: I1210 15:24:43.757337 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5ctz" Dec 10 15:24:43 crc kubenswrapper[4755]: I1210 15:24:43.757297 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 15:24:43 crc kubenswrapper[4755]: E1210 15:24:43.757424 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 15:24:43 crc kubenswrapper[4755]: I1210 15:24:43.757680 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 15:24:43 crc kubenswrapper[4755]: E1210 15:24:43.757864 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 15:24:43 crc kubenswrapper[4755]: E1210 15:24:43.757802 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5ctz" podUID="17673130-8212-4f8f-8859-92774f0ee202" Dec 10 15:24:43 crc kubenswrapper[4755]: E1210 15:24:43.757675 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 15:24:43 crc kubenswrapper[4755]: I1210 15:24:43.769230 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-82wnw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0a2f42c-a60e-4350-9ebb-c28d3cbfdad7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7af643111a7a5d1d78b0412b1621b5ffac6389760ea9190e26e9e5d1704eed4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xbzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://915c918562f8a69a28cbcf29e427bfdd94477193b98d5a2da9ad45cb4b44fa03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xbzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:56Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-82wnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:43Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:43 crc kubenswrapper[4755]: I1210 15:24:43.785870 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aed6bb9-78c2-410b-9c58-b60ab22a7bd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70d014e7227746c46b30f8f5a1f307a422d2fa0d4b98d98bfe5ba6217223489e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://104d730ea666ffd2d639ba7fc8d1ac5b444586788a62656fd260cb249f82b1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09dbec85547c7170bb9551e5657876814d48528e3047daf3547711a563d2b6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8bf1de0d8bdc0a20bc42ba5097d849ae0f507e5fbc18fb17b4ff3650e46ff0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:43Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:43 crc kubenswrapper[4755]: I1210 15:24:43.804199 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n8c6s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b56ae78-835a-45da-bc46-5adff2bdf9fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8957db8ccef7b3c449920471d345aa81ce9a7ab9be36b2350ec428aebec7ac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b9a1485f6d0c0e53fd3505623b2788476bc00abea2f11650386eac40e449bcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9a1485f6d0c0e53fd3505623b2788476bc00abea2f11650386eac40e449bcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a532015d196c588ad3dd6c43d9fa3772ba2a42bd1b2231f830b60adb0addab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a532015d196c588ad3dd6c43d9fa3772ba2a42bd1b2231f830b60adb0addab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://417e326f541e04df27d540c14ec572bb8f1d749a9e3dc88f483241d69f7d0679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://417e326f541e04df27d540c14ec572bb8f1d749a9e3dc88f483241d69f7d0679\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42867f7a1f6aceaf708ad650b7634bdafb3f850872c540d424916440014e7941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42867f7a1f6aceaf708ad650b7634bdafb3f850872c540d424916440014e7941\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f2d22bfc26958a3aad99de39f5c831d36c1ba4f563851baafc73735e80d5c91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f2d22bfc26958a3aad99de39f5c831d36c1ba4f563851baafc73735e80d5c91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://548d326fc5c14c16176d4fff3446d93c17322c11d2a34e9e42376b665de3848d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://548d326fc5c14c16176d4fff3446d93c17322c11d2a34e9e42376b665de3848d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n8c6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:43Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:43 crc kubenswrapper[4755]: I1210 15:24:43.814891 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qnmst" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1693fcf1-bef4-4f82-8dd8-f1797b03f5e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8fd6a2ab2b9557574951c0ebbccd663fe576262b0de2c3c655427c977f62d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hzdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qnmst\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:43Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:43 crc kubenswrapper[4755]: I1210 15:24:43.827507 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wv8fh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d150a22e-c59a-4376-a5c8-db4085ea0be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58d106c5d1b9525ec821d009ca556449cd8d7f0e1b9c8ec7dd969df996e73625\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfw9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wv8fh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:43Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:43 crc kubenswrapper[4755]: I1210 15:24:43.839075 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q5ctz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17673130-8212-4f8f-8859-92774f0ee202\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcscn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcscn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q5ctz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:43Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:43 crc kubenswrapper[4755]: I1210 15:24:43.847041 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:43 crc kubenswrapper[4755]: I1210 15:24:43.847095 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:43 crc kubenswrapper[4755]: I1210 15:24:43.847107 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:43 crc kubenswrapper[4755]: I1210 15:24:43.847124 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:43 crc kubenswrapper[4755]: I1210 15:24:43.847136 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:43Z","lastTransitionTime":"2025-12-10T15:24:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:43 crc kubenswrapper[4755]: I1210 15:24:43.860407 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fa03060-a6e0-4aad-9aa1-43b1a0d00c85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a823333ed5cb5de3988d25e50e4b7a0f9071c76fb39c22760a4f2acd5eb455d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d828f3f3b90a2dcb1c1908e6a686368af5b0d715b3251e4b8fcf3c8818ec75a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://172ab3fce08c8ba4095ff4095c89364a778644752bd7bb6c178d6e3ebcface69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfbb8e350ad18b78a6bcf6cfa4eb8f2fad90f970dba08a8b1b2026af6f255e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4b2bf0c88a16fd6b4b45a20730a92292895dc9f29ce756d347c302b25a8612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55da309288bc2c7ff1f22da0569d18155796700c200fe3ed508f6f8179756865\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55da309288bc2c7ff1f22da0569d18155796700c200fe3ed508f6f8179756865\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219c531368bb5379f3726d94ab0afc0f33eedbf91b0a4dbbe0757c6e232f79ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://219c531368bb5379f3726d94ab0afc0f33eedbf91b0a4dbbe0757c6e232f79ff\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f9da8f3d4f85ec83c16b71ecd983ff4f48049eb8e73461a2b46c4f66d71ed06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9da8f3d4f85ec83c16b71ecd983ff4f48049eb8e73461a2b46c4f66d71ed06e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:43Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:43 crc kubenswrapper[4755]: I1210 15:24:43.872873 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:43Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:43 crc kubenswrapper[4755]: I1210 15:24:43.886493 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c371b199c3643a68ea5eba935eb76a1b7e8a4027c9292a1324116ccfc14742ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:43Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:43 crc kubenswrapper[4755]: I1210 15:24:43.908412 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b1da51a-99c9-4f8e-920d-ce0973af6370\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6eb065dc6c0cc8914cb95553eb2683d894fb9a4e78ce7fac73bcce8d7f6cced9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e547993b9f2fa37bf924f909c47b62eb0cc02b659596b1cad9bbc42fdde8f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ba47683cc23d5b531a45f0658b6a9378650400b35b5372642b0430a5ac503f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a75407e83508af9adebb09c6466a966dd791d29f690c539656f9bd3396d7031\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://602b4e49987fa2cc6b54b822110aececbdddaf2bce8f27cce4ed906768d45791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://335bcab3a79f09796e97560365e1211fb30ddf288f4773c05ab353197add4365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://807b21c925067bde1df3babdc35da6c8805e10be110c0282376dbd6e8be69dcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://807b21c925067bde1df3babdc35da6c8805e10be110c0282376dbd6e8be69dcb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T15:24:13Z\\\",\\\"message\\\":\\\"s, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI1210 15:24:12.549599 6361 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1210 15:24:12.549769 6361 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1210 15:24:12.549565 6361 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-zl2tx in node crc\\\\nI1210 15:24:12.549780 6361 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-zl2tx after 0 failed attempt(s)\\\\nI1210 15:24:12.549784 6361 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-zl2tx\\\\nI1210 15:24:12.549311 6361 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-apiserver/apiserver_TCP_cluster\\\\\\\", UUID:\\\\\\\"d71b38eb-32af-4c0f-9490-7c317c111e3a\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-apiserver/apiserver\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]st\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T15:24:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6lfvk_openshift-ovn-kubernetes(4b1da51a-99c9-4f8e-920d-ce0973af6370)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59bf59d1b7fbc365a916fbbceca7ae30b7ebc754b34f2f7a34c2e21e1e1d2166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://375efb27cf6bef06a6a0ffc8f6b2963e1b9ffbca18b3c8e7b402bc552a411f6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://375efb27cf6bef06a6a0ffc8f6b2963e1b9ffbca18b3c8e7b402bc552a411f6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6lfvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:43Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:43 crc kubenswrapper[4755]: I1210 15:24:43.917928 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc57cd7-2dd9-45cb-bc66-4c0d7a0cf043\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7446bf39567cbd3f5a0a9a1252748e01144968fd0a67b39d8c32b326a13dec38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c0f838adfa6e512d9529c4999cc194d978276c8736ab
ccd0ef75d5b7e9b6e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34c0f838adfa6e512d9529c4999cc194d978276c8736abccd0ef75d5b7e9b6e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:43Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:43 crc kubenswrapper[4755]: I1210 15:24:43.975348 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:43 crc kubenswrapper[4755]: I1210 15:24:43.975409 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:43 crc kubenswrapper[4755]: I1210 15:24:43.975422 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:43 crc kubenswrapper[4755]: I1210 15:24:43.975441 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:43 crc kubenswrapper[4755]: I1210 15:24:43.975478 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:43Z","lastTransitionTime":"2025-12-10T15:24:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:43 crc kubenswrapper[4755]: I1210 15:24:43.980845 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e6db97-7e22-4fec-924e-20e90f463887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef3871ddc16d2fc1b89a6f120390cf066424bd9ed4cca98f8dea4430c40c6c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08895c3ab20248c4bd70ccfae6442bdf8e76a602e605b043f0f00166624ada17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b2467645116856afa1e1e37501c7b453b03da193a96d1075886b85839a86ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16e58653dca08c9b33671279553975bca9507ffeeefea161119cec624813f167\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea18ef6c319336321d97c2a55449903b837b5a47da785da3ff5308244751943\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:43Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:44 crc kubenswrapper[4755]: I1210 15:24:44.002217 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:43Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:44 crc kubenswrapper[4755]: I1210 15:24:44.023767 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:44Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:44 crc kubenswrapper[4755]: I1210 15:24:44.043949 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zl2tx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"796da6d5-6ccd-4786-a03e-9a8e47a55031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e0f974f9ba614dcaef08cf7168b77eeee007dfe65cc4e32df9b8e45005ff4ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de63a123c46563bd8cd07e669d192bc8b019a889b9bdb7af1b988872c8f1fc48\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T15:24:30Z\\\",\\\"message\\\":\\\"2025-12-10T15:23:45+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5295941e-ea46-4d40-a35d-bd2fd673e118\\\\n2025-12-10T15:23:45+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5295941e-ea46-4d40-a35d-bd2fd673e118 to 
/host/opt/cni/bin/\\\\n2025-12-10T15:23:45Z [verbose] multus-daemon started\\\\n2025-12-10T15:23:45Z [verbose] Readiness Indicator file check\\\\n2025-12-10T15:24:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:24:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzg4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zl2tx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:44Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:44 crc kubenswrapper[4755]: I1210 15:24:44.054519 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b132a8b9-1c99-414d-8773-229bf36b305d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5a5a59e9f156fb791ec822c2d5efe3fc6ec0e84bfcb2b6f5da81396951984c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2mdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40c4bdaa23a60a665b8f565720d79b68cac62d40246be94fc6cd314b1bb3656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2mdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ggt8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:44Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:44 crc kubenswrapper[4755]: I1210 15:24:44.065312 4755 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81a321f6-db4b-46c6-bf24-1ad62da5992b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e68926ec899dad3c070810697fb078955e202f270724638719a7ea21d2debcb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://728ff8c02e5b0bea5514375b60c7a66f025e3c2a7163f0753c65d914b7873531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://865537b06aba1b617a1a16ef38c6b5be072501d1bbbd69e14eaaf2ce76c6b1da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://54e0e014ecb31c499da20621a8b9e2ad6fdbdc2170a93636c828082b5b63b683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e0e014ecb31c499da20621a8b9e2ad6fdbdc2170a93636c828082b5b63b683\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:44Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:44 crc kubenswrapper[4755]: I1210 15:24:44.076666 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d089a8bddfa1f80d29011c7a6bab0a300f7dd44bdb2864f86951ebbb9ebea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:44Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:44 crc kubenswrapper[4755]: I1210 15:24:44.078017 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:44 crc kubenswrapper[4755]: I1210 15:24:44.078052 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:44 crc kubenswrapper[4755]: I1210 15:24:44.078064 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:44 crc kubenswrapper[4755]: I1210 15:24:44.078081 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:44 crc kubenswrapper[4755]: I1210 15:24:44.078091 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:44Z","lastTransitionTime":"2025-12-10T15:24:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:44 crc kubenswrapper[4755]: I1210 15:24:44.088684 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01ddc4056319db1f69268dcae192c3cb9db6c25284305803ae7588e59f77c346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae6bf174cdc9bde18d7c959e976454e73c1e67642f0158d365b79582f63f3cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa4
1ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:44Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:44 crc kubenswrapper[4755]: I1210 15:24:44.181054 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:44 crc kubenswrapper[4755]: I1210 15:24:44.181093 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:44 crc kubenswrapper[4755]: I1210 15:24:44.181104 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:44 crc kubenswrapper[4755]: I1210 15:24:44.181121 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:44 crc kubenswrapper[4755]: I1210 15:24:44.181133 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:44Z","lastTransitionTime":"2025-12-10T15:24:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:44 crc kubenswrapper[4755]: I1210 15:24:44.247489 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" Dec 10 15:24:44 crc kubenswrapper[4755]: I1210 15:24:44.258599 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q5ctz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17673130-8212-4f8f-8859-92774f0ee202\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcscn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcscn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q5ctz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:44Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:44 crc kubenswrapper[4755]: I1210 15:24:44.280842 4755 
status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fa03060-a6e0-4aad-9aa1-43b1a0d00c85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a823333ed5cb5de3988d25e50e4b7a0f9071c76fb39c22760a4f2acd5eb455d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d828f3f3b90a2dcb1c1908e6a686368af5b0d715b3251e4b8fcf3c8818ec75a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://172ab3fce08c8ba4095ff4095c89364a778644752bd7bb6c178d6e3ebcface69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-
certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfbb8e350ad18b78a6bcf6cfa4eb8f2fad90f970dba08a8b1b2026af6f255e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4b2bf0c88a16fd6b4b45a20730a92292895dc9f29ce756d347c302b25a8612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55da309288bc2c7ff1f22da0569d18155796700c200fe3ed508f6f8179756865\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55da309288bc2c7ff1f22da0569d18155796700c200fe3ed508f6f8179756865\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219c531368bb5379f3726d94ab0afc0f33eedbf91b0a4dbbe0757c6e232f79ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://219c531368bb5379f3726d94ab0afc0f33eedbf91b0a4dbbe0757c6e232f79ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:
23:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f9da8f3d4f85ec83c16b71ecd983ff4f48049eb8e73461a2b46c4f66d71ed06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9da8f3d4f85ec83c16b71ecd983ff4f48049eb8e73461a2b46c4f66d71ed06e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:44Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:44 crc kubenswrapper[4755]: I1210 15:24:44.284423 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:44 crc kubenswrapper[4755]: I1210 15:24:44.284485 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:44 crc kubenswrapper[4755]: I1210 15:24:44.284496 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:44 crc kubenswrapper[4755]: I1210 15:24:44.284517 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:44 crc kubenswrapper[4755]: I1210 15:24:44.284529 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:44Z","lastTransitionTime":"2025-12-10T15:24:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:44 crc kubenswrapper[4755]: I1210 15:24:44.294341 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:44Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:44 crc kubenswrapper[4755]: I1210 15:24:44.306253 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c371b199c3643a68ea5eba935eb76a1b7e8a4027c9292a1324116ccfc14742ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:44Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:44 crc kubenswrapper[4755]: I1210 15:24:44.317166 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wv8fh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d150a22e-c59a-4376-a5c8-db4085ea0be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58d106c5d1b9525ec821d009ca556449cd8d7f0e1b9c8ec7dd969df996e73625\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfw9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wv8fh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:44Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:44 crc kubenswrapper[4755]: I1210 15:24:44.327653 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc57cd7-2dd9-45cb-bc66-4c0d7a0cf043\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7446bf39567cbd3f5a0a9a1252748e01144968fd0a67b39d8c32b326a13dec38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c0f838adfa6e512d9529c4999cc194d978276c8736abccd0ef75d5b7e9b6e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34c0f838adfa6e512d9529c4999cc194d978276c8736abccd0ef75d5b7e9b6e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:44Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:44 crc kubenswrapper[4755]: I1210 15:24:44.342220 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e6db97-7e22-4fec-924e-20e90f463887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef3871ddc16d2fc1b89a6f120390cf066424bd9ed4cca98f8dea4430c40c6c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08895c3ab20248c4bd70ccfae6442bdf8e76a602e605b043f0f00166624ada17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b2467645116856afa1e1e37501c7b453b03da193a96d1075886b85839a86ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16e58653dca08c9b33671279553975bca9507ffeeefea161119cec624813f167\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea18ef6c319336321d97c2a55449903b837b5a47da785da3ff5308244751943\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:44Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:44 crc kubenswrapper[4755]: I1210 15:24:44.356482 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:44Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:44 crc kubenswrapper[4755]: I1210 15:24:44.376918 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b1da51a-99c9-4f8e-920d-ce0973af6370\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6eb065dc6c0cc8914cb95553eb2683d894fb9a4e78ce7fac73bcce8d7f6cced9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e547993b9f2fa37bf924f909c47b62eb0cc02b659596b1cad9bbc42fdde8f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ba47683cc23d5b531a45f0658b6a9378650400b35b5372642b0430a5ac503f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a75407e83508af9adebb09c6466a966dd791d29f690c539656f9bd3396d7031\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://602b4e49987fa2cc6b54b822110aececbdddaf2bce8f27cce4ed906768d45791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://335bcab3a79f09796e97560365e1211fb30ddf288f4773c05ab353197add4365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0386a60f9d2d9c0cec943720b300e0cd71348b81
b74234f19f1c51d34142b089\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://807b21c925067bde1df3babdc35da6c8805e10be110c0282376dbd6e8be69dcb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T15:24:13Z\\\",\\\"message\\\":\\\"s, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI1210 15:24:12.549599 6361 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1210 15:24:12.549769 6361 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1210 15:24:12.549565 6361 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-zl2tx in node crc\\\\nI1210 15:24:12.549780 6361 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-zl2tx after 0 failed attempt(s)\\\\nI1210 15:24:12.549784 6361 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-zl2tx\\\\nI1210 15:24:12.549311 6361 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-apiserver/apiserver_TCP_cluster\\\\\\\", UUID:\\\\\\\"d71b38eb-32af-4c0f-9490-7c317c111e3a\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-apiserver/apiserver\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, 
Routers:[]st\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T15:24:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59bf59d1b7fbc365a916fbbceca7ae30b7ebc754b34f2f7a34c2e21e1e1d2166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\"
:[{\\\"containerID\\\":\\\"cri-o://375efb27cf6bef06a6a0ffc8f6b2963e1b9ffbca18b3c8e7b402bc552a411f6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://375efb27cf6bef06a6a0ffc8f6b2963e1b9ffbca18b3c8e7b402bc552a411f6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6lfvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:44Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:44 crc kubenswrapper[4755]: I1210 15:24:44.387674 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:44 crc kubenswrapper[4755]: I1210 15:24:44.387716 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:44 crc kubenswrapper[4755]: I1210 15:24:44.387728 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:44 crc kubenswrapper[4755]: I1210 15:24:44.387745 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:44 crc kubenswrapper[4755]: I1210 15:24:44.387759 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:44Z","lastTransitionTime":"2025-12-10T15:24:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:44 crc kubenswrapper[4755]: I1210 15:24:44.392130 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zl2tx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"796da6d5-6ccd-4786-a03e-9a8e47a55031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e0f974f9ba614dcaef08cf7168b77eeee007dfe65cc4e32df9b8e45005ff4ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de63a123c46563bd8cd07e669d192bc8b019a889b9bdb7af1b988872c8f1fc48\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T15:24:30Z\\\",\\\"message\\\":\\\"2025-12-10T15:23:45+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5295941e-ea46-4d40-a35d-bd2fd673e118\\\\n2025-12-10T15:23:45+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5295941e-ea46-4d40-a35d-bd2fd673e118 to /host/opt/cni/bin/\\\\n2025-12-10T15:23:45Z [verbose] multus-daemon started\\\\n2025-12-10T15:23:45Z [verbose] Readiness Indicator file check\\\\n2025-12-10T15:24:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:24:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzg4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zl2tx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:44Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:44 crc kubenswrapper[4755]: I1210 15:24:44.403154 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b132a8b9-1c99-414d-8773-229bf36b305d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5a5a59e9f156fb791ec822c2d5efe3fc6ec0e84bfcb2b6f5da81396951984c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2mdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40c4bdaa23a60a665b8f565720d79b68cac62d40246be94fc6cd314b1bb3656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2mdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ggt8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:44Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:44 crc kubenswrapper[4755]: I1210 15:24:44.417628 4755 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81a321f6-db4b-46c6-bf24-1ad62da5992b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e68926ec899dad3c070810697fb078955e202f270724638719a7ea21d2debcb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://728ff8c02e5b0bea5514375b60c7a66f025e3c2a7163f0753c65d914b7873531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://865537b06aba1b617a1a16ef38c6b5be072501d1bbbd69e14eaaf2ce76c6b1da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://54e0e014ecb31c499da20621a8b9e2ad6fdbdc2170a93636c828082b5b63b683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e0e014ecb31c499da20621a8b9e2ad6fdbdc2170a93636c828082b5b63b683\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:44Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:44 crc kubenswrapper[4755]: I1210 15:24:44.431549 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d089a8bddfa1f80d29011c7a6bab0a300f7dd44bdb2864f86951ebbb9ebea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:44Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:44 crc kubenswrapper[4755]: I1210 15:24:44.446551 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01ddc4056319db1f69268dcae192c3cb9db6c25284305803ae7588e59f77c346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae6bf174cdc9bde18d7c959e976454e73c1e67642f0158d365b79582f63f3cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:44Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:44 crc kubenswrapper[4755]: I1210 15:24:44.457924 4755 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:44Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:44 crc kubenswrapper[4755]: I1210 15:24:44.470344 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aed6bb9-78c2-410b-9c58-b60ab22a7bd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70d014e7227746c46b30f8f5a1f307a422d2fa0d4b98d98bfe5ba6217223489e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://104d730ea666ffd2d639ba7fc8d1ac5b444586788a62656fd260cb249f82b1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09dbec85547c7170bb9551e5657876814d48528e3047daf3547711a563d2b6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8bf1de0d8bdc0a20bc42ba5097d849ae0f507e5fbc18fb17b4ff3650e46ff0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:44Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:44 crc kubenswrapper[4755]: I1210 15:24:44.486017 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n8c6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b56ae78-835a-45da-bc46-5adff2bdf9fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8957db8ccef7b3c449920471d345aa81ce9a7ab9be36b2350ec428aebec7ac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b9a1485f6d0c0e53fd3505623b2788476bc00abea2f11650386
eac40e449bcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9a1485f6d0c0e53fd3505623b2788476bc00abea2f11650386eac40e449bcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a532015d196c588ad3dd6c43d9fa3772ba2a42bd1b2231f830b60adb0addab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a532015d196c588ad3dd6c43d9fa3772ba2a42bd1b2231f830b60adb0addab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://417e326f541e04df27d540c14ec572bb8f1d749a9e3dc88f483241d69f7d0679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://417e326f541e04df27d540c14ec572bb8f1d749a9e3dc88f483241d69f7d0679\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42867f7a1f6aceaf708ad650b7634bdafb3f850872c540d424916440014e7941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42867f7a1f6aceaf708ad650b7634bdafb3f850872c540d424916440014e7941\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f2d22bfc26958a3aad99de39f5c831d36c1ba4f563851baafc73735e80d5c91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f2d22bfc26958a3aad99de39f5c831d36c1ba4f563851baafc73735e80d5c91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://548d326fc5c14c16176d4fff3446d93c17322c11d2a34e9e42376b665de3848d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://548d326fc5c14c16176d4fff3446d93c17322c11d2a34e9e42376b665de3848d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n8c6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:44Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:44 crc kubenswrapper[4755]: I1210 15:24:44.490002 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:44 crc kubenswrapper[4755]: I1210 15:24:44.490044 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:44 crc kubenswrapper[4755]: I1210 15:24:44.490055 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:44 crc kubenswrapper[4755]: I1210 15:24:44.490072 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:44 crc kubenswrapper[4755]: I1210 15:24:44.490085 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:44Z","lastTransitionTime":"2025-12-10T15:24:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:44 crc kubenswrapper[4755]: I1210 15:24:44.496488 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qnmst" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1693fcf1-bef4-4f82-8dd8-f1797b03f5e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8fd6a2ab2b9557574951c0ebbccd663fe576262b0de2c3c655427c977f62d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hzdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qnmst\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:44Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:44 crc kubenswrapper[4755]: I1210 15:24:44.509402 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-82wnw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0a2f42c-a60e-4350-9ebb-c28d3cbfdad7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7af643111a7a5d1d78b0412b1621b5ffac6389760ea9190e26e9e5d1704eed4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xbzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://915c918562f8a69a28cbcf29e427bfdd94477193b98d5a2da9ad45cb4b44fa03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xbzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-82wnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:44Z is after 2025-08-24T17:21:41Z" Dec 10 
15:24:44 crc kubenswrapper[4755]: I1210 15:24:44.592978 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:44 crc kubenswrapper[4755]: I1210 15:24:44.593260 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:44 crc kubenswrapper[4755]: I1210 15:24:44.593338 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:44 crc kubenswrapper[4755]: I1210 15:24:44.593429 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:44 crc kubenswrapper[4755]: I1210 15:24:44.593550 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:44Z","lastTransitionTime":"2025-12-10T15:24:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:44 crc kubenswrapper[4755]: I1210 15:24:44.697447 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:44 crc kubenswrapper[4755]: I1210 15:24:44.697504 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:44 crc kubenswrapper[4755]: I1210 15:24:44.697515 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:44 crc kubenswrapper[4755]: I1210 15:24:44.697533 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:44 crc kubenswrapper[4755]: I1210 15:24:44.697548 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:44Z","lastTransitionTime":"2025-12-10T15:24:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:44 crc kubenswrapper[4755]: I1210 15:24:44.800802 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:44 crc kubenswrapper[4755]: I1210 15:24:44.800862 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:44 crc kubenswrapper[4755]: I1210 15:24:44.800879 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:44 crc kubenswrapper[4755]: I1210 15:24:44.800915 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:44 crc kubenswrapper[4755]: I1210 15:24:44.800934 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:44Z","lastTransitionTime":"2025-12-10T15:24:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:44 crc kubenswrapper[4755]: I1210 15:24:44.904462 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:44 crc kubenswrapper[4755]: I1210 15:24:44.904559 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:44 crc kubenswrapper[4755]: I1210 15:24:44.904577 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:44 crc kubenswrapper[4755]: I1210 15:24:44.904602 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:44 crc kubenswrapper[4755]: I1210 15:24:44.904623 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:44Z","lastTransitionTime":"2025-12-10T15:24:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:45 crc kubenswrapper[4755]: I1210 15:24:45.007906 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:45 crc kubenswrapper[4755]: I1210 15:24:45.007977 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:45 crc kubenswrapper[4755]: I1210 15:24:45.008000 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:45 crc kubenswrapper[4755]: I1210 15:24:45.008029 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:45 crc kubenswrapper[4755]: I1210 15:24:45.008051 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:45Z","lastTransitionTime":"2025-12-10T15:24:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:45 crc kubenswrapper[4755]: I1210 15:24:45.111645 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:45 crc kubenswrapper[4755]: I1210 15:24:45.111719 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:45 crc kubenswrapper[4755]: I1210 15:24:45.111743 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:45 crc kubenswrapper[4755]: I1210 15:24:45.111785 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:45 crc kubenswrapper[4755]: I1210 15:24:45.111808 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:45Z","lastTransitionTime":"2025-12-10T15:24:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:45 crc kubenswrapper[4755]: I1210 15:24:45.219830 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:45 crc kubenswrapper[4755]: I1210 15:24:45.219880 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:45 crc kubenswrapper[4755]: I1210 15:24:45.219896 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:45 crc kubenswrapper[4755]: I1210 15:24:45.219921 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:45 crc kubenswrapper[4755]: I1210 15:24:45.219939 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:45Z","lastTransitionTime":"2025-12-10T15:24:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:45 crc kubenswrapper[4755]: I1210 15:24:45.254765 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6lfvk_4b1da51a-99c9-4f8e-920d-ce0973af6370/ovnkube-controller/3.log" Dec 10 15:24:45 crc kubenswrapper[4755]: I1210 15:24:45.255762 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6lfvk_4b1da51a-99c9-4f8e-920d-ce0973af6370/ovnkube-controller/2.log" Dec 10 15:24:45 crc kubenswrapper[4755]: I1210 15:24:45.261264 4755 generic.go:334] "Generic (PLEG): container finished" podID="4b1da51a-99c9-4f8e-920d-ce0973af6370" containerID="0386a60f9d2d9c0cec943720b300e0cd71348b81b74234f19f1c51d34142b089" exitCode=1 Dec 10 15:24:45 crc kubenswrapper[4755]: I1210 15:24:45.261333 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" event={"ID":"4b1da51a-99c9-4f8e-920d-ce0973af6370","Type":"ContainerDied","Data":"0386a60f9d2d9c0cec943720b300e0cd71348b81b74234f19f1c51d34142b089"} Dec 10 15:24:45 crc kubenswrapper[4755]: I1210 15:24:45.261388 4755 scope.go:117] "RemoveContainer" containerID="807b21c925067bde1df3babdc35da6c8805e10be110c0282376dbd6e8be69dcb" Dec 10 15:24:45 crc kubenswrapper[4755]: I1210 15:24:45.262042 4755 scope.go:117] "RemoveContainer" containerID="0386a60f9d2d9c0cec943720b300e0cd71348b81b74234f19f1c51d34142b089" Dec 10 15:24:45 crc kubenswrapper[4755]: E1210 15:24:45.262314 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-6lfvk_openshift-ovn-kubernetes(4b1da51a-99c9-4f8e-920d-ce0973af6370)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" podUID="4b1da51a-99c9-4f8e-920d-ce0973af6370" Dec 10 15:24:45 crc kubenswrapper[4755]: I1210 15:24:45.276954 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc57cd7-2dd9-45cb-bc66-4c0d7a0cf043\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7446bf39567cbd3f5a0a9a1252748e01144968fd0a67b39d8c32b326a13dec38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c0f838adfa6e512d9529c4999cc194d978276c8736abccd0ef75d5b7e9b6e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34c0f838adfa6e512d9529c4999cc194d978276c8736abccd0ef75d5b7e9b6e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:45Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:45 crc kubenswrapper[4755]: I1210 15:24:45.294348 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e6db97-7e22-4fec-924e-20e90f463887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef3871ddc16d2fc1b89a6f120390cf066424bd9ed4cca98f8dea4430c40c6c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08895c3ab20248c4bd70ccfae6442bdf8e76a602e605b043f0f00166624ada17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b2467645116856afa1e1e37501c7b453b03da193a96d1075886b85839a86ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16e58653dca08c9b33671279553975bca9507ffeeefea161119cec624813f167\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea18ef6c319336321d97c2a55449903b837b5a47da785da3ff5308244751943\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:45Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:45 crc kubenswrapper[4755]: I1210 15:24:45.308495 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:45Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:45 crc kubenswrapper[4755]: I1210 15:24:45.321998 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:45 crc kubenswrapper[4755]: I1210 15:24:45.322030 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:45 crc kubenswrapper[4755]: I1210 15:24:45.322040 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:45 crc kubenswrapper[4755]: I1210 15:24:45.322054 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:45 crc kubenswrapper[4755]: I1210 15:24:45.322064 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:45Z","lastTransitionTime":"2025-12-10T15:24:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:45 crc kubenswrapper[4755]: I1210 15:24:45.329409 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b1da51a-99c9-4f8e-920d-ce0973af6370\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6eb065dc6c0cc8914cb95553eb2683d894fb9a4e78ce7fac73bcce8d7f6cced9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e547993b9f2fa37bf924f909c47b62eb0cc02b659596b1cad9bbc42fdde8f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://e9ba47683cc23d5b531a45f0658b6a9378650400b35b5372642b0430a5ac503f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a75407e83508af9adebb09c6466a966dd791d29f690c539656f9bd3396d7031\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://602b4e49987fa2cc6b54b822110aececbdddaf2bce8f27cce4ed906768d45791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://335bcab3a79f09796e97560365e1211fb30ddf288f4773c05ab353197add4365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0386a60f9d2d9c0cec943720b300e0cd71348b81b74234f19f1c51d34142b089\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://807b21c925067bde1df3babdc35da6c8805e10be110c0282376dbd6e8be69dcb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T15:24:13Z\\\",\\\"message\\\":\\\"s, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI1210 15:24:12.549599 6361 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1210 15:24:12.549769 6361 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1210 15:24:12.549565 6361 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-zl2tx in node crc\\\\nI1210 15:24:12.549780 6361 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-zl2tx after 0 failed attempt(s)\\\\nI1210 15:24:12.549784 6361 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-zl2tx\\\\nI1210 15:24:12.549311 6361 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-apiserver/apiserver_TCP_cluster\\\\\\\", UUID:\\\\\\\"d71b38eb-32af-4c0f-9490-7c317c111e3a\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-apiserver/apiserver\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, 
Routers:[]st\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T15:24:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0386a60f9d2d9c0cec943720b300e0cd71348b81b74234f19f1c51d34142b089\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T15:24:44Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1210 15:24:44.102698 6774 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1210 15:24:44.102724 6774 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1210 15:24:44.102737 6774 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1210 15:24:44.102750 6774 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1210 15:24:44.102755 6774 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1210 15:24:44.102760 6774 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1210 15:24:44.102777 6774 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1210 15:24:44.102789 6774 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1210 15:24:44.102799 6774 handler.go:208] Removed *v1.Node event handler 2\\\\nI1210 15:24:44.102813 6774 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1210 15:24:44.102822 6774 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1210 15:24:44.102847 6774 factory.go:656] Stopping watch factory\\\\nI1210 15:24:44.102860 6774 ovnkube.go:599] Stopped ovnkube\\\\nI1210 15:24:44.102878 6774 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1210 15:24:44.102878 6774 handler.go:208] Removed *v1.Namespace 
even\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T15:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59bf59d1b7fbc365a916fbbceca7ae30b7ebc754b34f2f7a34c2e21e1e1d2166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://375efb27cf6bef06a6a0ffc8f6b2963e1b9ffbca18b3c8e7b402bc552a411f6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d
2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://375efb27cf6bef06a6a0ffc8f6b2963e1b9ffbca18b3c8e7b402bc552a411f6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6lfvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:45Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:45 crc kubenswrapper[4755]: I1210 15:24:45.340103 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b132a8b9-1c99-414d-8773-229bf36b305d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5a5a59e9f156fb791ec822c2d5efe3fc6ec0e84bfcb2b6f5da81396951984c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2mdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40c4bdaa23a60a665b8f565720d79b68cac62d40246be94fc6cd314b1bb3656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2mdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ggt8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:45Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:45 crc kubenswrapper[4755]: I1210 15:24:45.351578 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81a321f6-db4b-46c6-bf24-1ad62da5992b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e68926ec899dad3c070810697fb078955e202f270724638719a7ea21d2debcb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://728ff8c02e5b0bea5514375b60c7a66f025e3c2a7163f0753c65d914b7873531\\\",\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://865537b06aba1b617a1a16ef38c6b5be072501d1bbbd69e14eaaf2ce76c6b1da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54e0e014ecb31c499da20621a8b9e2ad6fdbdc2170a93636c828082b5b63b683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e0e014ecb31c499da20621a8b9e2ad6fdbdc2170a93636c828082b5b63b683\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:45Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:45 crc kubenswrapper[4755]: I1210 15:24:45.365459 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d089a8bddfa1f80d29011c7a6bab0a300f7dd44bdb2864f86951ebbb9ebea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:45Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:45 crc kubenswrapper[4755]: I1210 15:24:45.378949 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01ddc4056319db1f69268dcae192c3cb9db6c25284305803ae7588e59f77c346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae6bf174cdc9bde18d7c959e976454e73c1e67642f0158d365b79582f63f3cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:45Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:45 crc kubenswrapper[4755]: I1210 15:24:45.390204 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:45Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:45 crc kubenswrapper[4755]: I1210 15:24:45.404329 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zl2tx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"796da6d5-6ccd-4786-a03e-9a8e47a55031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e0f974f9ba614dcaef08cf7168b77eeee007dfe65cc4e32df9b8e45005ff4ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de63a123c46563bd8cd07e669d192bc8b019a889b9bdb7af1b988872c8f1fc48\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T15:24:30Z\\\",\\\"message\\\":\\\"2025-12-10T15:23:45+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5295941e-ea46-4d40-a35d-bd2fd673e118\\\\n2025-12-10T15:23:45+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5295941e-ea46-4d40-a35d-bd2fd673e118 to /host/opt/cni/bin/\\\\n2025-12-10T15:23:45Z [verbose] multus-daemon started\\\\n2025-12-10T15:23:45Z [verbose] Readiness Indicator file check\\\\n2025-12-10T15:24:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:24:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzg4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zl2tx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:45Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:45 crc kubenswrapper[4755]: I1210 15:24:45.416622 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aed6bb9-78c2-410b-9c58-b60ab22a7bd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70d014e7227746c46b30f8f5a1f307a422d2fa0d4b98d98bfe5ba6217223489e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://104d730ea666ffd2d639ba7fc8d1ac5b444586788a62656fd260cb249f82b1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09dbec85547c7170bb9551e5657876814d48528e3047daf3547711a563d2b6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8bf1de0d8bdc0a20bc42ba5097d849ae0f507e5fbc18fb17b4ff3650e46ff0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:45Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:45 crc kubenswrapper[4755]: I1210 15:24:45.424409 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:45 crc kubenswrapper[4755]: I1210 15:24:45.424439 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:45 crc kubenswrapper[4755]: I1210 15:24:45.424448 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:45 crc kubenswrapper[4755]: I1210 15:24:45.424477 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:45 crc kubenswrapper[4755]: I1210 15:24:45.424488 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:45Z","lastTransitionTime":"2025-12-10T15:24:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:45 crc kubenswrapper[4755]: I1210 15:24:45.430364 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n8c6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b56ae78-835a-45da-bc46-5adff2bdf9fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8957db8ccef7b3c449920471d345aa81ce9a7ab9be36b2350ec428aebec7ac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b9a1485f6d0c0e53fd3505623b2788476bc00abea2f11650386eac40e449bcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9a1485f6d0c0e53fd3505623b2788476bc00abea2f11650386eac40e449bcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a532015d196c588ad3dd6c43d9fa3772ba2a42bd1b2231f830b60adb0addab0\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a532015d196c588ad3dd6c43d9fa3772ba2a42bd1b2231f830b60adb0addab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://417e326f541e04df27d540c14ec572bb8f1d749a9e3dc88f483241d69f7d0679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://417e326f541e04df27d540c14ec572bb8f1d749a9e3dc88f483241d69f7d0679\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42867f7a1f6aceaf708ad650b7634bdafb3f850872c540d424916440014e7941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42867f7a1f6aceaf708ad650b7634bdafb3f850872c540d424916440014e7941\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f2d22bfc26958a3aad99de39f5c831d36c1ba4f563851baafc73735e80d5c91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f2d22bfc26958a3aad99de39f5c831d36c1ba4f563851baafc73735e80d5c91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://548d326fc5c14c16176d4fff3446d93c17322c11d2a34e9e42376b665de3848d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://548d326fc5c14c16176d4fff3446d93c17322c11d2a34e9e42376b665de3848d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n8c6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:45Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:45 crc kubenswrapper[4755]: I1210 15:24:45.439150 4755 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-qnmst" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1693fcf1-bef4-4f82-8dd8-f1797b03f5e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8fd6a2ab2b9557574951c0ebbccd663fe576262b0de2c3c655427c977f62d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hzdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qnmst\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:45Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:45 crc kubenswrapper[4755]: I1210 15:24:45.448217 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-82wnw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0a2f42c-a60e-4350-9ebb-c28d3cbfdad7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7af643111a7a5d1d78b0412b1621b5ffac6389760ea9190e26e9e5d1704eed4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xbzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://915c918562f8a69a28cbcf29e427bfdd94477193b98d5a2da9ad45cb4b44fa03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xbzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-82wnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:45Z is after 2025-08-24T17:21:41Z" Dec 10 
15:24:45 crc kubenswrapper[4755]: I1210 15:24:45.465039 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fa03060-a6e0-4aad-9aa1-43b1a0d00c85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a823333ed5cb5de3988d25e50e4b7a0f9071c76fb39c22760a4f2acd5eb455d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d828f3f3b90a2dcb1c1908e6a686368af5b0d715b3251e4b8fcf3c8818ec75a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://172ab3fce08c8ba4095ff4095c89364a778644752bd7bb6c178d6e3ebcface69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfbb8e350ad18b78a6bcf6cfa4eb8f2fad90f970dba08a8b1b2026af6f255e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4b2bf0c88a16fd6b4b45a20730a92292895dc9f29ce756d347c302b25a8612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55da309288bc2c7ff1f22da0569d18155796700c200fe3ed508f6f8179756865\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55da309288bc2c7ff1f22da0569d18155796700c200fe3ed508f6f8179756865\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219c531368bb5379f3726d94ab0afc0f33eedbf91b0a4dbbe0757c6e232f79ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://219c531368bb5379f3726d94ab0afc0f33eedbf91b0a4dbbe0757c6e232f79ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f9da8f3d4f85ec83c16b71ecd983ff4f48049eb8e73461a2b46c4f66d71ed06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9da8f3d4f85ec83c16b71ecd983ff4f48049eb8e73461a2b46c4f66d71ed06e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:45Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:45 crc kubenswrapper[4755]: I1210 15:24:45.476804 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:45Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:45 crc kubenswrapper[4755]: I1210 15:24:45.486968 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c371b199c3643a68ea5eba935eb76a1b7e8a4027c9292a1324116ccfc14742ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:45Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:45 crc kubenswrapper[4755]: I1210 15:24:45.496815 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wv8fh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d150a22e-c59a-4376-a5c8-db4085ea0be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58d106c5d1b9525ec821d009ca556449cd8d7f0e1b9c8ec7dd969df996e73625\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfw9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wv8fh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:45Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:45 crc kubenswrapper[4755]: I1210 15:24:45.508525 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q5ctz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17673130-8212-4f8f-8859-92774f0ee202\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcscn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcscn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q5ctz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:45Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:45 crc kubenswrapper[4755]: I1210 15:24:45.526932 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:45 crc kubenswrapper[4755]: I1210 15:24:45.526965 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:45 crc kubenswrapper[4755]: I1210 15:24:45.526973 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:45 crc kubenswrapper[4755]: I1210 15:24:45.526987 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:45 crc kubenswrapper[4755]: I1210 15:24:45.526996 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:45Z","lastTransitionTime":"2025-12-10T15:24:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:45 crc kubenswrapper[4755]: I1210 15:24:45.630255 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:45 crc kubenswrapper[4755]: I1210 15:24:45.630299 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:45 crc kubenswrapper[4755]: I1210 15:24:45.630307 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:45 crc kubenswrapper[4755]: I1210 15:24:45.630321 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:45 crc kubenswrapper[4755]: I1210 15:24:45.630329 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:45Z","lastTransitionTime":"2025-12-10T15:24:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:45 crc kubenswrapper[4755]: I1210 15:24:45.732676 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:45 crc kubenswrapper[4755]: I1210 15:24:45.732719 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:45 crc kubenswrapper[4755]: I1210 15:24:45.732732 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:45 crc kubenswrapper[4755]: I1210 15:24:45.732750 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:45 crc kubenswrapper[4755]: I1210 15:24:45.732762 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:45Z","lastTransitionTime":"2025-12-10T15:24:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:45 crc kubenswrapper[4755]: I1210 15:24:45.757097 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5ctz" Dec 10 15:24:45 crc kubenswrapper[4755]: I1210 15:24:45.757241 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 15:24:45 crc kubenswrapper[4755]: I1210 15:24:45.757412 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 15:24:45 crc kubenswrapper[4755]: E1210 15:24:45.757400 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-q5ctz" podUID="17673130-8212-4f8f-8859-92774f0ee202" Dec 10 15:24:45 crc kubenswrapper[4755]: I1210 15:24:45.757515 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 15:24:45 crc kubenswrapper[4755]: E1210 15:24:45.757557 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 15:24:45 crc kubenswrapper[4755]: E1210 15:24:45.757630 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 15:24:45 crc kubenswrapper[4755]: E1210 15:24:45.757802 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 15:24:45 crc kubenswrapper[4755]: I1210 15:24:45.834589 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:45 crc kubenswrapper[4755]: I1210 15:24:45.834640 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:45 crc kubenswrapper[4755]: I1210 15:24:45.834653 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:45 crc kubenswrapper[4755]: I1210 15:24:45.834670 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:45 crc kubenswrapper[4755]: I1210 15:24:45.834682 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:45Z","lastTransitionTime":"2025-12-10T15:24:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:45 crc kubenswrapper[4755]: I1210 15:24:45.937536 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:45 crc kubenswrapper[4755]: I1210 15:24:45.937582 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:45 crc kubenswrapper[4755]: I1210 15:24:45.937597 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:45 crc kubenswrapper[4755]: I1210 15:24:45.937616 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:45 crc kubenswrapper[4755]: I1210 15:24:45.937631 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:45Z","lastTransitionTime":"2025-12-10T15:24:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:46 crc kubenswrapper[4755]: I1210 15:24:46.039862 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:46 crc kubenswrapper[4755]: I1210 15:24:46.039908 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:46 crc kubenswrapper[4755]: I1210 15:24:46.039919 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:46 crc kubenswrapper[4755]: I1210 15:24:46.039938 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:46 crc kubenswrapper[4755]: I1210 15:24:46.039950 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:46Z","lastTransitionTime":"2025-12-10T15:24:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:46 crc kubenswrapper[4755]: I1210 15:24:46.142356 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:46 crc kubenswrapper[4755]: I1210 15:24:46.142388 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:46 crc kubenswrapper[4755]: I1210 15:24:46.142398 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:46 crc kubenswrapper[4755]: I1210 15:24:46.142414 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:46 crc kubenswrapper[4755]: I1210 15:24:46.142431 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:46Z","lastTransitionTime":"2025-12-10T15:24:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:46 crc kubenswrapper[4755]: I1210 15:24:46.246262 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:46 crc kubenswrapper[4755]: I1210 15:24:46.246305 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:46 crc kubenswrapper[4755]: I1210 15:24:46.246317 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:46 crc kubenswrapper[4755]: I1210 15:24:46.246335 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:46 crc kubenswrapper[4755]: I1210 15:24:46.246347 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:46Z","lastTransitionTime":"2025-12-10T15:24:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:46 crc kubenswrapper[4755]: I1210 15:24:46.267817 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6lfvk_4b1da51a-99c9-4f8e-920d-ce0973af6370/ovnkube-controller/3.log" Dec 10 15:24:46 crc kubenswrapper[4755]: I1210 15:24:46.275901 4755 scope.go:117] "RemoveContainer" containerID="0386a60f9d2d9c0cec943720b300e0cd71348b81b74234f19f1c51d34142b089" Dec 10 15:24:46 crc kubenswrapper[4755]: E1210 15:24:46.276770 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-6lfvk_openshift-ovn-kubernetes(4b1da51a-99c9-4f8e-920d-ce0973af6370)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" podUID="4b1da51a-99c9-4f8e-920d-ce0973af6370" Dec 10 15:24:46 crc kubenswrapper[4755]: I1210 15:24:46.290231 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aed6bb9-78c2-410b-9c58-b60ab22a7bd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70d014e7227746c46b30f8f5a1f307a422d2fa0d4b98d98bfe5ba6217223489e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://104d730ea666ffd2d639ba7fc8d1ac5b444586788a62656fd260cb249f82b1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09dbec85547c7170bb9551e5657876814d48528e3047daf3547711a563d2b6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8bf1de0d8bdc0a20bc42ba5097d849ae0f507e5fbc18fb17b4ff3650e46ff0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:46Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:46 crc kubenswrapper[4755]: I1210 15:24:46.306684 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n8c6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b56ae78-835a-45da-bc46-5adff2bdf9fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8957db8ccef7b3c449920471d345aa81ce9a7ab9be36b2350ec428aebec7ac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b9a1485f6d0c0e53fd3505623b2788476bc00abea2f11650386
eac40e449bcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9a1485f6d0c0e53fd3505623b2788476bc00abea2f11650386eac40e449bcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a532015d196c588ad3dd6c43d9fa3772ba2a42bd1b2231f830b60adb0addab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a532015d196c588ad3dd6c43d9fa3772ba2a42bd1b2231f830b60adb0addab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://417e326f541e04df27d540c14ec572bb8f1d749a9e3dc88f483241d69f7d0679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://417e326f541e04df27d540c14ec572bb8f1d749a9e3dc88f483241d69f7d0679\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42867f7a1f6aceaf708ad650b7634bdafb3f850872c540d424916440014e7941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42867f7a1f6aceaf708ad650b7634bdafb3f850872c540d424916440014e7941\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f2d22bfc26958a3aad99de39f5c831d36c1ba4f563851baafc73735e80d5c91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f2d22bfc26958a3aad99de39f5c831d36c1ba4f563851baafc73735e80d5c91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://548d326fc5c14c16176d4fff3446d93c17322c11d2a34e9e42376b665de3848d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://548d326fc5c14c16176d4fff3446d93c17322c11d2a34e9e42376b665de3848d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n8c6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:46Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:46 crc kubenswrapper[4755]: I1210 15:24:46.319133 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qnmst" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1693fcf1-bef4-4f82-8dd8-f1797b03f5e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8fd6a2ab2b9557574951c0ebbccd663fe576262b0de2c3c655427c977f62d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hzdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:47Z\\\"}}\" for pod 
\"openshift-image-registry\"/\"node-ca-qnmst\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:46Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:46 crc kubenswrapper[4755]: I1210 15:24:46.330084 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-82wnw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0a2f42c-a60e-4350-9ebb-c28d3cbfdad7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7af643111a7a5d1d78b0412b1621b5ffac6389760ea9190e26e9e5d1704eed4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xbzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://915c918562f8a69a28cbcf29e427bfdd94477193b98d5a2da9ad45cb4b44fa03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xbzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126
.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-82wnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:46Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:46 crc kubenswrapper[4755]: I1210 15:24:46.348907 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:46 crc kubenswrapper[4755]: I1210 15:24:46.348962 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:46 crc kubenswrapper[4755]: I1210 15:24:46.348973 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:46 crc kubenswrapper[4755]: I1210 15:24:46.348989 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:46 crc kubenswrapper[4755]: I1210 15:24:46.349000 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:46Z","lastTransitionTime":"2025-12-10T15:24:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:46 crc kubenswrapper[4755]: I1210 15:24:46.353010 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fa03060-a6e0-4aad-9aa1-43b1a0d00c85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a823333ed5cb5de3988d25e50e4b7a0f9071c76fb39c22760a4f2acd5eb455d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d828f3f3b90a2dcb1c1908e6a686368af5b0d715b3251e4b8fcf3c8818ec75a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://172ab3fce08c8ba4095ff4095c89364a778644752bd7bb6c178d6e3ebcface69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfbb8e350ad18b78a6bcf6cfa4eb8f2fad90f97
0dba08a8b1b2026af6f255e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4b2bf0c88a16fd6b4b45a20730a92292895dc9f29ce756d347c302b25a8612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55da309288bc2c7ff1f22da0569d18155796700c200fe3ed508f6f8179756865\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55da309288bc2c7ff1f22da0569d18155796700c200fe3ed508f6f8179756865\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://219c531368bb5379f3726d94ab0afc0f33eedbf91b0a4dbbe0757c6e232f79ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://219c531368bb5379f3726d94ab0afc0f33eedbf91b0a4dbbe0757c6e232f79ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f9da8f3d4f85ec83c16b71ecd983ff4f48049eb8e73461a2b46c4f66d71ed06e\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9da8f3d4f85ec83c16b71ecd983ff4f48049eb8e73461a2b46c4f66d71ed06e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:46Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:46 crc kubenswrapper[4755]: I1210 15:24:46.365979 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:46Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:46 crc kubenswrapper[4755]: I1210 15:24:46.376503 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c371b199c3643a68ea5eba935eb76a1b7e8a4027c9292a1324116ccfc14742ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:46Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:46 crc kubenswrapper[4755]: I1210 15:24:46.388186 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wv8fh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d150a22e-c59a-4376-a5c8-db4085ea0be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58d106c5d1b9525ec821d009ca556449cd8d7f0e1b9c8ec7dd969df996e73625\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfw9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wv8fh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:46Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:46 crc kubenswrapper[4755]: I1210 15:24:46.398968 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q5ctz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17673130-8212-4f8f-8859-92774f0ee202\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcscn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcscn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q5ctz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:46Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:46 crc kubenswrapper[4755]: I1210 15:24:46.411632 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc57cd7-2dd9-45cb-bc66-4c0d7a0cf043\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7446bf39567cbd3f5a0a9a1252748e01144968fd0a67b39d8c32b326a13dec38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c0f838adfa6e512d9529c4999cc194d978276c8736abccd0ef75d5b7e9b6e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34c0f838adfa6e512d9529c4999cc194d978276c8736abccd0ef75d5b7e9b6e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:46Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:46 crc kubenswrapper[4755]: I1210 15:24:46.426834 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e6db97-7e22-4fec-924e-20e90f463887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef3871ddc16d2fc1b89a6f120390cf066424bd9ed4cca98f8dea4430c40c6c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08895c3ab20248c4bd70ccfae6442bdf8e76a602e605b043f0f00166624ada17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b2467645116856afa1e1e37501c7b453b03da193a96d1075886b85839a86ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16e58653dca08c9b33671279553975bca9507ffeeefea161119cec624813f167\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea18ef6c319336321d97c2a55449903b837b5a47da785da3ff5308244751943\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:46Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:46 crc kubenswrapper[4755]: I1210 15:24:46.441489 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:46Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:46 crc kubenswrapper[4755]: I1210 15:24:46.451410 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:46 crc kubenswrapper[4755]: I1210 15:24:46.451553 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:46 crc kubenswrapper[4755]: I1210 15:24:46.451573 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:46 crc kubenswrapper[4755]: I1210 15:24:46.451638 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:46 crc kubenswrapper[4755]: I1210 15:24:46.451655 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:46Z","lastTransitionTime":"2025-12-10T15:24:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:46 crc kubenswrapper[4755]: I1210 15:24:46.468271 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b1da51a-99c9-4f8e-920d-ce0973af6370\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6eb065dc6c0cc8914cb95553eb2683d894fb9a4e78ce7fac73bcce8d7f6cced9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e547993b9f2fa37bf924f909c47b62eb0cc02b659596b1cad9bbc42fdde8f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://e9ba47683cc23d5b531a45f0658b6a9378650400b35b5372642b0430a5ac503f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a75407e83508af9adebb09c6466a966dd791d29f690c539656f9bd3396d7031\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://602b4e49987fa2cc6b54b822110aececbdddaf2bce8f27cce4ed906768d45791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://335bcab3a79f09796e97560365e1211fb30ddf288f4773c05ab353197add4365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0386a60f9d2d9c0cec943720b300e0cd71348b81b74234f19f1c51d34142b089\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0386a60f9d2d9c0cec943720b300e0cd71348b81b74234f19f1c51d34142b089\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T15:24:44Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1210 15:24:44.102698 6774 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1210 15:24:44.102724 6774 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1210 15:24:44.102737 6774 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1210 15:24:44.102750 6774 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1210 15:24:44.102755 6774 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1210 15:24:44.102760 6774 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1210 15:24:44.102777 6774 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1210 15:24:44.102789 6774 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1210 15:24:44.102799 6774 handler.go:208] Removed *v1.Node event handler 2\\\\nI1210 15:24:44.102813 6774 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1210 15:24:44.102822 6774 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1210 15:24:44.102847 6774 factory.go:656] Stopping watch factory\\\\nI1210 15:24:44.102860 6774 ovnkube.go:599] Stopped ovnkube\\\\nI1210 15:24:44.102878 6774 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1210 15:24:44.102878 6774 handler.go:208] Removed *v1.Namespace even\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T15:24:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6lfvk_openshift-ovn-kubernetes(4b1da51a-99c9-4f8e-920d-ce0973af6370)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59bf59d1b7fbc365a916fbbceca7ae30b7ebc754b34f2f7a34c2e21e1e1d2166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://375efb27cf6bef06a6a0ffc8f6b2963e1b9ffbca18b3c8e7b402bc552a411f6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://375efb27cf6bef06a6a0ffc8f6b2963e1b9ffbca18b3c8e7b402bc552a411f6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6lfvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:46Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:46 crc kubenswrapper[4755]: I1210 15:24:46.482633 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81a321f6-db4b-46c6-bf24-1ad62da5992b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e68926ec899dad3c070810697fb078955e202f270724638719a7ea21d2debcb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://728ff8c02e5b0bea5514375b60c7a66f025e3c2a7163f0753c65d914b7873531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c
97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://865537b06aba1b617a1a16ef38c6b5be072501d1bbbd69e14eaaf2ce76c6b1da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54e0e014ecb31c499da20621a8b9e2ad6fdbdc2170a93636c828082b5b63b683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e0e014ecb31c499da20621a8b9e2ad6fdbdc2170a93636c828082b5b63b683\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:46Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:46 crc kubenswrapper[4755]: I1210 15:24:46.496931 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d089a8bddfa1f80d29011c7a6bab0a300f7dd44bdb2864f86951ebbb9ebea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:46Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:46 crc kubenswrapper[4755]: I1210 15:24:46.509130 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01ddc4056319db1f69268dcae192c3cb9db6c25284305803ae7588e59f77c346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae6bf174cdc9bde18d7c959e976454e73c1e67642f0158d365b79582f63f3cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:46Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:46 crc kubenswrapper[4755]: I1210 15:24:46.521452 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:46Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:46 crc kubenswrapper[4755]: I1210 15:24:46.534443 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zl2tx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"796da6d5-6ccd-4786-a03e-9a8e47a55031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e0f974f9ba614dcaef08cf7168b77eeee007dfe65cc4e32df9b8e45005ff4ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de63a123c46563bd8cd07e669d192bc8b019a889b9bdb7af1b988872c8f1fc48\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T15:24:30Z\\\",\\\"message\\\":\\\"2025-12-10T15:23:45+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5295941e-ea46-4d40-a35d-bd2fd673e118\\\\n2025-12-10T15:23:45+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5295941e-ea46-4d40-a35d-bd2fd673e118 to /host/opt/cni/bin/\\\\n2025-12-10T15:23:45Z [verbose] multus-daemon started\\\\n2025-12-10T15:23:45Z [verbose] Readiness Indicator file check\\\\n2025-12-10T15:24:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:24:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzg4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zl2tx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:46Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:46 crc kubenswrapper[4755]: I1210 15:24:46.548772 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b132a8b9-1c99-414d-8773-229bf36b305d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5a5a59e9f156fb791ec822c2d5efe3fc6ec0e84bfcb2b6f5da81396951984c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2mdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40c4bdaa23a60a665b8f565720d79b68cac62d40246be94fc6cd314b1bb3656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2mdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ggt8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:46Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:46 crc kubenswrapper[4755]: I1210 15:24:46.554763 4755 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:46 crc kubenswrapper[4755]: I1210 15:24:46.554803 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:46 crc kubenswrapper[4755]: I1210 15:24:46.554811 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:46 crc kubenswrapper[4755]: I1210 15:24:46.554826 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:46 crc kubenswrapper[4755]: I1210 15:24:46.554836 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:46Z","lastTransitionTime":"2025-12-10T15:24:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:46 crc kubenswrapper[4755]: I1210 15:24:46.657551 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:46 crc kubenswrapper[4755]: I1210 15:24:46.657608 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:46 crc kubenswrapper[4755]: I1210 15:24:46.657620 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:46 crc kubenswrapper[4755]: I1210 15:24:46.657638 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:46 crc kubenswrapper[4755]: I1210 15:24:46.657650 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:46Z","lastTransitionTime":"2025-12-10T15:24:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:46 crc kubenswrapper[4755]: I1210 15:24:46.759643 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:46 crc kubenswrapper[4755]: I1210 15:24:46.759679 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:46 crc kubenswrapper[4755]: I1210 15:24:46.759691 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:46 crc kubenswrapper[4755]: I1210 15:24:46.759706 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:46 crc kubenswrapper[4755]: I1210 15:24:46.759717 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:46Z","lastTransitionTime":"2025-12-10T15:24:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:46 crc kubenswrapper[4755]: I1210 15:24:46.862048 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:46 crc kubenswrapper[4755]: I1210 15:24:46.862091 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:46 crc kubenswrapper[4755]: I1210 15:24:46.862100 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:46 crc kubenswrapper[4755]: I1210 15:24:46.862118 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:46 crc kubenswrapper[4755]: I1210 15:24:46.862127 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:46Z","lastTransitionTime":"2025-12-10T15:24:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:46 crc kubenswrapper[4755]: I1210 15:24:46.964567 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:46 crc kubenswrapper[4755]: I1210 15:24:46.964612 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:46 crc kubenswrapper[4755]: I1210 15:24:46.964644 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:46 crc kubenswrapper[4755]: I1210 15:24:46.964677 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:46 crc kubenswrapper[4755]: I1210 15:24:46.964689 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:46Z","lastTransitionTime":"2025-12-10T15:24:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:47 crc kubenswrapper[4755]: I1210 15:24:47.067048 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:47 crc kubenswrapper[4755]: I1210 15:24:47.067097 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:47 crc kubenswrapper[4755]: I1210 15:24:47.067108 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:47 crc kubenswrapper[4755]: I1210 15:24:47.067126 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:47 crc kubenswrapper[4755]: I1210 15:24:47.067140 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:47Z","lastTransitionTime":"2025-12-10T15:24:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:47 crc kubenswrapper[4755]: I1210 15:24:47.169381 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:47 crc kubenswrapper[4755]: I1210 15:24:47.169429 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:47 crc kubenswrapper[4755]: I1210 15:24:47.169441 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:47 crc kubenswrapper[4755]: I1210 15:24:47.169456 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:47 crc kubenswrapper[4755]: I1210 15:24:47.169499 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:47Z","lastTransitionTime":"2025-12-10T15:24:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:47 crc kubenswrapper[4755]: I1210 15:24:47.281732 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:47 crc kubenswrapper[4755]: I1210 15:24:47.281762 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:47 crc kubenswrapper[4755]: I1210 15:24:47.281771 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:47 crc kubenswrapper[4755]: I1210 15:24:47.281784 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:47 crc kubenswrapper[4755]: I1210 15:24:47.281793 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:47Z","lastTransitionTime":"2025-12-10T15:24:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:47 crc kubenswrapper[4755]: I1210 15:24:47.384119 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:47 crc kubenswrapper[4755]: I1210 15:24:47.384156 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:47 crc kubenswrapper[4755]: I1210 15:24:47.384167 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:47 crc kubenswrapper[4755]: I1210 15:24:47.384185 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:47 crc kubenswrapper[4755]: I1210 15:24:47.384195 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:47Z","lastTransitionTime":"2025-12-10T15:24:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:47 crc kubenswrapper[4755]: I1210 15:24:47.486694 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:47 crc kubenswrapper[4755]: I1210 15:24:47.486766 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:47 crc kubenswrapper[4755]: I1210 15:24:47.486792 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:47 crc kubenswrapper[4755]: I1210 15:24:47.486823 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:47 crc kubenswrapper[4755]: I1210 15:24:47.486846 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:47Z","lastTransitionTime":"2025-12-10T15:24:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:47 crc kubenswrapper[4755]: I1210 15:24:47.590685 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:47 crc kubenswrapper[4755]: I1210 15:24:47.590774 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:47 crc kubenswrapper[4755]: I1210 15:24:47.590797 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:47 crc kubenswrapper[4755]: I1210 15:24:47.590825 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:47 crc kubenswrapper[4755]: I1210 15:24:47.590844 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:47Z","lastTransitionTime":"2025-12-10T15:24:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:47 crc kubenswrapper[4755]: I1210 15:24:47.692997 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:47 crc kubenswrapper[4755]: I1210 15:24:47.693089 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:47 crc kubenswrapper[4755]: I1210 15:24:47.693100 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:47 crc kubenswrapper[4755]: I1210 15:24:47.693125 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:47 crc kubenswrapper[4755]: I1210 15:24:47.693140 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:47Z","lastTransitionTime":"2025-12-10T15:24:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:47 crc kubenswrapper[4755]: I1210 15:24:47.710737 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 15:24:47 crc kubenswrapper[4755]: I1210 15:24:47.710895 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 15:24:47 crc kubenswrapper[4755]: E1210 15:24:47.710943 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 15:25:51.71090194 +0000 UTC m=+148.311785572 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:24:47 crc kubenswrapper[4755]: I1210 15:24:47.711020 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 15:24:47 crc kubenswrapper[4755]: E1210 15:24:47.711037 4755 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 10 15:24:47 crc kubenswrapper[4755]: E1210 15:24:47.711059 4755 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 10 15:24:47 crc kubenswrapper[4755]: E1210 15:24:47.711076 4755 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 15:24:47 crc kubenswrapper[4755]: I1210 15:24:47.711109 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 15:24:47 crc kubenswrapper[4755]: E1210 15:24:47.711132 4755 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-10 15:25:51.711116095 +0000 UTC m=+148.311999727 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 15:24:47 crc kubenswrapper[4755]: I1210 15:24:47.711187 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 15:24:47 crc kubenswrapper[4755]: E1210 15:24:47.711247 4755 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 10 15:24:47 crc kubenswrapper[4755]: E1210 15:24:47.711346 4755 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 10 15:24:47 crc kubenswrapper[4755]: E1210 15:24:47.711351 4755 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 10 15:24:47 crc kubenswrapper[4755]: E1210 15:24:47.711373 4755 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 10 15:24:47 crc kubenswrapper[4755]: E1210 15:24:47.711386 4755 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 15:24:47 crc kubenswrapper[4755]: E1210 15:24:47.711357 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-10 15:25:51.711334391 +0000 UTC m=+148.312218243 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 10 15:24:47 crc kubenswrapper[4755]: E1210 15:24:47.711454 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-12-10 15:25:51.711444474 +0000 UTC m=+148.312328106 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 10 15:24:47 crc kubenswrapper[4755]: E1210 15:24:47.711484 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-10 15:25:51.711461694 +0000 UTC m=+148.312345326 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 15:24:47 crc kubenswrapper[4755]: I1210 15:24:47.743864 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:47 crc kubenswrapper[4755]: I1210 15:24:47.743913 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:47 crc kubenswrapper[4755]: I1210 15:24:47.743921 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:47 crc kubenswrapper[4755]: I1210 15:24:47.743946 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:47 crc kubenswrapper[4755]: I1210 15:24:47.743957 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:47Z","lastTransitionTime":"2025-12-10T15:24:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:47 crc kubenswrapper[4755]: I1210 15:24:47.756732 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 15:24:47 crc kubenswrapper[4755]: I1210 15:24:47.756821 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 15:24:47 crc kubenswrapper[4755]: I1210 15:24:47.756844 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 15:24:47 crc kubenswrapper[4755]: E1210 15:24:47.756962 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 15:24:47 crc kubenswrapper[4755]: E1210 15:24:47.757111 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 15:24:47 crc kubenswrapper[4755]: E1210 15:24:47.757181 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 15:24:47 crc kubenswrapper[4755]: I1210 15:24:47.757259 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5ctz" Dec 10 15:24:47 crc kubenswrapper[4755]: E1210 15:24:47.757380 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5ctz" podUID="17673130-8212-4f8f-8859-92774f0ee202" Dec 10 15:24:47 crc kubenswrapper[4755]: E1210 15:24:47.759119 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ba232303-88d5-4931-b82e-34d9a0e5c06a\\\",\\\"systemUUID\\\":\\\"ebd59de0-c6b0-47c1-bc17-6f665dcf344d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:47Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:47 crc kubenswrapper[4755]: I1210 15:24:47.763227 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:47 crc kubenswrapper[4755]: I1210 15:24:47.763276 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 10 15:24:47 crc kubenswrapper[4755]: I1210 15:24:47.763286 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:47 crc kubenswrapper[4755]: I1210 15:24:47.763300 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:47 crc kubenswrapper[4755]: I1210 15:24:47.763310 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:47Z","lastTransitionTime":"2025-12-10T15:24:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:47 crc kubenswrapper[4755]: E1210 15:24:47.775990 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ba232303-88d5-4931-b82e-34d9a0e5c06a\\\",\\\"systemUUID\\\":\\\"ebd59de0-c6b0-47c1-bc17-6f665dcf344d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:47Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:47 crc kubenswrapper[4755]: I1210 15:24:47.779324 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:47 crc kubenswrapper[4755]: I1210 15:24:47.779373 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 10 15:24:47 crc kubenswrapper[4755]: I1210 15:24:47.779382 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:47 crc kubenswrapper[4755]: I1210 15:24:47.779398 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:47 crc kubenswrapper[4755]: I1210 15:24:47.779412 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:47Z","lastTransitionTime":"2025-12-10T15:24:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:47 crc kubenswrapper[4755]: E1210 15:24:47.791500 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ba232303-88d5-4931-b82e-34d9a0e5c06a\\\",\\\"systemUUID\\\":\\\"ebd59de0-c6b0-47c1-bc17-6f665dcf344d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:47Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:47 crc kubenswrapper[4755]: I1210 15:24:47.796352 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:47 crc kubenswrapper[4755]: I1210 15:24:47.796415 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 10 15:24:47 crc kubenswrapper[4755]: I1210 15:24:47.796427 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:47 crc kubenswrapper[4755]: I1210 15:24:47.796446 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:47 crc kubenswrapper[4755]: I1210 15:24:47.796460 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:47Z","lastTransitionTime":"2025-12-10T15:24:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:47 crc kubenswrapper[4755]: E1210 15:24:47.809675 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ba232303-88d5-4931-b82e-34d9a0e5c06a\\\",\\\"systemUUID\\\":\\\"ebd59de0-c6b0-47c1-bc17-6f665dcf344d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:47Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:47 crc kubenswrapper[4755]: I1210 15:24:47.813480 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:47 crc kubenswrapper[4755]: I1210 15:24:47.813530 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 10 15:24:47 crc kubenswrapper[4755]: I1210 15:24:47.813545 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:47 crc kubenswrapper[4755]: I1210 15:24:47.813567 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:47 crc kubenswrapper[4755]: I1210 15:24:47.813583 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:47Z","lastTransitionTime":"2025-12-10T15:24:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:47 crc kubenswrapper[4755]: E1210 15:24:47.826146 4755 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T15:24:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ba232303-88d5-4931-b82e-34d9a0e5c06a\\\",\\\"systemUUID\\\":\\\"ebd59de0-c6b0-47c1-bc17-6f665dcf344d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:47Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:47 crc kubenswrapper[4755]: E1210 15:24:47.826264 4755 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 10 15:24:47 crc kubenswrapper[4755]: I1210 15:24:47.927676 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 10 15:24:47 crc kubenswrapper[4755]: I1210 15:24:47.927741 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:47 crc kubenswrapper[4755]: I1210 15:24:47.927755 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:47 crc kubenswrapper[4755]: I1210 15:24:47.927774 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:47 crc kubenswrapper[4755]: I1210 15:24:47.927789 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:47Z","lastTransitionTime":"2025-12-10T15:24:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:48 crc kubenswrapper[4755]: I1210 15:24:48.030397 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:48 crc kubenswrapper[4755]: I1210 15:24:48.030485 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:48 crc kubenswrapper[4755]: I1210 15:24:48.030502 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:48 crc kubenswrapper[4755]: I1210 15:24:48.030549 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:48 crc kubenswrapper[4755]: I1210 15:24:48.030565 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:48Z","lastTransitionTime":"2025-12-10T15:24:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:48 crc kubenswrapper[4755]: I1210 15:24:48.132807 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:48 crc kubenswrapper[4755]: I1210 15:24:48.132852 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:48 crc kubenswrapper[4755]: I1210 15:24:48.132864 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:48 crc kubenswrapper[4755]: I1210 15:24:48.132878 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:48 crc kubenswrapper[4755]: I1210 15:24:48.132888 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:48Z","lastTransitionTime":"2025-12-10T15:24:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:48 crc kubenswrapper[4755]: I1210 15:24:48.235028 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:48 crc kubenswrapper[4755]: I1210 15:24:48.235074 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:48 crc kubenswrapper[4755]: I1210 15:24:48.235087 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:48 crc kubenswrapper[4755]: I1210 15:24:48.235104 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:48 crc kubenswrapper[4755]: I1210 15:24:48.235116 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:48Z","lastTransitionTime":"2025-12-10T15:24:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:48 crc kubenswrapper[4755]: I1210 15:24:48.337834 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:48 crc kubenswrapper[4755]: I1210 15:24:48.337886 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:48 crc kubenswrapper[4755]: I1210 15:24:48.337905 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:48 crc kubenswrapper[4755]: I1210 15:24:48.337926 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:48 crc kubenswrapper[4755]: I1210 15:24:48.337941 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:48Z","lastTransitionTime":"2025-12-10T15:24:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:48 crc kubenswrapper[4755]: I1210 15:24:48.440299 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:48 crc kubenswrapper[4755]: I1210 15:24:48.440348 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:48 crc kubenswrapper[4755]: I1210 15:24:48.440358 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:48 crc kubenswrapper[4755]: I1210 15:24:48.440375 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:48 crc kubenswrapper[4755]: I1210 15:24:48.440386 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:48Z","lastTransitionTime":"2025-12-10T15:24:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:48 crc kubenswrapper[4755]: I1210 15:24:48.544056 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:48 crc kubenswrapper[4755]: I1210 15:24:48.544127 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:48 crc kubenswrapper[4755]: I1210 15:24:48.544150 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:48 crc kubenswrapper[4755]: I1210 15:24:48.544176 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:48 crc kubenswrapper[4755]: I1210 15:24:48.544194 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:48Z","lastTransitionTime":"2025-12-10T15:24:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:48 crc kubenswrapper[4755]: I1210 15:24:48.647276 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:48 crc kubenswrapper[4755]: I1210 15:24:48.647324 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:48 crc kubenswrapper[4755]: I1210 15:24:48.647339 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:48 crc kubenswrapper[4755]: I1210 15:24:48.647359 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:48 crc kubenswrapper[4755]: I1210 15:24:48.647375 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:48Z","lastTransitionTime":"2025-12-10T15:24:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:48 crc kubenswrapper[4755]: I1210 15:24:48.750307 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:48 crc kubenswrapper[4755]: I1210 15:24:48.750340 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:48 crc kubenswrapper[4755]: I1210 15:24:48.750351 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:48 crc kubenswrapper[4755]: I1210 15:24:48.750365 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:48 crc kubenswrapper[4755]: I1210 15:24:48.750376 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:48Z","lastTransitionTime":"2025-12-10T15:24:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:48 crc kubenswrapper[4755]: I1210 15:24:48.852838 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:48 crc kubenswrapper[4755]: I1210 15:24:48.852875 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:48 crc kubenswrapper[4755]: I1210 15:24:48.852888 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:48 crc kubenswrapper[4755]: I1210 15:24:48.852904 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:48 crc kubenswrapper[4755]: I1210 15:24:48.852917 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:48Z","lastTransitionTime":"2025-12-10T15:24:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:48 crc kubenswrapper[4755]: I1210 15:24:48.955505 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:48 crc kubenswrapper[4755]: I1210 15:24:48.955565 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:48 crc kubenswrapper[4755]: I1210 15:24:48.955581 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:48 crc kubenswrapper[4755]: I1210 15:24:48.955606 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:48 crc kubenswrapper[4755]: I1210 15:24:48.955621 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:48Z","lastTransitionTime":"2025-12-10T15:24:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:49 crc kubenswrapper[4755]: I1210 15:24:49.058679 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:49 crc kubenswrapper[4755]: I1210 15:24:49.058727 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:49 crc kubenswrapper[4755]: I1210 15:24:49.058737 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:49 crc kubenswrapper[4755]: I1210 15:24:49.058797 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:49 crc kubenswrapper[4755]: I1210 15:24:49.058813 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:49Z","lastTransitionTime":"2025-12-10T15:24:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:49 crc kubenswrapper[4755]: I1210 15:24:49.161034 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:49 crc kubenswrapper[4755]: I1210 15:24:49.161088 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:49 crc kubenswrapper[4755]: I1210 15:24:49.161098 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:49 crc kubenswrapper[4755]: I1210 15:24:49.161116 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:49 crc kubenswrapper[4755]: I1210 15:24:49.161127 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:49Z","lastTransitionTime":"2025-12-10T15:24:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:49 crc kubenswrapper[4755]: I1210 15:24:49.264605 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:49 crc kubenswrapper[4755]: I1210 15:24:49.264680 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:49 crc kubenswrapper[4755]: I1210 15:24:49.264709 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:49 crc kubenswrapper[4755]: I1210 15:24:49.264737 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:49 crc kubenswrapper[4755]: I1210 15:24:49.264753 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:49Z","lastTransitionTime":"2025-12-10T15:24:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:49 crc kubenswrapper[4755]: I1210 15:24:49.367569 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:49 crc kubenswrapper[4755]: I1210 15:24:49.367619 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:49 crc kubenswrapper[4755]: I1210 15:24:49.367633 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:49 crc kubenswrapper[4755]: I1210 15:24:49.367654 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:49 crc kubenswrapper[4755]: I1210 15:24:49.367722 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:49Z","lastTransitionTime":"2025-12-10T15:24:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:49 crc kubenswrapper[4755]: I1210 15:24:49.470956 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:49 crc kubenswrapper[4755]: I1210 15:24:49.471023 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:49 crc kubenswrapper[4755]: I1210 15:24:49.471035 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:49 crc kubenswrapper[4755]: I1210 15:24:49.471063 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:49 crc kubenswrapper[4755]: I1210 15:24:49.471075 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:49Z","lastTransitionTime":"2025-12-10T15:24:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:49 crc kubenswrapper[4755]: I1210 15:24:49.573781 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:49 crc kubenswrapper[4755]: I1210 15:24:49.574115 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:49 crc kubenswrapper[4755]: I1210 15:24:49.574126 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:49 crc kubenswrapper[4755]: I1210 15:24:49.574144 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:49 crc kubenswrapper[4755]: I1210 15:24:49.574156 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:49Z","lastTransitionTime":"2025-12-10T15:24:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:49 crc kubenswrapper[4755]: I1210 15:24:49.676381 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:49 crc kubenswrapper[4755]: I1210 15:24:49.676436 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:49 crc kubenswrapper[4755]: I1210 15:24:49.676452 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:49 crc kubenswrapper[4755]: I1210 15:24:49.676501 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:49 crc kubenswrapper[4755]: I1210 15:24:49.676516 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:49Z","lastTransitionTime":"2025-12-10T15:24:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:49 crc kubenswrapper[4755]: I1210 15:24:49.757105 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 15:24:49 crc kubenswrapper[4755]: I1210 15:24:49.757140 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 15:24:49 crc kubenswrapper[4755]: I1210 15:24:49.757106 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 15:24:49 crc kubenswrapper[4755]: E1210 15:24:49.757237 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 15:24:49 crc kubenswrapper[4755]: I1210 15:24:49.757250 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5ctz" Dec 10 15:24:49 crc kubenswrapper[4755]: E1210 15:24:49.757310 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 15:24:49 crc kubenswrapper[4755]: E1210 15:24:49.757412 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5ctz" podUID="17673130-8212-4f8f-8859-92774f0ee202" Dec 10 15:24:49 crc kubenswrapper[4755]: E1210 15:24:49.757554 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 15:24:49 crc kubenswrapper[4755]: I1210 15:24:49.779316 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:49 crc kubenswrapper[4755]: I1210 15:24:49.779361 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:49 crc kubenswrapper[4755]: I1210 15:24:49.779373 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:49 crc kubenswrapper[4755]: I1210 15:24:49.779389 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:49 crc kubenswrapper[4755]: I1210 15:24:49.779402 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:49Z","lastTransitionTime":"2025-12-10T15:24:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:49 crc kubenswrapper[4755]: I1210 15:24:49.882264 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:49 crc kubenswrapper[4755]: I1210 15:24:49.882325 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:49 crc kubenswrapper[4755]: I1210 15:24:49.882337 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:49 crc kubenswrapper[4755]: I1210 15:24:49.882355 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:49 crc kubenswrapper[4755]: I1210 15:24:49.882366 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:49Z","lastTransitionTime":"2025-12-10T15:24:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:49 crc kubenswrapper[4755]: I1210 15:24:49.984654 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:49 crc kubenswrapper[4755]: I1210 15:24:49.984712 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:49 crc kubenswrapper[4755]: I1210 15:24:49.984728 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:49 crc kubenswrapper[4755]: I1210 15:24:49.984743 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:49 crc kubenswrapper[4755]: I1210 15:24:49.984752 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:49Z","lastTransitionTime":"2025-12-10T15:24:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:50 crc kubenswrapper[4755]: I1210 15:24:50.087596 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:50 crc kubenswrapper[4755]: I1210 15:24:50.087649 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:50 crc kubenswrapper[4755]: I1210 15:24:50.087660 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:50 crc kubenswrapper[4755]: I1210 15:24:50.087678 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:50 crc kubenswrapper[4755]: I1210 15:24:50.087691 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:50Z","lastTransitionTime":"2025-12-10T15:24:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:50 crc kubenswrapper[4755]: I1210 15:24:50.191022 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:50 crc kubenswrapper[4755]: I1210 15:24:50.191113 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:50 crc kubenswrapper[4755]: I1210 15:24:50.191144 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:50 crc kubenswrapper[4755]: I1210 15:24:50.191173 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:50 crc kubenswrapper[4755]: I1210 15:24:50.191193 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:50Z","lastTransitionTime":"2025-12-10T15:24:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:50 crc kubenswrapper[4755]: I1210 15:24:50.293221 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:50 crc kubenswrapper[4755]: I1210 15:24:50.293284 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:50 crc kubenswrapper[4755]: I1210 15:24:50.293301 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:50 crc kubenswrapper[4755]: I1210 15:24:50.293326 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:50 crc kubenswrapper[4755]: I1210 15:24:50.293344 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:50Z","lastTransitionTime":"2025-12-10T15:24:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:50 crc kubenswrapper[4755]: I1210 15:24:50.397382 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:50 crc kubenswrapper[4755]: I1210 15:24:50.397457 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:50 crc kubenswrapper[4755]: I1210 15:24:50.397531 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:50 crc kubenswrapper[4755]: I1210 15:24:50.397565 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:50 crc kubenswrapper[4755]: I1210 15:24:50.397586 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:50Z","lastTransitionTime":"2025-12-10T15:24:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:50 crc kubenswrapper[4755]: I1210 15:24:50.499992 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:50 crc kubenswrapper[4755]: I1210 15:24:50.500035 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:50 crc kubenswrapper[4755]: I1210 15:24:50.500044 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:50 crc kubenswrapper[4755]: I1210 15:24:50.500058 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:50 crc kubenswrapper[4755]: I1210 15:24:50.500069 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:50Z","lastTransitionTime":"2025-12-10T15:24:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:50 crc kubenswrapper[4755]: I1210 15:24:50.602812 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:50 crc kubenswrapper[4755]: I1210 15:24:50.602884 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:50 crc kubenswrapper[4755]: I1210 15:24:50.602903 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:50 crc kubenswrapper[4755]: I1210 15:24:50.602930 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:50 crc kubenswrapper[4755]: I1210 15:24:50.602950 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:50Z","lastTransitionTime":"2025-12-10T15:24:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:50 crc kubenswrapper[4755]: I1210 15:24:50.705440 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:50 crc kubenswrapper[4755]: I1210 15:24:50.705544 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:50 crc kubenswrapper[4755]: I1210 15:24:50.705560 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:50 crc kubenswrapper[4755]: I1210 15:24:50.705580 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:50 crc kubenswrapper[4755]: I1210 15:24:50.705591 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:50Z","lastTransitionTime":"2025-12-10T15:24:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:50 crc kubenswrapper[4755]: I1210 15:24:50.808207 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:50 crc kubenswrapper[4755]: I1210 15:24:50.808267 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:50 crc kubenswrapper[4755]: I1210 15:24:50.808294 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:50 crc kubenswrapper[4755]: I1210 15:24:50.808309 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:50 crc kubenswrapper[4755]: I1210 15:24:50.808319 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:50Z","lastTransitionTime":"2025-12-10T15:24:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:50 crc kubenswrapper[4755]: I1210 15:24:50.912039 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:50 crc kubenswrapper[4755]: I1210 15:24:50.912113 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:50 crc kubenswrapper[4755]: I1210 15:24:50.912130 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:50 crc kubenswrapper[4755]: I1210 15:24:50.912155 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:50 crc kubenswrapper[4755]: I1210 15:24:50.912175 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:50Z","lastTransitionTime":"2025-12-10T15:24:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:51 crc kubenswrapper[4755]: I1210 15:24:51.015007 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:51 crc kubenswrapper[4755]: I1210 15:24:51.015085 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:51 crc kubenswrapper[4755]: I1210 15:24:51.015106 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:51 crc kubenswrapper[4755]: I1210 15:24:51.015135 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:51 crc kubenswrapper[4755]: I1210 15:24:51.015158 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:51Z","lastTransitionTime":"2025-12-10T15:24:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:51 crc kubenswrapper[4755]: I1210 15:24:51.118689 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:51 crc kubenswrapper[4755]: I1210 15:24:51.118746 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:51 crc kubenswrapper[4755]: I1210 15:24:51.118764 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:51 crc kubenswrapper[4755]: I1210 15:24:51.118786 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:51 crc kubenswrapper[4755]: I1210 15:24:51.118803 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:51Z","lastTransitionTime":"2025-12-10T15:24:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:51 crc kubenswrapper[4755]: I1210 15:24:51.222276 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:51 crc kubenswrapper[4755]: I1210 15:24:51.222367 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:51 crc kubenswrapper[4755]: I1210 15:24:51.222389 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:51 crc kubenswrapper[4755]: I1210 15:24:51.222430 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:51 crc kubenswrapper[4755]: I1210 15:24:51.222497 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:51Z","lastTransitionTime":"2025-12-10T15:24:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:51 crc kubenswrapper[4755]: I1210 15:24:51.326245 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:51 crc kubenswrapper[4755]: I1210 15:24:51.326298 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:51 crc kubenswrapper[4755]: I1210 15:24:51.326314 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:51 crc kubenswrapper[4755]: I1210 15:24:51.326338 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:51 crc kubenswrapper[4755]: I1210 15:24:51.326355 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:51Z","lastTransitionTime":"2025-12-10T15:24:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:51 crc kubenswrapper[4755]: I1210 15:24:51.429307 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:51 crc kubenswrapper[4755]: I1210 15:24:51.429349 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:51 crc kubenswrapper[4755]: I1210 15:24:51.429365 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:51 crc kubenswrapper[4755]: I1210 15:24:51.429389 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:51 crc kubenswrapper[4755]: I1210 15:24:51.429407 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:51Z","lastTransitionTime":"2025-12-10T15:24:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:51 crc kubenswrapper[4755]: I1210 15:24:51.532234 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:51 crc kubenswrapper[4755]: I1210 15:24:51.532320 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:51 crc kubenswrapper[4755]: I1210 15:24:51.532340 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:51 crc kubenswrapper[4755]: I1210 15:24:51.532365 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:51 crc kubenswrapper[4755]: I1210 15:24:51.532382 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:51Z","lastTransitionTime":"2025-12-10T15:24:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:51 crc kubenswrapper[4755]: I1210 15:24:51.635703 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:51 crc kubenswrapper[4755]: I1210 15:24:51.635768 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:51 crc kubenswrapper[4755]: I1210 15:24:51.635791 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:51 crc kubenswrapper[4755]: I1210 15:24:51.635823 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:51 crc kubenswrapper[4755]: I1210 15:24:51.635845 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:51Z","lastTransitionTime":"2025-12-10T15:24:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:51 crc kubenswrapper[4755]: I1210 15:24:51.739262 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:51 crc kubenswrapper[4755]: I1210 15:24:51.739333 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:51 crc kubenswrapper[4755]: I1210 15:24:51.739356 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:51 crc kubenswrapper[4755]: I1210 15:24:51.739380 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:51 crc kubenswrapper[4755]: I1210 15:24:51.739398 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:51Z","lastTransitionTime":"2025-12-10T15:24:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:51 crc kubenswrapper[4755]: I1210 15:24:51.756788 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 15:24:51 crc kubenswrapper[4755]: I1210 15:24:51.756928 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 15:24:51 crc kubenswrapper[4755]: I1210 15:24:51.756939 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 15:24:51 crc kubenswrapper[4755]: I1210 15:24:51.756804 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5ctz" Dec 10 15:24:51 crc kubenswrapper[4755]: E1210 15:24:51.757065 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 15:24:51 crc kubenswrapper[4755]: E1210 15:24:51.757227 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 15:24:51 crc kubenswrapper[4755]: E1210 15:24:51.757601 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5ctz" podUID="17673130-8212-4f8f-8859-92774f0ee202" Dec 10 15:24:51 crc kubenswrapper[4755]: E1210 15:24:51.757674 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 15:24:51 crc kubenswrapper[4755]: I1210 15:24:51.845798 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:51 crc kubenswrapper[4755]: I1210 15:24:51.845877 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:51 crc kubenswrapper[4755]: I1210 15:24:51.845905 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:51 crc kubenswrapper[4755]: I1210 15:24:51.846172 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:51 crc kubenswrapper[4755]: I1210 15:24:51.846206 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:51Z","lastTransitionTime":"2025-12-10T15:24:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:51 crc kubenswrapper[4755]: I1210 15:24:51.949790 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:51 crc kubenswrapper[4755]: I1210 15:24:51.949845 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:51 crc kubenswrapper[4755]: I1210 15:24:51.949860 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:51 crc kubenswrapper[4755]: I1210 15:24:51.949882 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:51 crc kubenswrapper[4755]: I1210 15:24:51.949897 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:51Z","lastTransitionTime":"2025-12-10T15:24:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:52 crc kubenswrapper[4755]: I1210 15:24:52.052961 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:52 crc kubenswrapper[4755]: I1210 15:24:52.053044 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:52 crc kubenswrapper[4755]: I1210 15:24:52.053061 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:52 crc kubenswrapper[4755]: I1210 15:24:52.053112 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:52 crc kubenswrapper[4755]: I1210 15:24:52.053131 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:52Z","lastTransitionTime":"2025-12-10T15:24:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:52 crc kubenswrapper[4755]: I1210 15:24:52.155952 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:52 crc kubenswrapper[4755]: I1210 15:24:52.155988 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:52 crc kubenswrapper[4755]: I1210 15:24:52.155996 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:52 crc kubenswrapper[4755]: I1210 15:24:52.156011 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:52 crc kubenswrapper[4755]: I1210 15:24:52.156023 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:52Z","lastTransitionTime":"2025-12-10T15:24:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:52 crc kubenswrapper[4755]: I1210 15:24:52.258326 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:52 crc kubenswrapper[4755]: I1210 15:24:52.258366 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:52 crc kubenswrapper[4755]: I1210 15:24:52.258379 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:52 crc kubenswrapper[4755]: I1210 15:24:52.258396 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:52 crc kubenswrapper[4755]: I1210 15:24:52.258409 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:52Z","lastTransitionTime":"2025-12-10T15:24:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:52 crc kubenswrapper[4755]: I1210 15:24:52.361259 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:52 crc kubenswrapper[4755]: I1210 15:24:52.361292 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:52 crc kubenswrapper[4755]: I1210 15:24:52.361303 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:52 crc kubenswrapper[4755]: I1210 15:24:52.361319 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:52 crc kubenswrapper[4755]: I1210 15:24:52.361330 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:52Z","lastTransitionTime":"2025-12-10T15:24:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:52 crc kubenswrapper[4755]: I1210 15:24:52.463686 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:52 crc kubenswrapper[4755]: I1210 15:24:52.463728 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:52 crc kubenswrapper[4755]: I1210 15:24:52.463740 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:52 crc kubenswrapper[4755]: I1210 15:24:52.463755 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:52 crc kubenswrapper[4755]: I1210 15:24:52.463766 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:52Z","lastTransitionTime":"2025-12-10T15:24:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:52 crc kubenswrapper[4755]: I1210 15:24:52.566185 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:52 crc kubenswrapper[4755]: I1210 15:24:52.566266 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:52 crc kubenswrapper[4755]: I1210 15:24:52.566292 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:52 crc kubenswrapper[4755]: I1210 15:24:52.566323 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:52 crc kubenswrapper[4755]: I1210 15:24:52.566348 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:52Z","lastTransitionTime":"2025-12-10T15:24:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:52 crc kubenswrapper[4755]: I1210 15:24:52.669619 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:52 crc kubenswrapper[4755]: I1210 15:24:52.669667 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:52 crc kubenswrapper[4755]: I1210 15:24:52.669679 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:52 crc kubenswrapper[4755]: I1210 15:24:52.669698 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:52 crc kubenswrapper[4755]: I1210 15:24:52.669710 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:52Z","lastTransitionTime":"2025-12-10T15:24:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:52 crc kubenswrapper[4755]: I1210 15:24:52.773137 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:52 crc kubenswrapper[4755]: I1210 15:24:52.773206 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:52 crc kubenswrapper[4755]: I1210 15:24:52.773228 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:52 crc kubenswrapper[4755]: I1210 15:24:52.773263 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:52 crc kubenswrapper[4755]: I1210 15:24:52.773285 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:52Z","lastTransitionTime":"2025-12-10T15:24:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:52 crc kubenswrapper[4755]: I1210 15:24:52.875555 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:52 crc kubenswrapper[4755]: I1210 15:24:52.875639 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:52 crc kubenswrapper[4755]: I1210 15:24:52.875654 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:52 crc kubenswrapper[4755]: I1210 15:24:52.875676 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:52 crc kubenswrapper[4755]: I1210 15:24:52.875692 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:52Z","lastTransitionTime":"2025-12-10T15:24:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:52 crc kubenswrapper[4755]: I1210 15:24:52.977579 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:52 crc kubenswrapper[4755]: I1210 15:24:52.977613 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:52 crc kubenswrapper[4755]: I1210 15:24:52.977624 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:52 crc kubenswrapper[4755]: I1210 15:24:52.977639 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:52 crc kubenswrapper[4755]: I1210 15:24:52.977650 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:52Z","lastTransitionTime":"2025-12-10T15:24:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:53 crc kubenswrapper[4755]: I1210 15:24:53.081290 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:53 crc kubenswrapper[4755]: I1210 15:24:53.081362 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:53 crc kubenswrapper[4755]: I1210 15:24:53.081374 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:53 crc kubenswrapper[4755]: I1210 15:24:53.081397 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:53 crc kubenswrapper[4755]: I1210 15:24:53.081411 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:53Z","lastTransitionTime":"2025-12-10T15:24:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:53 crc kubenswrapper[4755]: I1210 15:24:53.183481 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:53 crc kubenswrapper[4755]: I1210 15:24:53.183518 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:53 crc kubenswrapper[4755]: I1210 15:24:53.183531 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:53 crc kubenswrapper[4755]: I1210 15:24:53.183548 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:53 crc kubenswrapper[4755]: I1210 15:24:53.183562 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:53Z","lastTransitionTime":"2025-12-10T15:24:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:53 crc kubenswrapper[4755]: I1210 15:24:53.285498 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:53 crc kubenswrapper[4755]: I1210 15:24:53.285553 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:53 crc kubenswrapper[4755]: I1210 15:24:53.285563 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:53 crc kubenswrapper[4755]: I1210 15:24:53.285584 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:53 crc kubenswrapper[4755]: I1210 15:24:53.285598 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:53Z","lastTransitionTime":"2025-12-10T15:24:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:53 crc kubenswrapper[4755]: I1210 15:24:53.387461 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:53 crc kubenswrapper[4755]: I1210 15:24:53.387521 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:53 crc kubenswrapper[4755]: I1210 15:24:53.387534 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:53 crc kubenswrapper[4755]: I1210 15:24:53.387549 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:53 crc kubenswrapper[4755]: I1210 15:24:53.387560 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:53Z","lastTransitionTime":"2025-12-10T15:24:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:53 crc kubenswrapper[4755]: I1210 15:24:53.489888 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:53 crc kubenswrapper[4755]: I1210 15:24:53.489948 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:53 crc kubenswrapper[4755]: I1210 15:24:53.489960 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:53 crc kubenswrapper[4755]: I1210 15:24:53.489976 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:53 crc kubenswrapper[4755]: I1210 15:24:53.489990 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:53Z","lastTransitionTime":"2025-12-10T15:24:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:53 crc kubenswrapper[4755]: I1210 15:24:53.591905 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:53 crc kubenswrapper[4755]: I1210 15:24:53.592204 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:53 crc kubenswrapper[4755]: I1210 15:24:53.592224 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:53 crc kubenswrapper[4755]: I1210 15:24:53.592251 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:53 crc kubenswrapper[4755]: I1210 15:24:53.592262 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:53Z","lastTransitionTime":"2025-12-10T15:24:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:53 crc kubenswrapper[4755]: I1210 15:24:53.694427 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:53 crc kubenswrapper[4755]: I1210 15:24:53.694760 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:53 crc kubenswrapper[4755]: I1210 15:24:53.694831 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:53 crc kubenswrapper[4755]: I1210 15:24:53.694899 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:53 crc kubenswrapper[4755]: I1210 15:24:53.694969 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:53Z","lastTransitionTime":"2025-12-10T15:24:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:53 crc kubenswrapper[4755]: I1210 15:24:53.757089 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5ctz" Dec 10 15:24:53 crc kubenswrapper[4755]: I1210 15:24:53.757171 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 15:24:53 crc kubenswrapper[4755]: I1210 15:24:53.757233 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 15:24:53 crc kubenswrapper[4755]: I1210 15:24:53.757339 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 15:24:53 crc kubenswrapper[4755]: E1210 15:24:53.758782 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5ctz" podUID="17673130-8212-4f8f-8859-92774f0ee202" Dec 10 15:24:53 crc kubenswrapper[4755]: E1210 15:24:53.759084 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 15:24:53 crc kubenswrapper[4755]: E1210 15:24:53.759195 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 15:24:53 crc kubenswrapper[4755]: E1210 15:24:53.759325 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 15:24:53 crc kubenswrapper[4755]: I1210 15:24:53.770221 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81a321f6-db4b-46c6-bf24-1ad62da5992b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e68926ec899dad3c070810697fb078955e202f270724638719a7ea21d2debcb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://728ff8c02e5b0bea5514375b60c7a66f025e3c2a7163f0753c65d914b7873531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://865537b06aba1b617a1a16ef38c6b5be072501d1bbbd69e14eaaf2ce76c6b1da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54e0e014ecb31c499da20621a8b9e2ad6fdbdc2170a93636c828082b5b63b683\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e0e014ecb31c499da20621a8b9e2ad6fdbdc2170a93636c828082b5b63b683\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:53Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:53 crc kubenswrapper[4755]: I1210 15:24:53.782643 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d089a8bddfa1f80d29011c7a6bab0a300f7dd44bdb2864f86951ebbb9ebea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:53Z is after 
2025-08-24T17:21:41Z" Dec 10 15:24:53 crc kubenswrapper[4755]: I1210 15:24:53.794572 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01ddc4056319db1f69268dcae192c3cb9db6c25284305803ae7588e59f77c346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aae6bf174cdc9bde18d7c959e976454e73c1e67642f0158d365b79582f63f3cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:53Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:53 crc kubenswrapper[4755]: I1210 15:24:53.797917 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:53 crc kubenswrapper[4755]: I1210 15:24:53.797950 
4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:53 crc kubenswrapper[4755]: I1210 15:24:53.797960 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:53 crc kubenswrapper[4755]: I1210 15:24:53.797974 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:53 crc kubenswrapper[4755]: I1210 15:24:53.797983 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:53Z","lastTransitionTime":"2025-12-10T15:24:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:53 crc kubenswrapper[4755]: I1210 15:24:53.808447 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:53Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:53 crc kubenswrapper[4755]: I1210 15:24:53.820350 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zl2tx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"796da6d5-6ccd-4786-a03e-9a8e47a55031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e0f974f9ba614dcaef08cf7168b77eeee007dfe65cc4e32df9b8e45005ff4ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de63a123c46563bd8cd07e669d192bc8b019a889b9bdb7af1b988872c8f1fc48\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T15:24:30Z\\\",\\\"message\\\":\\\"2025-12-10T15:23:45+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5295941e-ea46-4d40-a35d-bd2fd673e118\\\\n2025-12-10T15:23:45+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5295941e-ea46-4d40-a35d-bd2fd673e118 to /host/opt/cni/bin/\\\\n2025-12-10T15:23:45Z [verbose] multus-daemon started\\\\n2025-12-10T15:23:45Z [verbose] Readiness Indicator file check\\\\n2025-12-10T15:24:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:24:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzg4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zl2tx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:53Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:53 crc kubenswrapper[4755]: I1210 15:24:53.830698 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b132a8b9-1c99-414d-8773-229bf36b305d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5a5a59e9f156fb791ec822c2d5efe3fc6ec0e84bfcb2b6f5da81396951984c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2mdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40c4bdaa23a60a665b8f565720d79b68cac62d40246be94fc6cd314b1bb3656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2mdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ggt8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:53Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:53 crc kubenswrapper[4755]: I1210 15:24:53.847230 4755 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aed6bb9-78c2-410b-9c58-b60ab22a7bd8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70d014e7227746c46b30f8f5a1f307a422d2fa0d4b98d98bfe5ba6217223489e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://104d730ea666ffd2d639ba7fc8d1ac5b444586788a62656fd260cb249f82b1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09dbec85547c7170bb9551e5657876814d48528e3047daf3547711a563d2b6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8bf1de0d8bdc0a20bc42ba5097
d849ae0f507e5fbc18fb17b4ff3650e46ff0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:53Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:53 crc kubenswrapper[4755]: I1210 15:24:53.860156 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n8c6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b56ae78-835a-45da-bc46-5adff2bdf9fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8957db8ccef7b3c449920471d345aa81ce9a7ab9be36b2350ec428aebec7ac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b9a1485f6d0c0e53fd3505623b2788476bc00abea2f11650386eac40e449bcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9a1485f6d0c0e53fd3505623b2788476bc00abea2f11650386eac40e449bcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a532015d196c588ad3dd6c43d9fa3772ba2a42bd1b2231f830b60adb0addab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a532015d196c588ad3dd6c43d9fa3772ba2a42bd1b2231f830b60adb0addab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://417e326f541e04df27d540c14ec572bb8f1d749a9e3dc88f483241d69f7d0679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://417e326f541e04df27d540c14ec572bb8f1d749a9e3dc88f483241d69f7d0679\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2025-12-10T15:23:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42867f7a1f6aceaf708ad650b7634bdafb3f850872c540d424916440014e7941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42867f7a1f6aceaf708ad650b7634bdafb3f850872c540d424916440014e7941\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f2d22bfc26958a3aad99de39f5c831d36c1ba4f563851baafc73735e80d5c91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f2d22bfc26958a3aad99de39f5c831d36c1ba4f563851baafc73735e80d5c91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://548d326fc5c14c16176d4fff3446d93c17322c11d2a34e9e42376b665de3848d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://548d326fc5c14c16176d4fff3446d93c17322c11d2a34e9e42376b665de3848d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zbxwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n8c6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:53Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:53 crc kubenswrapper[4755]: I1210 15:24:53.869247 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qnmst" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1693fcf1-bef4-4f82-8dd8-f1797b03f5e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8fd6a2ab2b9557574951c0ebbccd663fe576262b0de2c3c655427c977f62d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hzdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\
\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qnmst\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:53Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:53 crc kubenswrapper[4755]: I1210 15:24:53.880126 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-82wnw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0a2f42c-a60e-4350-9ebb-c28d3cbfdad7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7af643111a7a5d1d78b0412b1621b5ffac6389760ea9190e26e9e5d1704eed4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xbzv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://915c918562f8a69a28cbcf29e427bfdd94477193b98d5a2da9ad45cb4b44fa03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xbzv\\\",\\\"readOnly\\\":true,\\\"recursiveR
eadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-82wnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:53Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:53 crc kubenswrapper[4755]: I1210 15:24:53.899766 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fa03060-a6e0-4aad-9aa1-43b1a0d00c85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a823333ed5cb5de3988d25e50e4b7a0f9071c76fb39c22760a4f2acd5eb455d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d828f3f3b90a2dcb1c1908e6a686368af5b0d715b3251e4b8fcf3c8818ec75a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",
\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://172ab3fce08c8ba4095ff4095c89364a778644752bd7bb6c178d6e3ebcface69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfbb8e350ad18b78a6bcf6cfa4eb8f2fad90f970dba08a8b1b2026af6f255e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4b2bf0c88a16fd6b4b45a20730a92292895dc9f29ce756d347c302b25a8612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55da309288bc2c7ff1f22da0569d18155796700c200fe3ed508f6f8179756865\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55da309288bc2c7ff1f22da0569d18155796700c200fe3ed508f6f8179756865\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\
"cri-o://219c531368bb5379f3726d94ab0afc0f33eedbf91b0a4dbbe0757c6e232f79ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://219c531368bb5379f3726d94ab0afc0f33eedbf91b0a4dbbe0757c6e232f79ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f9da8f3d4f85ec83c16b71ecd983ff4f48049eb8e73461a2b46c4f66d71ed06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9da8f3d4f85ec83c16b71ecd983ff4f48049eb8e73461a2b46c4f66d71ed06e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:53Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:53 crc kubenswrapper[4755]: I1210 15:24:53.900324 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:53 crc kubenswrapper[4755]: I1210 15:24:53.900369 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:53 crc kubenswrapper[4755]: I1210 15:24:53.900382 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:53 crc kubenswrapper[4755]: I1210 15:24:53.900398 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:53 crc kubenswrapper[4755]: I1210 15:24:53.900409 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:53Z","lastTransitionTime":"2025-12-10T15:24:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:53 crc kubenswrapper[4755]: I1210 15:24:53.911318 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:53Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:53 crc kubenswrapper[4755]: I1210 15:24:53.920511 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c371b199c3643a68ea5eba935eb76a1b7e8a4027c9292a1324116ccfc14742ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:53Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:53 crc kubenswrapper[4755]: I1210 15:24:53.929592 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wv8fh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d150a22e-c59a-4376-a5c8-db4085ea0be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58d106c5d1b9525ec821d009ca556449cd8d7f0e1b9c8ec7dd969df996e73625\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfw9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wv8fh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:53Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:53 crc kubenswrapper[4755]: I1210 15:24:53.938019 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q5ctz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17673130-8212-4f8f-8859-92774f0ee202\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcscn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcscn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q5ctz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:53Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:53 crc kubenswrapper[4755]: I1210 15:24:53.947240 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc57cd7-2dd9-45cb-bc66-4c0d7a0cf043\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7446bf39567cbd3f5a0a9a1252748e01144968fd0a67b39d8c32b326a13dec38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c0f838adfa6e512d9529c4999cc194d978276c8736abccd0ef75d5b7e9b6e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34c0f838adfa6e512d9529c4999cc194d978276c8736abccd0ef75d5b7e9b6e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:53Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:53 crc kubenswrapper[4755]: I1210 15:24:53.957633 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e6db97-7e22-4fec-924e-20e90f463887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef3871ddc16d2fc1b89a6f120390cf066424bd9ed4cca98f8dea4430c40c6c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08895c3ab20248c4bd70ccfae6442bdf8e76a602e605b043f0f00166624ada17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b2467645116856afa1e1e37501c7b453b03da193a96d1075886b85839a86ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16e58653dca08c9b33671279553975bca9507ffeeefea161119cec624813f167\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea18ef6c319336321d97c2a55449903b837b5a47da785da3ff5308244751943\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:53Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:53 crc kubenswrapper[4755]: I1210 15:24:53.967635 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:53Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:53 crc kubenswrapper[4755]: I1210 15:24:53.986737 4755 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b1da51a-99c9-4f8e-920d-ce0973af6370\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T15:23:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6eb065dc6c0cc8914cb95553eb2683d894fb9a4e78ce7fac73bcce8d7f6cced9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e547993b9f2fa37bf924f909c47b62eb0cc02b659596b1cad9bbc42fdde8f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ba47683cc23d5b531a45f0658b6a9378650400b35b5372642b0430a5ac503f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a75407e83508af9adebb09c6466a966dd791d29f690c539656f9bd3396d7031\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://602b4e49987fa2cc6b54b822110aececbdddaf2bce8f27cce4ed906768d45791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://335bcab3a79f09796e97560365e1211fb30ddf288f4773c05ab353197add4365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0386a60f9d2d9c0cec943720b300e0cd71348b81
b74234f19f1c51d34142b089\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0386a60f9d2d9c0cec943720b300e0cd71348b81b74234f19f1c51d34142b089\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T15:24:44Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1210 15:24:44.102698 6774 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1210 15:24:44.102724 6774 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1210 15:24:44.102737 6774 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1210 15:24:44.102750 6774 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1210 15:24:44.102755 6774 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1210 15:24:44.102760 6774 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1210 15:24:44.102777 6774 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1210 15:24:44.102789 6774 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1210 15:24:44.102799 6774 handler.go:208] Removed *v1.Node event handler 2\\\\nI1210 15:24:44.102813 6774 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1210 15:24:44.102822 6774 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1210 15:24:44.102847 6774 factory.go:656] Stopping watch factory\\\\nI1210 15:24:44.102860 6774 ovnkube.go:599] Stopped ovnkube\\\\nI1210 15:24:44.102878 6774 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1210 15:24:44.102878 6774 handler.go:208] Removed *v1.Namespace even\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T15:24:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6lfvk_openshift-ovn-kubernetes(4b1da51a-99c9-4f8e-920d-ce0973af6370)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59bf59d1b7fbc365a916fbbceca7ae30b7ebc754b34f2f7a34c2e21e1e1d2166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T15:23:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://375efb27cf6bef06a6a0ffc8f6b2963e1b9ffbca18b3c8e7b402bc552a411f6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://375efb27cf6bef06a6a0ffc8f6b2963e1b9ffbca18b3c8e7b402bc552a411f6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T15:23:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T15:23:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zmtm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T15:23:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6lfvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T15:24:53Z is after 2025-08-24T17:21:41Z" Dec 10 15:24:54 crc kubenswrapper[4755]: I1210 15:24:54.002562 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:54 crc kubenswrapper[4755]: I1210 15:24:54.002606 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:54 crc kubenswrapper[4755]: I1210 15:24:54.002622 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:54 crc kubenswrapper[4755]: I1210 15:24:54.002638 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:54 crc kubenswrapper[4755]: I1210 15:24:54.002650 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:54Z","lastTransitionTime":"2025-12-10T15:24:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:54 crc kubenswrapper[4755]: I1210 15:24:54.105289 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:54 crc kubenswrapper[4755]: I1210 15:24:54.105361 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:54 crc kubenswrapper[4755]: I1210 15:24:54.105376 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:54 crc kubenswrapper[4755]: I1210 15:24:54.105400 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:54 crc kubenswrapper[4755]: I1210 15:24:54.105416 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:54Z","lastTransitionTime":"2025-12-10T15:24:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:54 crc kubenswrapper[4755]: I1210 15:24:54.207503 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:54 crc kubenswrapper[4755]: I1210 15:24:54.207554 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:54 crc kubenswrapper[4755]: I1210 15:24:54.207568 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:54 crc kubenswrapper[4755]: I1210 15:24:54.207590 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:54 crc kubenswrapper[4755]: I1210 15:24:54.207604 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:54Z","lastTransitionTime":"2025-12-10T15:24:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:54 crc kubenswrapper[4755]: I1210 15:24:54.309095 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:54 crc kubenswrapper[4755]: I1210 15:24:54.309147 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:54 crc kubenswrapper[4755]: I1210 15:24:54.309159 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:54 crc kubenswrapper[4755]: I1210 15:24:54.309176 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:54 crc kubenswrapper[4755]: I1210 15:24:54.309188 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:54Z","lastTransitionTime":"2025-12-10T15:24:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:54 crc kubenswrapper[4755]: I1210 15:24:54.412120 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:54 crc kubenswrapper[4755]: I1210 15:24:54.412159 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:54 crc kubenswrapper[4755]: I1210 15:24:54.412169 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:54 crc kubenswrapper[4755]: I1210 15:24:54.412184 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:54 crc kubenswrapper[4755]: I1210 15:24:54.412194 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:54Z","lastTransitionTime":"2025-12-10T15:24:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:54 crc kubenswrapper[4755]: I1210 15:24:54.514847 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:54 crc kubenswrapper[4755]: I1210 15:24:54.514881 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:54 crc kubenswrapper[4755]: I1210 15:24:54.514890 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:54 crc kubenswrapper[4755]: I1210 15:24:54.514904 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:54 crc kubenswrapper[4755]: I1210 15:24:54.514912 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:54Z","lastTransitionTime":"2025-12-10T15:24:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:54 crc kubenswrapper[4755]: I1210 15:24:54.694330 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:54 crc kubenswrapper[4755]: I1210 15:24:54.694412 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:54 crc kubenswrapper[4755]: I1210 15:24:54.694435 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:54 crc kubenswrapper[4755]: I1210 15:24:54.694461 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:54 crc kubenswrapper[4755]: I1210 15:24:54.694563 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:54Z","lastTransitionTime":"2025-12-10T15:24:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:54 crc kubenswrapper[4755]: I1210 15:24:54.796527 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:54 crc kubenswrapper[4755]: I1210 15:24:54.796579 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:54 crc kubenswrapper[4755]: I1210 15:24:54.796590 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:54 crc kubenswrapper[4755]: I1210 15:24:54.796612 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:54 crc kubenswrapper[4755]: I1210 15:24:54.796626 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:54Z","lastTransitionTime":"2025-12-10T15:24:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:54 crc kubenswrapper[4755]: I1210 15:24:54.899122 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:54 crc kubenswrapper[4755]: I1210 15:24:54.899165 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:54 crc kubenswrapper[4755]: I1210 15:24:54.899176 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:54 crc kubenswrapper[4755]: I1210 15:24:54.899193 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:54 crc kubenswrapper[4755]: I1210 15:24:54.899205 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:54Z","lastTransitionTime":"2025-12-10T15:24:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:55 crc kubenswrapper[4755]: I1210 15:24:55.002502 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:55 crc kubenswrapper[4755]: I1210 15:24:55.002541 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:55 crc kubenswrapper[4755]: I1210 15:24:55.002553 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:55 crc kubenswrapper[4755]: I1210 15:24:55.002570 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:55 crc kubenswrapper[4755]: I1210 15:24:55.002580 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:55Z","lastTransitionTime":"2025-12-10T15:24:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:55 crc kubenswrapper[4755]: I1210 15:24:55.105000 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:55 crc kubenswrapper[4755]: I1210 15:24:55.105038 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:55 crc kubenswrapper[4755]: I1210 15:24:55.105052 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:55 crc kubenswrapper[4755]: I1210 15:24:55.105067 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:55 crc kubenswrapper[4755]: I1210 15:24:55.105079 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:55Z","lastTransitionTime":"2025-12-10T15:24:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:55 crc kubenswrapper[4755]: I1210 15:24:55.207524 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:55 crc kubenswrapper[4755]: I1210 15:24:55.207589 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:55 crc kubenswrapper[4755]: I1210 15:24:55.207602 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:55 crc kubenswrapper[4755]: I1210 15:24:55.207620 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:55 crc kubenswrapper[4755]: I1210 15:24:55.207632 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:55Z","lastTransitionTime":"2025-12-10T15:24:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:55 crc kubenswrapper[4755]: I1210 15:24:55.309542 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:55 crc kubenswrapper[4755]: I1210 15:24:55.309607 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:55 crc kubenswrapper[4755]: I1210 15:24:55.309619 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:55 crc kubenswrapper[4755]: I1210 15:24:55.309637 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:55 crc kubenswrapper[4755]: I1210 15:24:55.309652 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:55Z","lastTransitionTime":"2025-12-10T15:24:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:55 crc kubenswrapper[4755]: I1210 15:24:55.413206 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:55 crc kubenswrapper[4755]: I1210 15:24:55.413528 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:55 crc kubenswrapper[4755]: I1210 15:24:55.413545 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:55 crc kubenswrapper[4755]: I1210 15:24:55.413566 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:55 crc kubenswrapper[4755]: I1210 15:24:55.413588 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:55Z","lastTransitionTime":"2025-12-10T15:24:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:55 crc kubenswrapper[4755]: I1210 15:24:55.516684 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:55 crc kubenswrapper[4755]: I1210 15:24:55.516731 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:55 crc kubenswrapper[4755]: I1210 15:24:55.516746 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:55 crc kubenswrapper[4755]: I1210 15:24:55.516766 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:55 crc kubenswrapper[4755]: I1210 15:24:55.516780 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:55Z","lastTransitionTime":"2025-12-10T15:24:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:55 crc kubenswrapper[4755]: I1210 15:24:55.618996 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:55 crc kubenswrapper[4755]: I1210 15:24:55.619048 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:55 crc kubenswrapper[4755]: I1210 15:24:55.619060 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:55 crc kubenswrapper[4755]: I1210 15:24:55.619079 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:55 crc kubenswrapper[4755]: I1210 15:24:55.619092 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:55Z","lastTransitionTime":"2025-12-10T15:24:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:55 crc kubenswrapper[4755]: I1210 15:24:55.721088 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:55 crc kubenswrapper[4755]: I1210 15:24:55.721127 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:55 crc kubenswrapper[4755]: I1210 15:24:55.721139 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:55 crc kubenswrapper[4755]: I1210 15:24:55.721156 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:55 crc kubenswrapper[4755]: I1210 15:24:55.721167 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:55Z","lastTransitionTime":"2025-12-10T15:24:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:55 crc kubenswrapper[4755]: I1210 15:24:55.758733 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 15:24:55 crc kubenswrapper[4755]: I1210 15:24:55.758771 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 15:24:55 crc kubenswrapper[4755]: I1210 15:24:55.758825 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 15:24:55 crc kubenswrapper[4755]: E1210 15:24:55.758860 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 15:24:55 crc kubenswrapper[4755]: E1210 15:24:55.758920 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 15:24:55 crc kubenswrapper[4755]: E1210 15:24:55.759019 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 15:24:55 crc kubenswrapper[4755]: I1210 15:24:55.759166 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5ctz" Dec 10 15:24:55 crc kubenswrapper[4755]: E1210 15:24:55.759266 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5ctz" podUID="17673130-8212-4f8f-8859-92774f0ee202" Dec 10 15:24:55 crc kubenswrapper[4755]: I1210 15:24:55.823882 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:55 crc kubenswrapper[4755]: I1210 15:24:55.823930 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:55 crc kubenswrapper[4755]: I1210 15:24:55.823942 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:55 crc kubenswrapper[4755]: I1210 15:24:55.823958 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:55 crc kubenswrapper[4755]: I1210 15:24:55.823970 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:55Z","lastTransitionTime":"2025-12-10T15:24:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:55 crc kubenswrapper[4755]: I1210 15:24:55.926842 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:55 crc kubenswrapper[4755]: I1210 15:24:55.926889 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:55 crc kubenswrapper[4755]: I1210 15:24:55.926906 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:55 crc kubenswrapper[4755]: I1210 15:24:55.926923 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:55 crc kubenswrapper[4755]: I1210 15:24:55.926934 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:55Z","lastTransitionTime":"2025-12-10T15:24:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:56 crc kubenswrapper[4755]: I1210 15:24:56.030105 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:56 crc kubenswrapper[4755]: I1210 15:24:56.030148 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:56 crc kubenswrapper[4755]: I1210 15:24:56.030159 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:56 crc kubenswrapper[4755]: I1210 15:24:56.030180 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:56 crc kubenswrapper[4755]: I1210 15:24:56.030194 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:56Z","lastTransitionTime":"2025-12-10T15:24:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:56 crc kubenswrapper[4755]: I1210 15:24:56.133353 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:56 crc kubenswrapper[4755]: I1210 15:24:56.133662 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:56 crc kubenswrapper[4755]: I1210 15:24:56.133748 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:56 crc kubenswrapper[4755]: I1210 15:24:56.133834 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:56 crc kubenswrapper[4755]: I1210 15:24:56.133907 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:56Z","lastTransitionTime":"2025-12-10T15:24:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:56 crc kubenswrapper[4755]: I1210 15:24:56.237018 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:56 crc kubenswrapper[4755]: I1210 15:24:56.237091 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:56 crc kubenswrapper[4755]: I1210 15:24:56.237115 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:56 crc kubenswrapper[4755]: I1210 15:24:56.237145 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:56 crc kubenswrapper[4755]: I1210 15:24:56.237172 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:56Z","lastTransitionTime":"2025-12-10T15:24:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:56 crc kubenswrapper[4755]: I1210 15:24:56.339636 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:56 crc kubenswrapper[4755]: I1210 15:24:56.339733 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:56 crc kubenswrapper[4755]: I1210 15:24:56.339753 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:56 crc kubenswrapper[4755]: I1210 15:24:56.339778 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:56 crc kubenswrapper[4755]: I1210 15:24:56.339796 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:56Z","lastTransitionTime":"2025-12-10T15:24:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:56 crc kubenswrapper[4755]: I1210 15:24:56.442562 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:56 crc kubenswrapper[4755]: I1210 15:24:56.442613 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:56 crc kubenswrapper[4755]: I1210 15:24:56.442632 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:56 crc kubenswrapper[4755]: I1210 15:24:56.442650 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:56 crc kubenswrapper[4755]: I1210 15:24:56.442663 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:56Z","lastTransitionTime":"2025-12-10T15:24:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:56 crc kubenswrapper[4755]: I1210 15:24:56.546322 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:56 crc kubenswrapper[4755]: I1210 15:24:56.546380 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:56 crc kubenswrapper[4755]: I1210 15:24:56.546398 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:56 crc kubenswrapper[4755]: I1210 15:24:56.546423 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:56 crc kubenswrapper[4755]: I1210 15:24:56.546440 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:56Z","lastTransitionTime":"2025-12-10T15:24:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:56 crc kubenswrapper[4755]: I1210 15:24:56.648615 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:56 crc kubenswrapper[4755]: I1210 15:24:56.648658 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:56 crc kubenswrapper[4755]: I1210 15:24:56.648669 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:56 crc kubenswrapper[4755]: I1210 15:24:56.648688 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:56 crc kubenswrapper[4755]: I1210 15:24:56.648701 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:56Z","lastTransitionTime":"2025-12-10T15:24:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:56 crc kubenswrapper[4755]: I1210 15:24:56.751537 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:56 crc kubenswrapper[4755]: I1210 15:24:56.751598 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:56 crc kubenswrapper[4755]: I1210 15:24:56.751620 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:56 crc kubenswrapper[4755]: I1210 15:24:56.751647 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:56 crc kubenswrapper[4755]: I1210 15:24:56.751672 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:56Z","lastTransitionTime":"2025-12-10T15:24:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:56 crc kubenswrapper[4755]: I1210 15:24:56.855054 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:56 crc kubenswrapper[4755]: I1210 15:24:56.855101 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:56 crc kubenswrapper[4755]: I1210 15:24:56.855113 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:56 crc kubenswrapper[4755]: I1210 15:24:56.855131 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:56 crc kubenswrapper[4755]: I1210 15:24:56.855143 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:56Z","lastTransitionTime":"2025-12-10T15:24:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:56 crc kubenswrapper[4755]: I1210 15:24:56.957775 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:56 crc kubenswrapper[4755]: I1210 15:24:56.957814 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:56 crc kubenswrapper[4755]: I1210 15:24:56.957826 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:56 crc kubenswrapper[4755]: I1210 15:24:56.957844 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:56 crc kubenswrapper[4755]: I1210 15:24:56.957856 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:56Z","lastTransitionTime":"2025-12-10T15:24:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:57 crc kubenswrapper[4755]: I1210 15:24:57.060773 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:57 crc kubenswrapper[4755]: I1210 15:24:57.060821 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:57 crc kubenswrapper[4755]: I1210 15:24:57.060832 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:57 crc kubenswrapper[4755]: I1210 15:24:57.060851 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:57 crc kubenswrapper[4755]: I1210 15:24:57.060867 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:57Z","lastTransitionTime":"2025-12-10T15:24:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:57 crc kubenswrapper[4755]: I1210 15:24:57.162778 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:57 crc kubenswrapper[4755]: I1210 15:24:57.162801 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:57 crc kubenswrapper[4755]: I1210 15:24:57.162809 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:57 crc kubenswrapper[4755]: I1210 15:24:57.162825 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:57 crc kubenswrapper[4755]: I1210 15:24:57.162833 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:57Z","lastTransitionTime":"2025-12-10T15:24:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:57 crc kubenswrapper[4755]: I1210 15:24:57.265762 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:57 crc kubenswrapper[4755]: I1210 15:24:57.265824 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:57 crc kubenswrapper[4755]: I1210 15:24:57.265842 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:57 crc kubenswrapper[4755]: I1210 15:24:57.265867 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:57 crc kubenswrapper[4755]: I1210 15:24:57.265885 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:57Z","lastTransitionTime":"2025-12-10T15:24:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:57 crc kubenswrapper[4755]: I1210 15:24:57.368317 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:57 crc kubenswrapper[4755]: I1210 15:24:57.368367 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:57 crc kubenswrapper[4755]: I1210 15:24:57.368380 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:57 crc kubenswrapper[4755]: I1210 15:24:57.368399 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:57 crc kubenswrapper[4755]: I1210 15:24:57.368412 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:57Z","lastTransitionTime":"2025-12-10T15:24:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:57 crc kubenswrapper[4755]: I1210 15:24:57.471810 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:57 crc kubenswrapper[4755]: I1210 15:24:57.471862 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:57 crc kubenswrapper[4755]: I1210 15:24:57.471875 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:57 crc kubenswrapper[4755]: I1210 15:24:57.471892 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:57 crc kubenswrapper[4755]: I1210 15:24:57.471987 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:57Z","lastTransitionTime":"2025-12-10T15:24:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:57 crc kubenswrapper[4755]: I1210 15:24:57.574332 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:57 crc kubenswrapper[4755]: I1210 15:24:57.574378 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:57 crc kubenswrapper[4755]: I1210 15:24:57.574389 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:57 crc kubenswrapper[4755]: I1210 15:24:57.574408 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:57 crc kubenswrapper[4755]: I1210 15:24:57.574426 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:57Z","lastTransitionTime":"2025-12-10T15:24:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:57 crc kubenswrapper[4755]: I1210 15:24:57.677793 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:57 crc kubenswrapper[4755]: I1210 15:24:57.677832 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:57 crc kubenswrapper[4755]: I1210 15:24:57.677842 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:57 crc kubenswrapper[4755]: I1210 15:24:57.677858 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:57 crc kubenswrapper[4755]: I1210 15:24:57.677870 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:57Z","lastTransitionTime":"2025-12-10T15:24:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:57 crc kubenswrapper[4755]: I1210 15:24:57.757069 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 15:24:57 crc kubenswrapper[4755]: I1210 15:24:57.757146 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 15:24:57 crc kubenswrapper[4755]: E1210 15:24:57.757200 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 15:24:57 crc kubenswrapper[4755]: I1210 15:24:57.757239 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 15:24:57 crc kubenswrapper[4755]: I1210 15:24:57.757247 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5ctz" Dec 10 15:24:57 crc kubenswrapper[4755]: E1210 15:24:57.757459 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 15:24:57 crc kubenswrapper[4755]: E1210 15:24:57.757568 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5ctz" podUID="17673130-8212-4f8f-8859-92774f0ee202" Dec 10 15:24:57 crc kubenswrapper[4755]: E1210 15:24:57.757669 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 15:24:57 crc kubenswrapper[4755]: I1210 15:24:57.779820 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:57 crc kubenswrapper[4755]: I1210 15:24:57.779881 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:57 crc kubenswrapper[4755]: I1210 15:24:57.779898 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:57 crc kubenswrapper[4755]: I1210 15:24:57.779922 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:57 crc kubenswrapper[4755]: I1210 15:24:57.779963 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:57Z","lastTransitionTime":"2025-12-10T15:24:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 15:24:57 crc kubenswrapper[4755]: I1210 15:24:57.879242 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 15:24:57 crc kubenswrapper[4755]: I1210 15:24:57.879294 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 15:24:57 crc kubenswrapper[4755]: I1210 15:24:57.879310 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 15:24:57 crc kubenswrapper[4755]: I1210 15:24:57.879332 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 15:24:57 crc kubenswrapper[4755]: I1210 15:24:57.879348 4755 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T15:24:57Z","lastTransitionTime":"2025-12-10T15:24:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 15:24:57 crc kubenswrapper[4755]: I1210 15:24:57.928102 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-hsdkf"] Dec 10 15:24:57 crc kubenswrapper[4755]: I1210 15:24:57.929033 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hsdkf" Dec 10 15:24:57 crc kubenswrapper[4755]: I1210 15:24:57.932020 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 10 15:24:57 crc kubenswrapper[4755]: I1210 15:24:57.932084 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 10 15:24:57 crc kubenswrapper[4755]: I1210 15:24:57.932576 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 10 15:24:57 crc kubenswrapper[4755]: I1210 15:24:57.932762 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 10 15:24:57 crc kubenswrapper[4755]: I1210 15:24:57.982377 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=72.982344289 podStartE2EDuration="1m12.982344289s" podCreationTimestamp="2025-12-10 15:23:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:24:57.967094154 +0000 UTC m=+94.567977796" watchObservedRunningTime="2025-12-10 15:24:57.982344289 +0000 UTC m=+94.583227961" Dec 10 15:24:58 crc kubenswrapper[4755]: I1210 15:24:58.009751 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-wv8fh" podStartSLOduration=75.00973066 podStartE2EDuration="1m15.00973066s" podCreationTimestamp="2025-12-10 15:23:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:24:58.009673399 +0000 UTC m=+94.610557061" watchObservedRunningTime="2025-12-10 15:24:58.00973066 +0000 UTC m=+94.610614302" Dec 10 15:24:58 crc kubenswrapper[4755]: I1210 15:24:58.023368 
4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/c45919b3-3bbb-40a2-a729-5c45cd981555-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-hsdkf\" (UID: \"c45919b3-3bbb-40a2-a729-5c45cd981555\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hsdkf" Dec 10 15:24:58 crc kubenswrapper[4755]: I1210 15:24:58.023434 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c45919b3-3bbb-40a2-a729-5c45cd981555-service-ca\") pod \"cluster-version-operator-5c965bbfc6-hsdkf\" (UID: \"c45919b3-3bbb-40a2-a729-5c45cd981555\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hsdkf" Dec 10 15:24:58 crc kubenswrapper[4755]: I1210 15:24:58.023490 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c45919b3-3bbb-40a2-a729-5c45cd981555-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-hsdkf\" (UID: \"c45919b3-3bbb-40a2-a729-5c45cd981555\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hsdkf" Dec 10 15:24:58 crc kubenswrapper[4755]: I1210 15:24:58.023516 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c45919b3-3bbb-40a2-a729-5c45cd981555-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-hsdkf\" (UID: \"c45919b3-3bbb-40a2-a729-5c45cd981555\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hsdkf" Dec 10 15:24:58 crc kubenswrapper[4755]: I1210 15:24:58.023538 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/c45919b3-3bbb-40a2-a729-5c45cd981555-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-hsdkf\" (UID: \"c45919b3-3bbb-40a2-a729-5c45cd981555\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hsdkf" Dec 10 15:24:58 crc kubenswrapper[4755]: I1210 15:24:58.045349 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=19.045328334 podStartE2EDuration="19.045328334s" podCreationTimestamp="2025-12-10 15:24:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:24:58.045118488 +0000 UTC m=+94.646002140" watchObservedRunningTime="2025-12-10 15:24:58.045328334 +0000 UTC m=+94.646211976" Dec 10 15:24:58 crc kubenswrapper[4755]: I1210 15:24:58.063561 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=76.063539836 podStartE2EDuration="1m16.063539836s" podCreationTimestamp="2025-12-10 15:23:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:24:58.063438653 +0000 UTC m=+94.664322295" watchObservedRunningTime="2025-12-10 15:24:58.063539836 +0000 UTC m=+94.664423478" Dec 10 15:24:58 crc kubenswrapper[4755]: I1210 15:24:58.117366 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
podStartSLOduration=41.117345701 podStartE2EDuration="41.117345701s" podCreationTimestamp="2025-12-10 15:24:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:24:58.116388627 +0000 UTC m=+94.717272289" watchObservedRunningTime="2025-12-10 15:24:58.117345701 +0000 UTC m=+94.718229333" Dec 10 15:24:58 crc kubenswrapper[4755]: I1210 15:24:58.124554 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/c45919b3-3bbb-40a2-a729-5c45cd981555-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-hsdkf\" (UID: \"c45919b3-3bbb-40a2-a729-5c45cd981555\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hsdkf" Dec 10 15:24:58 crc kubenswrapper[4755]: I1210 15:24:58.124610 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c45919b3-3bbb-40a2-a729-5c45cd981555-service-ca\") pod \"cluster-version-operator-5c965bbfc6-hsdkf\" (UID: \"c45919b3-3bbb-40a2-a729-5c45cd981555\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hsdkf" Dec 10 15:24:58 crc kubenswrapper[4755]: I1210 15:24:58.124641 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c45919b3-3bbb-40a2-a729-5c45cd981555-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-hsdkf\" (UID: \"c45919b3-3bbb-40a2-a729-5c45cd981555\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hsdkf" Dec 10 15:24:58 crc kubenswrapper[4755]: I1210 15:24:58.124669 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c45919b3-3bbb-40a2-a729-5c45cd981555-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-hsdkf\" (UID: \"c45919b3-3bbb-40a2-a729-5c45cd981555\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hsdkf" Dec 10 15:24:58 crc kubenswrapper[4755]: I1210 15:24:58.124692 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/c45919b3-3bbb-40a2-a729-5c45cd981555-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-hsdkf\" (UID: \"c45919b3-3bbb-40a2-a729-5c45cd981555\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hsdkf" Dec 10 15:24:58 crc kubenswrapper[4755]: I1210 15:24:58.124747 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/c45919b3-3bbb-40a2-a729-5c45cd981555-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-hsdkf\" (UID: \"c45919b3-3bbb-40a2-a729-5c45cd981555\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hsdkf" Dec 10 15:24:58 crc kubenswrapper[4755]: I1210 15:24:58.124693 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/c45919b3-3bbb-40a2-a729-5c45cd981555-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-hsdkf\" (UID: \"c45919b3-3bbb-40a2-a729-5c45cd981555\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hsdkf" Dec 10 15:24:58 crc kubenswrapper[4755]: I1210 15:24:58.125722 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" 
(UniqueName: \"kubernetes.io/configmap/c45919b3-3bbb-40a2-a729-5c45cd981555-service-ca\") pod \"cluster-version-operator-5c965bbfc6-hsdkf\" (UID: \"c45919b3-3bbb-40a2-a729-5c45cd981555\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hsdkf" Dec 10 15:24:58 crc kubenswrapper[4755]: I1210 15:24:58.135869 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c45919b3-3bbb-40a2-a729-5c45cd981555-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-hsdkf\" (UID: \"c45919b3-3bbb-40a2-a729-5c45cd981555\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hsdkf" Dec 10 15:24:58 crc kubenswrapper[4755]: I1210 15:24:58.142594 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c45919b3-3bbb-40a2-a729-5c45cd981555-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-hsdkf\" (UID: \"c45919b3-3bbb-40a2-a729-5c45cd981555\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hsdkf" Dec 10 15:24:58 crc kubenswrapper[4755]: I1210 15:24:58.180739 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-zl2tx" podStartSLOduration=75.180720446 podStartE2EDuration="1m15.180720446s" podCreationTimestamp="2025-12-10 15:23:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:24:58.180156501 +0000 UTC m=+94.781040133" watchObservedRunningTime="2025-12-10 15:24:58.180720446 +0000 UTC m=+94.781604078" Dec 10 15:24:58 crc kubenswrapper[4755]: I1210 15:24:58.194159 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podStartSLOduration=75.194145094 podStartE2EDuration="1m15.194145094s" podCreationTimestamp="2025-12-10 15:23:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:24:58.19207213 +0000 UTC m=+94.792955762" watchObservedRunningTime="2025-12-10 15:24:58.194145094 +0000 UTC m=+94.795028726" Dec 10 15:24:58 crc kubenswrapper[4755]: I1210 15:24:58.206097 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=74.206083623 podStartE2EDuration="1m14.206083623s" podCreationTimestamp="2025-12-10 15:23:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:24:58.205691073 +0000 UTC m=+94.806574725" watchObservedRunningTime="2025-12-10 15:24:58.206083623 +0000 UTC m=+94.806967265" Dec 10 15:24:58 crc kubenswrapper[4755]: I1210 15:24:58.232242 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-n8c6s" podStartSLOduration=75.232224641 podStartE2EDuration="1m15.232224641s" podCreationTimestamp="2025-12-10 15:23:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:24:58.231158344 +0000 UTC m=+94.832041996" watchObservedRunningTime="2025-12-10 15:24:58.232224641 +0000 UTC m=+94.833108273" Dec 10 15:24:58 crc kubenswrapper[4755]: I1210 15:24:58.253442 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hsdkf" Dec 10 15:24:58 crc kubenswrapper[4755]: I1210 15:24:58.263123 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-qnmst" podStartSLOduration=75.263105562 podStartE2EDuration="1m15.263105562s" podCreationTimestamp="2025-12-10 15:23:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:24:58.244905121 +0000 UTC m=+94.845788753" watchObservedRunningTime="2025-12-10 15:24:58.263105562 +0000 UTC m=+94.863989184" Dec 10 15:24:58 crc kubenswrapper[4755]: I1210 15:24:58.316099 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hsdkf" event={"ID":"c45919b3-3bbb-40a2-a729-5c45cd981555","Type":"ContainerStarted","Data":"764b8ff21e11acf0a3f20e5f5118570e346154a2f16adafaea0ebba752c10a63"} Dec 10 15:24:59 crc kubenswrapper[4755]: I1210 15:24:59.319805 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hsdkf" event={"ID":"c45919b3-3bbb-40a2-a729-5c45cd981555","Type":"ContainerStarted","Data":"c6fc04c2589d06aba5937f7032588e6baedcd49cf48721cf5b5f9b62b3cfe3da"} Dec 10 15:24:59 crc kubenswrapper[4755]: I1210 15:24:59.336024 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hsdkf" podStartSLOduration=76.336004224 podStartE2EDuration="1m16.336004224s" podCreationTimestamp="2025-12-10 15:23:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:24:59.335400068 +0000 UTC m=+95.936283720" watchObservedRunningTime="2025-12-10 15:24:59.336004224 +0000 UTC m=+95.936887866" Dec 10 15:24:59 crc kubenswrapper[4755]: I1210 15:24:59.336177 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-82wnw" podStartSLOduration=75.336172559 podStartE2EDuration="1m15.336172559s" podCreationTimestamp="2025-12-10 15:23:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:24:58.2637635 +0000 UTC m=+94.864647132" watchObservedRunningTime="2025-12-10 15:24:59.336172559 +0000 UTC m=+95.937056201" Dec 10 15:24:59 crc kubenswrapper[4755]: I1210 15:24:59.757334 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5ctz" Dec 10 15:24:59 crc kubenswrapper[4755]: I1210 15:24:59.757430 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 15:24:59 crc kubenswrapper[4755]: I1210 15:24:59.757334 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 15:24:59 crc kubenswrapper[4755]: I1210 15:24:59.757530 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 15:24:59 crc kubenswrapper[4755]: E1210 15:24:59.757520 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5ctz" podUID="17673130-8212-4f8f-8859-92774f0ee202" Dec 10 15:24:59 crc kubenswrapper[4755]: E1210 15:24:59.757654 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 15:24:59 crc kubenswrapper[4755]: E1210 15:24:59.757711 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 15:24:59 crc kubenswrapper[4755]: E1210 15:24:59.757900 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 15:25:00 crc kubenswrapper[4755]: I1210 15:25:00.758172 4755 scope.go:117] "RemoveContainer" containerID="0386a60f9d2d9c0cec943720b300e0cd71348b81b74234f19f1c51d34142b089" Dec 10 15:25:00 crc kubenswrapper[4755]: E1210 15:25:00.758391 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-6lfvk_openshift-ovn-kubernetes(4b1da51a-99c9-4f8e-920d-ce0973af6370)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" podUID="4b1da51a-99c9-4f8e-920d-ce0973af6370" Dec 10 15:25:01 crc kubenswrapper[4755]: I1210 15:25:01.756612 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5ctz" Dec 10 15:25:01 crc kubenswrapper[4755]: E1210 15:25:01.756770 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5ctz" podUID="17673130-8212-4f8f-8859-92774f0ee202" Dec 10 15:25:01 crc kubenswrapper[4755]: I1210 15:25:01.756791 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 15:25:01 crc kubenswrapper[4755]: I1210 15:25:01.756859 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 15:25:01 crc kubenswrapper[4755]: E1210 15:25:01.756962 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 15:25:01 crc kubenswrapper[4755]: E1210 15:25:01.757131 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 15:25:01 crc kubenswrapper[4755]: I1210 15:25:01.757207 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 15:25:01 crc kubenswrapper[4755]: E1210 15:25:01.757328 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 15:25:02 crc kubenswrapper[4755]: I1210 15:25:02.058628 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/17673130-8212-4f8f-8859-92774f0ee202-metrics-certs\") pod \"network-metrics-daemon-q5ctz\" (UID: \"17673130-8212-4f8f-8859-92774f0ee202\") " pod="openshift-multus/network-metrics-daemon-q5ctz" Dec 10 15:25:02 crc kubenswrapper[4755]: E1210 15:25:02.058837 4755 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 10 15:25:02 crc kubenswrapper[4755]: E1210 15:25:02.058956 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/17673130-8212-4f8f-8859-92774f0ee202-metrics-certs podName:17673130-8212-4f8f-8859-92774f0ee202 nodeName:}" failed. No retries permitted until 2025-12-10 15:26:06.058930217 +0000 UTC m=+162.659813889 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/17673130-8212-4f8f-8859-92774f0ee202-metrics-certs") pod "network-metrics-daemon-q5ctz" (UID: "17673130-8212-4f8f-8859-92774f0ee202") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 10 15:25:03 crc kubenswrapper[4755]: I1210 15:25:03.757121 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 15:25:03 crc kubenswrapper[4755]: I1210 15:25:03.757137 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 15:25:03 crc kubenswrapper[4755]: I1210 15:25:03.757857 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5ctz" Dec 10 15:25:03 crc kubenswrapper[4755]: I1210 15:25:03.758146 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 15:25:03 crc kubenswrapper[4755]: E1210 15:25:03.758142 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 15:25:03 crc kubenswrapper[4755]: E1210 15:25:03.758224 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 15:25:03 crc kubenswrapper[4755]: E1210 15:25:03.758278 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5ctz" podUID="17673130-8212-4f8f-8859-92774f0ee202" Dec 10 15:25:03 crc kubenswrapper[4755]: E1210 15:25:03.758399 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 15:25:05 crc kubenswrapper[4755]: I1210 15:25:05.757701 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5ctz" Dec 10 15:25:05 crc kubenswrapper[4755]: I1210 15:25:05.758524 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 15:25:05 crc kubenswrapper[4755]: I1210 15:25:05.758630 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 15:25:05 crc kubenswrapper[4755]: E1210 15:25:05.758621 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-q5ctz" podUID="17673130-8212-4f8f-8859-92774f0ee202" Dec 10 15:25:05 crc kubenswrapper[4755]: I1210 15:25:05.758662 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 15:25:05 crc kubenswrapper[4755]: E1210 15:25:05.758739 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 15:25:05 crc kubenswrapper[4755]: E1210 15:25:05.758801 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 15:25:05 crc kubenswrapper[4755]: E1210 15:25:05.758908 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 15:25:07 crc kubenswrapper[4755]: I1210 15:25:07.756562 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 15:25:07 crc kubenswrapper[4755]: I1210 15:25:07.756611 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 15:25:07 crc kubenswrapper[4755]: E1210 15:25:07.756840 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 15:25:07 crc kubenswrapper[4755]: E1210 15:25:07.756927 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 15:25:07 crc kubenswrapper[4755]: I1210 15:25:07.757439 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 15:25:07 crc kubenswrapper[4755]: E1210 15:25:07.757682 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 15:25:07 crc kubenswrapper[4755]: I1210 15:25:07.757996 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5ctz" Dec 10 15:25:07 crc kubenswrapper[4755]: E1210 15:25:07.758179 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5ctz" podUID="17673130-8212-4f8f-8859-92774f0ee202" Dec 10 15:25:09 crc kubenswrapper[4755]: I1210 15:25:09.756717 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 15:25:09 crc kubenswrapper[4755]: I1210 15:25:09.756774 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 15:25:09 crc kubenswrapper[4755]: I1210 15:25:09.756822 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5ctz" Dec 10 15:25:09 crc kubenswrapper[4755]: E1210 15:25:09.756905 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 15:25:09 crc kubenswrapper[4755]: I1210 15:25:09.756934 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 15:25:09 crc kubenswrapper[4755]: E1210 15:25:09.757048 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5ctz" podUID="17673130-8212-4f8f-8859-92774f0ee202" Dec 10 15:25:09 crc kubenswrapper[4755]: E1210 15:25:09.757154 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 15:25:09 crc kubenswrapper[4755]: E1210 15:25:09.757217 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 15:25:11 crc kubenswrapper[4755]: I1210 15:25:11.756613 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 15:25:11 crc kubenswrapper[4755]: I1210 15:25:11.756702 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 15:25:11 crc kubenswrapper[4755]: I1210 15:25:11.756726 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 15:25:11 crc kubenswrapper[4755]: E1210 15:25:11.756805 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 15:25:11 crc kubenswrapper[4755]: E1210 15:25:11.756902 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 15:25:11 crc kubenswrapper[4755]: E1210 15:25:11.757643 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 15:25:11 crc kubenswrapper[4755]: I1210 15:25:11.757686 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5ctz" Dec 10 15:25:11 crc kubenswrapper[4755]: E1210 15:25:11.757781 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5ctz" podUID="17673130-8212-4f8f-8859-92774f0ee202" Dec 10 15:25:13 crc kubenswrapper[4755]: I1210 15:25:13.756705 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 15:25:13 crc kubenswrapper[4755]: I1210 15:25:13.756782 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 15:25:13 crc kubenswrapper[4755]: I1210 15:25:13.756829 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 15:25:13 crc kubenswrapper[4755]: E1210 15:25:13.756989 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 15:25:13 crc kubenswrapper[4755]: I1210 15:25:13.757007 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5ctz" Dec 10 15:25:13 crc kubenswrapper[4755]: E1210 15:25:13.758463 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 15:25:13 crc kubenswrapper[4755]: E1210 15:25:13.758650 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5ctz" podUID="17673130-8212-4f8f-8859-92774f0ee202" Dec 10 15:25:13 crc kubenswrapper[4755]: E1210 15:25:13.758687 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 15:25:15 crc kubenswrapper[4755]: I1210 15:25:15.757279 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 15:25:15 crc kubenswrapper[4755]: I1210 15:25:15.757369 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 15:25:15 crc kubenswrapper[4755]: E1210 15:25:15.757452 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 15:25:15 crc kubenswrapper[4755]: I1210 15:25:15.757512 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 15:25:15 crc kubenswrapper[4755]: E1210 15:25:15.757693 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 15:25:15 crc kubenswrapper[4755]: I1210 15:25:15.757729 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5ctz" Dec 10 15:25:15 crc kubenswrapper[4755]: E1210 15:25:15.757823 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 15:25:15 crc kubenswrapper[4755]: E1210 15:25:15.758291 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-q5ctz" podUID="17673130-8212-4f8f-8859-92774f0ee202" Dec 10 15:25:15 crc kubenswrapper[4755]: I1210 15:25:15.758721 4755 scope.go:117] "RemoveContainer" containerID="0386a60f9d2d9c0cec943720b300e0cd71348b81b74234f19f1c51d34142b089" Dec 10 15:25:15 crc kubenswrapper[4755]: E1210 15:25:15.758886 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-6lfvk_openshift-ovn-kubernetes(4b1da51a-99c9-4f8e-920d-ce0973af6370)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" podUID="4b1da51a-99c9-4f8e-920d-ce0973af6370" Dec 10 15:25:17 crc kubenswrapper[4755]: I1210 15:25:17.375797 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zl2tx_796da6d5-6ccd-4786-a03e-9a8e47a55031/kube-multus/1.log" Dec 10 15:25:17 crc kubenswrapper[4755]: I1210 15:25:17.376624 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zl2tx_796da6d5-6ccd-4786-a03e-9a8e47a55031/kube-multus/0.log" Dec 10 15:25:17 crc kubenswrapper[4755]: I1210 15:25:17.376653 4755 generic.go:334] "Generic (PLEG): container finished" podID="796da6d5-6ccd-4786-a03e-9a8e47a55031" containerID="2e0f974f9ba614dcaef08cf7168b77eeee007dfe65cc4e32df9b8e45005ff4ed" exitCode=1 Dec 10 15:25:17 crc kubenswrapper[4755]: I1210 15:25:17.376694 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zl2tx" event={"ID":"796da6d5-6ccd-4786-a03e-9a8e47a55031","Type":"ContainerDied","Data":"2e0f974f9ba614dcaef08cf7168b77eeee007dfe65cc4e32df9b8e45005ff4ed"} Dec 10 15:25:17 crc kubenswrapper[4755]: I1210 15:25:17.376788 4755 scope.go:117] "RemoveContainer" containerID="de63a123c46563bd8cd07e669d192bc8b019a889b9bdb7af1b988872c8f1fc48" Dec 10 15:25:17 crc kubenswrapper[4755]: I1210 15:25:17.377512 4755 scope.go:117] "RemoveContainer" containerID="2e0f974f9ba614dcaef08cf7168b77eeee007dfe65cc4e32df9b8e45005ff4ed" Dec 10 15:25:17 crc kubenswrapper[4755]: E1210 15:25:17.377787 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-zl2tx_openshift-multus(796da6d5-6ccd-4786-a03e-9a8e47a55031)\"" pod="openshift-multus/multus-zl2tx" podUID="796da6d5-6ccd-4786-a03e-9a8e47a55031" Dec 10 15:25:17 crc kubenswrapper[4755]: I1210 15:25:17.757462 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 15:25:17 crc kubenswrapper[4755]: I1210 15:25:17.757546 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 15:25:17 crc kubenswrapper[4755]: I1210 15:25:17.757599 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 15:25:17 crc kubenswrapper[4755]: E1210 15:25:17.757739 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 15:25:17 crc kubenswrapper[4755]: I1210 15:25:17.757777 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5ctz" Dec 10 15:25:17 crc kubenswrapper[4755]: E1210 15:25:17.757882 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 15:25:17 crc kubenswrapper[4755]: E1210 15:25:17.757995 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5ctz" podUID="17673130-8212-4f8f-8859-92774f0ee202" Dec 10 15:25:17 crc kubenswrapper[4755]: E1210 15:25:17.758049 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 15:25:18 crc kubenswrapper[4755]: I1210 15:25:18.382659 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zl2tx_796da6d5-6ccd-4786-a03e-9a8e47a55031/kube-multus/1.log" Dec 10 15:25:19 crc kubenswrapper[4755]: I1210 15:25:19.766222 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 15:25:19 crc kubenswrapper[4755]: I1210 15:25:19.766311 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 15:25:19 crc kubenswrapper[4755]: I1210 15:25:19.766340 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5ctz" Dec 10 15:25:19 crc kubenswrapper[4755]: E1210 15:25:19.766570 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 15:25:19 crc kubenswrapper[4755]: I1210 15:25:19.766593 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 15:25:19 crc kubenswrapper[4755]: E1210 15:25:19.766356 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 15:25:19 crc kubenswrapper[4755]: E1210 15:25:19.766654 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5ctz" podUID="17673130-8212-4f8f-8859-92774f0ee202" Dec 10 15:25:19 crc kubenswrapper[4755]: E1210 15:25:19.766714 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 15:25:21 crc kubenswrapper[4755]: I1210 15:25:21.757692 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5ctz" Dec 10 15:25:21 crc kubenswrapper[4755]: I1210 15:25:21.757720 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 15:25:21 crc kubenswrapper[4755]: I1210 15:25:21.757720 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 15:25:21 crc kubenswrapper[4755]: I1210 15:25:21.757871 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 15:25:21 crc kubenswrapper[4755]: E1210 15:25:21.757878 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5ctz" podUID="17673130-8212-4f8f-8859-92774f0ee202" Dec 10 15:25:21 crc kubenswrapper[4755]: E1210 15:25:21.758045 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 15:25:21 crc kubenswrapper[4755]: E1210 15:25:21.758201 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 15:25:21 crc kubenswrapper[4755]: E1210 15:25:21.758288 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 15:25:23 crc kubenswrapper[4755]: I1210 15:25:23.757695 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 15:25:23 crc kubenswrapper[4755]: I1210 15:25:23.757713 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 15:25:23 crc kubenswrapper[4755]: I1210 15:25:23.757785 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 15:25:23 crc kubenswrapper[4755]: I1210 15:25:23.757867 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5ctz" Dec 10 15:25:23 crc kubenswrapper[4755]: E1210 15:25:23.758784 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 15:25:23 crc kubenswrapper[4755]: E1210 15:25:23.759080 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 15:25:23 crc kubenswrapper[4755]: E1210 15:25:23.759177 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 15:25:23 crc kubenswrapper[4755]: E1210 15:25:23.759239 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5ctz" podUID="17673130-8212-4f8f-8859-92774f0ee202" Dec 10 15:25:23 crc kubenswrapper[4755]: E1210 15:25:23.791073 4755 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Dec 10 15:25:24 crc kubenswrapper[4755]: E1210 15:25:24.971541 4755 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 10 15:25:25 crc kubenswrapper[4755]: I1210 15:25:25.757137 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5ctz" Dec 10 15:25:25 crc kubenswrapper[4755]: E1210 15:25:25.757279 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5ctz" podUID="17673130-8212-4f8f-8859-92774f0ee202" Dec 10 15:25:25 crc kubenswrapper[4755]: I1210 15:25:25.757579 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 15:25:25 crc kubenswrapper[4755]: I1210 15:25:25.757618 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 15:25:25 crc kubenswrapper[4755]: E1210 15:25:25.757658 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 15:25:25 crc kubenswrapper[4755]: E1210 15:25:25.757727 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 15:25:25 crc kubenswrapper[4755]: I1210 15:25:25.757764 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 15:25:25 crc kubenswrapper[4755]: E1210 15:25:25.757817 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 15:25:26 crc kubenswrapper[4755]: I1210 15:25:26.757981 4755 scope.go:117] "RemoveContainer" containerID="0386a60f9d2d9c0cec943720b300e0cd71348b81b74234f19f1c51d34142b089" Dec 10 15:25:27 crc kubenswrapper[4755]: I1210 15:25:27.413201 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6lfvk_4b1da51a-99c9-4f8e-920d-ce0973af6370/ovnkube-controller/3.log" Dec 10 15:25:27 crc kubenswrapper[4755]: I1210 15:25:27.416591 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" event={"ID":"4b1da51a-99c9-4f8e-920d-ce0973af6370","Type":"ContainerStarted","Data":"decd94009593b6ceb6559ae2b8598a9f4fdd922a3c94226d5086a7a25cc40280"} Dec 10 15:25:27 crc kubenswrapper[4755]: I1210 15:25:27.417105 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" Dec 10 15:25:27 crc kubenswrapper[4755]: I1210 15:25:27.449804 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" podStartSLOduration=104.449784805 podStartE2EDuration="1m44.449784805s" podCreationTimestamp="2025-12-10 15:23:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:25:27.449071296 +0000 UTC m=+124.049954968" watchObservedRunningTime="2025-12-10 15:25:27.449784805 +0000 UTC m=+124.050668437" Dec 10 15:25:27 crc kubenswrapper[4755]: I1210 15:25:27.597589 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-q5ctz"] Dec 10 15:25:27 crc kubenswrapper[4755]: I1210 15:25:27.597837 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5ctz" Dec 10 15:25:27 crc kubenswrapper[4755]: E1210 15:25:27.598018 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5ctz" podUID="17673130-8212-4f8f-8859-92774f0ee202" Dec 10 15:25:27 crc kubenswrapper[4755]: I1210 15:25:27.757284 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 15:25:27 crc kubenswrapper[4755]: I1210 15:25:27.757331 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 15:25:27 crc kubenswrapper[4755]: E1210 15:25:27.757420 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 15:25:27 crc kubenswrapper[4755]: I1210 15:25:27.757433 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 15:25:27 crc kubenswrapper[4755]: E1210 15:25:27.757682 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 15:25:27 crc kubenswrapper[4755]: E1210 15:25:27.757742 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 15:25:29 crc kubenswrapper[4755]: I1210 15:25:29.757589 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5ctz" Dec 10 15:25:29 crc kubenswrapper[4755]: I1210 15:25:29.757645 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 15:25:29 crc kubenswrapper[4755]: E1210 15:25:29.757761 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5ctz" podUID="17673130-8212-4f8f-8859-92774f0ee202" Dec 10 15:25:29 crc kubenswrapper[4755]: I1210 15:25:29.757812 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 15:25:29 crc kubenswrapper[4755]: I1210 15:25:29.757817 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 15:25:29 crc kubenswrapper[4755]: E1210 15:25:29.758126 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 15:25:29 crc kubenswrapper[4755]: E1210 15:25:29.758222 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 15:25:29 crc kubenswrapper[4755]: I1210 15:25:29.758234 4755 scope.go:117] "RemoveContainer" containerID="2e0f974f9ba614dcaef08cf7168b77eeee007dfe65cc4e32df9b8e45005ff4ed" Dec 10 15:25:29 crc kubenswrapper[4755]: E1210 15:25:29.758296 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 15:25:29 crc kubenswrapper[4755]: E1210 15:25:29.973678 4755 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 10 15:25:30 crc kubenswrapper[4755]: I1210 15:25:30.428378 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zl2tx_796da6d5-6ccd-4786-a03e-9a8e47a55031/kube-multus/1.log" Dec 10 15:25:30 crc kubenswrapper[4755]: I1210 15:25:30.428433 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zl2tx" event={"ID":"796da6d5-6ccd-4786-a03e-9a8e47a55031","Type":"ContainerStarted","Data":"ddcd6ca2f982a307a418e96d428250bc2a8ea077d211a8856f484cd779d4fa36"} Dec 10 15:25:31 crc kubenswrapper[4755]: I1210 15:25:31.757529 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5ctz" Dec 10 15:25:31 crc kubenswrapper[4755]: I1210 15:25:31.757544 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 15:25:31 crc kubenswrapper[4755]: I1210 15:25:31.757592 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 15:25:31 crc kubenswrapper[4755]: E1210 15:25:31.757663 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5ctz" podUID="17673130-8212-4f8f-8859-92774f0ee202" Dec 10 15:25:31 crc kubenswrapper[4755]: I1210 15:25:31.757780 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 15:25:31 crc kubenswrapper[4755]: E1210 15:25:31.757853 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 15:25:31 crc kubenswrapper[4755]: E1210 15:25:31.757970 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 15:25:31 crc kubenswrapper[4755]: E1210 15:25:31.758154 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 15:25:33 crc kubenswrapper[4755]: I1210 15:25:33.756707 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 15:25:33 crc kubenswrapper[4755]: I1210 15:25:33.756726 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5ctz" Dec 10 15:25:33 crc kubenswrapper[4755]: I1210 15:25:33.756757 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 15:25:33 crc kubenswrapper[4755]: E1210 15:25:33.757819 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 15:25:33 crc kubenswrapper[4755]: I1210 15:25:33.757905 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 15:25:33 crc kubenswrapper[4755]: E1210 15:25:33.757982 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-q5ctz" podUID="17673130-8212-4f8f-8859-92774f0ee202" Dec 10 15:25:33 crc kubenswrapper[4755]: E1210 15:25:33.758122 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 15:25:33 crc kubenswrapper[4755]: E1210 15:25:33.758230 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 15:25:35 crc kubenswrapper[4755]: I1210 15:25:35.757162 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 15:25:35 crc kubenswrapper[4755]: I1210 15:25:35.757251 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 15:25:35 crc kubenswrapper[4755]: I1210 15:25:35.757336 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5ctz" Dec 10 15:25:35 crc kubenswrapper[4755]: I1210 15:25:35.757166 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 15:25:35 crc kubenswrapper[4755]: I1210 15:25:35.759193 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 10 15:25:35 crc kubenswrapper[4755]: I1210 15:25:35.759229 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 10 15:25:35 crc kubenswrapper[4755]: I1210 15:25:35.759230 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 10 15:25:35 crc kubenswrapper[4755]: I1210 15:25:35.759599 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 10 15:25:35 crc kubenswrapper[4755]: I1210 15:25:35.760957 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 10 15:25:35 crc kubenswrapper[4755]: I1210 15:25:35.761204 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.801028 4755 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.845261 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-6khhb"] Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.845943 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6khhb" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.846025 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-zcwtl"] Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.846707 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zcwtl" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.851263 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-f47gb"] Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.851854 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-f47gb" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.856999 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.858800 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gvpnh"] Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.858995 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.859292 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.859397 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gvpnh" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.860303 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.860587 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.860638 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-sxtxj"] Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.860791 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.861016 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.861127 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-sxtxj" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.863019 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-n66x6"] Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.863453 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-n66x6" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.863769 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-68s5b"] Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.864154 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-68s5b" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.864212 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.864324 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.865101 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.865266 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.865584 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.865734 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.868027 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.868628 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-26mg6"] Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.869496 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-n6qb5"] Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.869868 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-n6qb5" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.869868 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-xx85g"] Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.870960 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-xx85g" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.879434 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-26mg6" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.881769 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.890616 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.892906 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.897278 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.905287 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.905536 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.906019 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e2f7713d-7734-477f-81df-093aaa83837f-auth-proxy-config\") pod \"machine-approver-56656f9798-zcwtl\" (UID: \"e2f7713d-7734-477f-81df-093aaa83837f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zcwtl" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.906052 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctmxq\" (UniqueName: \"kubernetes.io/projected/e2f7713d-7734-477f-81df-093aaa83837f-kube-api-access-ctmxq\") pod \"machine-approver-56656f9798-zcwtl\" (UID: \"e2f7713d-7734-477f-81df-093aaa83837f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zcwtl" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.906074 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1645de9b-f227-4d9d-885f-ffd58e5bef69-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-f47gb\" (UID: \"1645de9b-f227-4d9d-885f-ffd58e5bef69\") " pod="openshift-authentication/oauth-openshift-558db77b4-f47gb" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.906095 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txplh\" (UniqueName: \"kubernetes.io/projected/9336d238-f0fb-430a-acfe-4aa5c888ebc8-kube-api-access-txplh\") pod \"cluster-samples-operator-665b6dd947-gvpnh\" (UID: \"9336d238-f0fb-430a-acfe-4aa5c888ebc8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gvpnh" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.906114 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1645de9b-f227-4d9d-885f-ffd58e5bef69-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-f47gb\" (UID: \"1645de9b-f227-4d9d-885f-ffd58e5bef69\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-f47gb" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.906141 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2f7713d-7734-477f-81df-093aaa83837f-config\") pod \"machine-approver-56656f9798-zcwtl\" (UID: \"e2f7713d-7734-477f-81df-093aaa83837f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zcwtl" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.906160 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7d2e3e9e-69f8-46f4-b825-ee369ab23de8-audit-policies\") pod \"apiserver-7bbb656c7d-6khhb\" (UID: \"7d2e3e9e-69f8-46f4-b825-ee369ab23de8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6khhb" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.906180 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72d8a3d3-6b10-4a2d-bb42-b96a0d6135a5-config\") pod \"console-operator-58897d9998-sxtxj\" (UID: \"72d8a3d3-6b10-4a2d-bb42-b96a0d6135a5\") " pod="openshift-console-operator/console-operator-58897d9998-sxtxj" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.906195 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/9336d238-f0fb-430a-acfe-4aa5c888ebc8-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-gvpnh\" (UID: \"9336d238-f0fb-430a-acfe-4aa5c888ebc8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gvpnh" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.906216 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1645de9b-f227-4d9d-885f-ffd58e5bef69-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-f47gb\" (UID: \"1645de9b-f227-4d9d-885f-ffd58e5bef69\") " pod="openshift-authentication/oauth-openshift-558db77b4-f47gb" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.906233 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1645de9b-f227-4d9d-885f-ffd58e5bef69-audit-dir\") pod \"oauth-openshift-558db77b4-f47gb\" (UID: \"1645de9b-f227-4d9d-885f-ffd58e5bef69\") " pod="openshift-authentication/oauth-openshift-558db77b4-f47gb" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.906250 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1645de9b-f227-4d9d-885f-ffd58e5bef69-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-f47gb\" (UID: \"1645de9b-f227-4d9d-885f-ffd58e5bef69\") " pod="openshift-authentication/oauth-openshift-558db77b4-f47gb" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.906267 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1645de9b-f227-4d9d-885f-ffd58e5bef69-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-f47gb\" (UID: \"1645de9b-f227-4d9d-885f-ffd58e5bef69\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-f47gb" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.906283 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1645de9b-f227-4d9d-885f-ffd58e5bef69-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-f47gb\" (UID: \"1645de9b-f227-4d9d-885f-ffd58e5bef69\") " pod="openshift-authentication/oauth-openshift-558db77b4-f47gb" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.906300 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1645de9b-f227-4d9d-885f-ffd58e5bef69-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-f47gb\" (UID: \"1645de9b-f227-4d9d-885f-ffd58e5bef69\") " pod="openshift-authentication/oauth-openshift-558db77b4-f47gb" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.906332 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72d8a3d3-6b10-4a2d-bb42-b96a0d6135a5-serving-cert\") pod \"console-operator-58897d9998-sxtxj\" (UID: \"72d8a3d3-6b10-4a2d-bb42-b96a0d6135a5\") " pod="openshift-console-operator/console-operator-58897d9998-sxtxj" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.906351 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7d2e3e9e-69f8-46f4-b825-ee369ab23de8-etcd-client\") pod \"apiserver-7bbb656c7d-6khhb\" (UID: \"7d2e3e9e-69f8-46f4-b825-ee369ab23de8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6khhb" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.906369 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7d2e3e9e-69f8-46f4-b825-ee369ab23de8-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-6khhb\" (UID: \"7d2e3e9e-69f8-46f4-b825-ee369ab23de8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6khhb" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.906383 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1645de9b-f227-4d9d-885f-ffd58e5bef69-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-f47gb\" (UID: \"1645de9b-f227-4d9d-885f-ffd58e5bef69\") " pod="openshift-authentication/oauth-openshift-558db77b4-f47gb" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.906396 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1645de9b-f227-4d9d-885f-ffd58e5bef69-audit-policies\") pod \"oauth-openshift-558db77b4-f47gb\" (UID: \"1645de9b-f227-4d9d-885f-ffd58e5bef69\") " pod="openshift-authentication/oauth-openshift-558db77b4-f47gb" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.906413 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7d2e3e9e-69f8-46f4-b825-ee369ab23de8-audit-dir\") pod \"apiserver-7bbb656c7d-6khhb\" (UID: \"7d2e3e9e-69f8-46f4-b825-ee369ab23de8\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6khhb" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.906437 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/72d8a3d3-6b10-4a2d-bb42-b96a0d6135a5-trusted-ca\") pod \"console-operator-58897d9998-sxtxj\" (UID: \"72d8a3d3-6b10-4a2d-bb42-b96a0d6135a5\") " pod="openshift-console-operator/console-operator-58897d9998-sxtxj" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.906752 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.906758 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmscp\" (UniqueName: \"kubernetes.io/projected/72d8a3d3-6b10-4a2d-bb42-b96a0d6135a5-kube-api-access-bmscp\") pod \"console-operator-58897d9998-sxtxj\" (UID: \"72d8a3d3-6b10-4a2d-bb42-b96a0d6135a5\") " pod="openshift-console-operator/console-operator-58897d9998-sxtxj" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.906926 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.906925 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d2e3e9e-69f8-46f4-b825-ee369ab23de8-serving-cert\") pod \"apiserver-7bbb656c7d-6khhb\" (UID: \"7d2e3e9e-69f8-46f4-b825-ee369ab23de8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6khhb" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.906998 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7d2e3e9e-69f8-46f4-b825-ee369ab23de8-encryption-config\") pod \"apiserver-7bbb656c7d-6khhb\" (UID: \"7d2e3e9e-69f8-46f4-b825-ee369ab23de8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6khhb" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.907022 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65szh\" (UniqueName: \"kubernetes.io/projected/7d2e3e9e-69f8-46f4-b825-ee369ab23de8-kube-api-access-65szh\") pod \"apiserver-7bbb656c7d-6khhb\" (UID: \"7d2e3e9e-69f8-46f4-b825-ee369ab23de8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6khhb" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.907042 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e2f7713d-7734-477f-81df-093aaa83837f-machine-approver-tls\") pod \"machine-approver-56656f9798-zcwtl\" (UID: \"e2f7713d-7734-477f-81df-093aaa83837f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zcwtl" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.907061 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7d2e3e9e-69f8-46f4-b825-ee369ab23de8-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-6khhb\" (UID: \"7d2e3e9e-69f8-46f4-b825-ee369ab23de8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6khhb" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.907081 4755 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1645de9b-f227-4d9d-885f-ffd58e5bef69-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-f47gb\" (UID: \"1645de9b-f227-4d9d-885f-ffd58e5bef69\") " pod="openshift-authentication/oauth-openshift-558db77b4-f47gb" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.907126 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1645de9b-f227-4d9d-885f-ffd58e5bef69-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-f47gb\" (UID: \"1645de9b-f227-4d9d-885f-ffd58e5bef69\") " pod="openshift-authentication/oauth-openshift-558db77b4-f47gb" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.907154 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1645de9b-f227-4d9d-885f-ffd58e5bef69-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-f47gb\" (UID: \"1645de9b-f227-4d9d-885f-ffd58e5bef69\") " pod="openshift-authentication/oauth-openshift-558db77b4-f47gb" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.907213 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhg7z\" (UniqueName: \"kubernetes.io/projected/1645de9b-f227-4d9d-885f-ffd58e5bef69-kube-api-access-bhg7z\") pod \"oauth-openshift-558db77b4-f47gb\" (UID: \"1645de9b-f227-4d9d-885f-ffd58e5bef69\") " pod="openshift-authentication/oauth-openshift-558db77b4-f47gb" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.907422 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.907568 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.907800 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.907976 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.908143 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.908260 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.908390 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.908550 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.908640 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.908670 4755 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-console"/"console-oauth-config" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.908750 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.908776 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.908816 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.908866 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.908969 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.908994 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.909173 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.909281 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.909334 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.909374 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.909485 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.909535 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.910753 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.911213 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.911317 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.911675 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-k4nkz"] Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.912350 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4bgvx"] Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.913243 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-k4nkz" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.913743 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4bgvx" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.913853 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.914148 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.914220 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.914481 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.914524 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.914637 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.914650 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.914667 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.914728 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.914774 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.914786 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.914896 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.917146 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-m9ntd"] Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.917806 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-m9ntd" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.918799 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-gbvgh"] Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.919503 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-gbvgh" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.920064 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.920319 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.920494 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.922364 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-kc8qq"] Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.922905 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-kc8qq" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.924897 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.925228 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.928028 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jkj94"] Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.928620 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.928898 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8vttb"] Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.929314 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8vttb" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.929606 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jkj94" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.930506 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-2ldzj"] Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.931573 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2ldzj" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.933547 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.933785 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.933837 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.933560 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.940545 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.974865 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.975609 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.975847 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qgjc4"] Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.976215 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.976629 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dhh5x"] Dec 10 15:25:38 crc kubenswrapper[4755]: I1210 15:25:38.983329 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.000892 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.001448 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.002359 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.002810 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.002896 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.002837 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.003148 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.003285 4755 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"etcd-operator-config" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.003394 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.003581 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dhh5x" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.003744 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qgjc4" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.003580 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.003600 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.004169 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.003607 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.004408 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.003636 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.004658 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.003719 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.003730 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.003844 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.004219 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.004323 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.005013 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7svz5"] Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.005807 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7svz5" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.005934 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-sw7vc"] Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.006407 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-sw7vc" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.006572 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.006729 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.006784 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.007670 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/51e9f48f-0b01-4f2b-8d00-77fd91f6c3b0-images\") pod \"machine-config-operator-74547568cd-2ldzj\" (UID: \"51e9f48f-0b01-4f2b-8d00-77fd91f6c3b0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2ldzj" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.007692 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/51e9f48f-0b01-4f2b-8d00-77fd91f6c3b0-auth-proxy-config\") pod \"machine-config-operator-74547568cd-2ldzj\" (UID: \"51e9f48f-0b01-4f2b-8d00-77fd91f6c3b0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2ldzj" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.007708 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8c1052c9-fa99-4f24-8fff-923dc489c08d-client-ca\") pod \"route-controller-manager-6576b87f9c-4bgvx\" (UID: \"8c1052c9-fa99-4f24-8fff-923dc489c08d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4bgvx" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.007725 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e846eda6-431b-4fdf-98dc-80e3fc6b122f-serving-cert\") pod \"apiserver-76f77b778f-xx85g\" (UID: \"e846eda6-431b-4fdf-98dc-80e3fc6b122f\") " pod="openshift-apiserver/apiserver-76f77b778f-xx85g" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.007749 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e2f7713d-7734-477f-81df-093aaa83837f-auth-proxy-config\") pod \"machine-approver-56656f9798-zcwtl\" (UID: \"e2f7713d-7734-477f-81df-093aaa83837f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zcwtl" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.007765 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctmxq\" (UniqueName: \"kubernetes.io/projected/e2f7713d-7734-477f-81df-093aaa83837f-kube-api-access-ctmxq\") pod 
\"machine-approver-56656f9798-zcwtl\" (UID: \"e2f7713d-7734-477f-81df-093aaa83837f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zcwtl" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.007785 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxncl\" (UniqueName: \"kubernetes.io/projected/0f76f7be-c8e4-4943-81f2-8e416e747aec-kube-api-access-cxncl\") pod \"etcd-operator-b45778765-kc8qq\" (UID: \"0f76f7be-c8e4-4943-81f2-8e416e747aec\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kc8qq" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.007802 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gq8s\" (UniqueName: \"kubernetes.io/projected/ed5ac3ea-71b9-41f5-b629-ec016b6ef3c7-kube-api-access-6gq8s\") pod \"openshift-apiserver-operator-796bbdcf4f-k4nkz\" (UID: \"ed5ac3ea-71b9-41f5-b629-ec016b6ef3c7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-k4nkz" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.007819 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e98c44e-5a60-49a0-9186-2367509dda97-serving-cert\") pod \"controller-manager-879f6c89f-m9ntd\" (UID: \"4e98c44e-5a60-49a0-9186-2367509dda97\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m9ntd" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.007837 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1645de9b-f227-4d9d-885f-ffd58e5bef69-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-f47gb\" (UID: \"1645de9b-f227-4d9d-885f-ffd58e5bef69\") " pod="openshift-authentication/oauth-openshift-558db77b4-f47gb" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.007853 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0f76f7be-c8e4-4943-81f2-8e416e747aec-etcd-client\") pod \"etcd-operator-b45778765-kc8qq\" (UID: \"0f76f7be-c8e4-4943-81f2-8e416e747aec\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kc8qq" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.007870 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d003362b-b35a-4f18-b387-62ec1490321a-service-ca-bundle\") pod \"authentication-operator-69f744f599-68s5b\" (UID: \"d003362b-b35a-4f18-b387-62ec1490321a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-68s5b" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.007885 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/dd11fb82-1556-4769-a1cc-11589b905b3f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-26mg6\" (UID: \"dd11fb82-1556-4769-a1cc-11589b905b3f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-26mg6" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.007901 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/e846eda6-431b-4fdf-98dc-80e3fc6b122f-trusted-ca-bundle\") pod \"apiserver-76f77b778f-xx85g\" (UID: \"e846eda6-431b-4fdf-98dc-80e3fc6b122f\") " pod="openshift-apiserver/apiserver-76f77b778f-xx85g" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.007918 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txplh\" (UniqueName: \"kubernetes.io/projected/9336d238-f0fb-430a-acfe-4aa5c888ebc8-kube-api-access-txplh\") pod \"cluster-samples-operator-665b6dd947-gvpnh\" (UID: \"9336d238-f0fb-430a-acfe-4aa5c888ebc8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gvpnh" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.007933 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1645de9b-f227-4d9d-885f-ffd58e5bef69-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-f47gb\" (UID: \"1645de9b-f227-4d9d-885f-ffd58e5bef69\") " pod="openshift-authentication/oauth-openshift-558db77b4-f47gb" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.007950 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0f76f7be-c8e4-4943-81f2-8e416e747aec-etcd-service-ca\") pod \"etcd-operator-b45778765-kc8qq\" (UID: \"0f76f7be-c8e4-4943-81f2-8e416e747aec\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kc8qq" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.007966 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d003362b-b35a-4f18-b387-62ec1490321a-serving-cert\") pod \"authentication-operator-69f744f599-68s5b\" (UID: \"d003362b-b35a-4f18-b387-62ec1490321a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-68s5b" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.009535 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2f7713d-7734-477f-81df-093aaa83837f-config\") pod \"machine-approver-56656f9798-zcwtl\" (UID: \"e2f7713d-7734-477f-81df-093aaa83837f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zcwtl" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.009566 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4e98c44e-5a60-49a0-9186-2367509dda97-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-m9ntd\" (UID: \"4e98c44e-5a60-49a0-9186-2367509dda97\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m9ntd" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.009585 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tj5fk\" (UniqueName: \"kubernetes.io/projected/c6d1d322-1622-41d5-afb0-c441b346b8bf-kube-api-access-tj5fk\") pod \"downloads-7954f5f757-gbvgh\" (UID: \"c6d1d322-1622-41d5-afb0-c441b346b8bf\") " pod="openshift-console/downloads-7954f5f757-gbvgh" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.009602 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7d2e3e9e-69f8-46f4-b825-ee369ab23de8-audit-policies\") pod 
\"apiserver-7bbb656c7d-6khhb\" (UID: \"7d2e3e9e-69f8-46f4-b825-ee369ab23de8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6khhb" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.009624 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d40dd21f-096a-4dca-a313-566508e33dd3-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-jkj94\" (UID: \"d40dd21f-096a-4dca-a313-566508e33dd3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jkj94" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.009651 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/ee5f60d7-e2f5-4900-b238-e4ef9acf1de4-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-8vttb\" (UID: \"ee5f60d7-e2f5-4900-b238-e4ef9acf1de4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8vttb" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.009667 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e846eda6-431b-4fdf-98dc-80e3fc6b122f-audit-dir\") pod \"apiserver-76f77b778f-xx85g\" (UID: \"e846eda6-431b-4fdf-98dc-80e3fc6b122f\") " pod="openshift-apiserver/apiserver-76f77b778f-xx85g" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.009690 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed5ac3ea-71b9-41f5-b629-ec016b6ef3c7-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-k4nkz\" (UID: \"ed5ac3ea-71b9-41f5-b629-ec016b6ef3c7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-k4nkz" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.009708 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72d8a3d3-6b10-4a2d-bb42-b96a0d6135a5-config\") pod \"console-operator-58897d9998-sxtxj\" (UID: \"72d8a3d3-6b10-4a2d-bb42-b96a0d6135a5\") " pod="openshift-console-operator/console-operator-58897d9998-sxtxj" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.009727 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/9336d238-f0fb-430a-acfe-4aa5c888ebc8-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-gvpnh\" (UID: \"9336d238-f0fb-430a-acfe-4aa5c888ebc8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gvpnh" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.009743 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e846eda6-431b-4fdf-98dc-80e3fc6b122f-etcd-serving-ca\") pod \"apiserver-76f77b778f-xx85g\" (UID: \"e846eda6-431b-4fdf-98dc-80e3fc6b122f\") " pod="openshift-apiserver/apiserver-76f77b778f-xx85g" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.009769 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1645de9b-f227-4d9d-885f-ffd58e5bef69-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-f47gb\" (UID: 
\"1645de9b-f227-4d9d-885f-ffd58e5bef69\") " pod="openshift-authentication/oauth-openshift-558db77b4-f47gb" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.009791 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0f76f7be-c8e4-4943-81f2-8e416e747aec-etcd-ca\") pod \"etcd-operator-b45778765-kc8qq\" (UID: \"0f76f7be-c8e4-4943-81f2-8e416e747aec\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kc8qq" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.009809 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4e98c44e-5a60-49a0-9186-2367509dda97-client-ca\") pod \"controller-manager-879f6c89f-m9ntd\" (UID: \"4e98c44e-5a60-49a0-9186-2367509dda97\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m9ntd" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.009828 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1645de9b-f227-4d9d-885f-ffd58e5bef69-audit-dir\") pod \"oauth-openshift-558db77b4-f47gb\" (UID: \"1645de9b-f227-4d9d-885f-ffd58e5bef69\") " pod="openshift-authentication/oauth-openshift-558db77b4-f47gb" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.009851 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd11fb82-1556-4769-a1cc-11589b905b3f-serving-cert\") pod \"openshift-config-operator-7777fb866f-26mg6\" (UID: \"dd11fb82-1556-4769-a1cc-11589b905b3f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-26mg6" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.009876 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24e3bc3c-7e93-4c91-b0a2-85877004fafc-trusted-ca-bundle\") pod \"console-f9d7485db-n6qb5\" (UID: \"24e3bc3c-7e93-4c91-b0a2-85877004fafc\") " pod="openshift-console/console-f9d7485db-n6qb5" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.009896 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1645de9b-f227-4d9d-885f-ffd58e5bef69-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-f47gb\" (UID: \"1645de9b-f227-4d9d-885f-ffd58e5bef69\") " pod="openshift-authentication/oauth-openshift-558db77b4-f47gb" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.009912 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d40dd21f-096a-4dca-a313-566508e33dd3-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-jkj94\" (UID: \"d40dd21f-096a-4dca-a313-566508e33dd3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jkj94" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.009940 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1645de9b-f227-4d9d-885f-ffd58e5bef69-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-f47gb\" (UID: \"1645de9b-f227-4d9d-885f-ffd58e5bef69\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-f47gb" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.009965 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1645de9b-f227-4d9d-885f-ffd58e5bef69-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-f47gb\" (UID: \"1645de9b-f227-4d9d-885f-ffd58e5bef69\") " pod="openshift-authentication/oauth-openshift-558db77b4-f47gb" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.009992 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1645de9b-f227-4d9d-885f-ffd58e5bef69-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-f47gb\" (UID: \"1645de9b-f227-4d9d-885f-ffd58e5bef69\") " pod="openshift-authentication/oauth-openshift-558db77b4-f47gb" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.010017 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncjkj\" (UniqueName: \"kubernetes.io/projected/51e9f48f-0b01-4f2b-8d00-77fd91f6c3b0-kube-api-access-ncjkj\") pod \"machine-config-operator-74547568cd-2ldzj\" (UID: \"51e9f48f-0b01-4f2b-8d00-77fd91f6c3b0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2ldzj" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.010041 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e846eda6-431b-4fdf-98dc-80e3fc6b122f-encryption-config\") pod \"apiserver-76f77b778f-xx85g\" (UID: \"e846eda6-431b-4fdf-98dc-80e3fc6b122f\") " pod="openshift-apiserver/apiserver-76f77b778f-xx85g" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.010076 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcbtb\" (UniqueName: \"kubernetes.io/projected/e846eda6-431b-4fdf-98dc-80e3fc6b122f-kube-api-access-mcbtb\") pod \"apiserver-76f77b778f-xx85g\" (UID: \"e846eda6-431b-4fdf-98dc-80e3fc6b122f\") " pod="openshift-apiserver/apiserver-76f77b778f-xx85g" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.010103 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dknwm\" (UniqueName: \"kubernetes.io/projected/d003362b-b35a-4f18-b387-62ec1490321a-kube-api-access-dknwm\") pod \"authentication-operator-69f744f599-68s5b\" (UID: \"d003362b-b35a-4f18-b387-62ec1490321a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-68s5b" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.010126 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwp84\" (UniqueName: \"kubernetes.io/projected/4e98c44e-5a60-49a0-9186-2367509dda97-kube-api-access-rwp84\") pod \"controller-manager-879f6c89f-m9ntd\" (UID: \"4e98c44e-5a60-49a0-9186-2367509dda97\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m9ntd" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.010160 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72d8a3d3-6b10-4a2d-bb42-b96a0d6135a5-serving-cert\") pod \"console-operator-58897d9998-sxtxj\" (UID: 
\"72d8a3d3-6b10-4a2d-bb42-b96a0d6135a5\") " pod="openshift-console-operator/console-operator-58897d9998-sxtxj" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.010185 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f76f7be-c8e4-4943-81f2-8e416e747aec-serving-cert\") pod \"etcd-operator-b45778765-kc8qq\" (UID: \"0f76f7be-c8e4-4943-81f2-8e416e747aec\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kc8qq" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.010207 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/24e3bc3c-7e93-4c91-b0a2-85877004fafc-service-ca\") pod \"console-f9d7485db-n6qb5\" (UID: \"24e3bc3c-7e93-4c91-b0a2-85877004fafc\") " pod="openshift-console/console-f9d7485db-n6qb5" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.010233 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e846eda6-431b-4fdf-98dc-80e3fc6b122f-etcd-client\") pod \"apiserver-76f77b778f-xx85g\" (UID: \"e846eda6-431b-4fdf-98dc-80e3fc6b122f\") " pod="openshift-apiserver/apiserver-76f77b778f-xx85g" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.010260 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed5ac3ea-71b9-41f5-b629-ec016b6ef3c7-config\") pod \"openshift-apiserver-operator-796bbdcf4f-k4nkz\" (UID: \"ed5ac3ea-71b9-41f5-b629-ec016b6ef3c7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-k4nkz" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.010284 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/24e3bc3c-7e93-4c91-b0a2-85877004fafc-oauth-serving-cert\") pod \"console-f9d7485db-n6qb5\" (UID: \"24e3bc3c-7e93-4c91-b0a2-85877004fafc\") " pod="openshift-console/console-f9d7485db-n6qb5" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.010309 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7d2e3e9e-69f8-46f4-b825-ee369ab23de8-etcd-client\") pod \"apiserver-7bbb656c7d-6khhb\" (UID: \"7d2e3e9e-69f8-46f4-b825-ee369ab23de8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6khhb" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.010333 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ee5f60d7-e2f5-4900-b238-e4ef9acf1de4-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-8vttb\" (UID: \"ee5f60d7-e2f5-4900-b238-e4ef9acf1de4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8vttb" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.010354 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c4a5cf0a-76df-4855-a52d-22dbc07e8f7a-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-qgjc4\" (UID: \"c4a5cf0a-76df-4855-a52d-22dbc07e8f7a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qgjc4" Dec 10 15:25:39 crc 
kubenswrapper[4755]: I1210 15:25:39.010372 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f76f7be-c8e4-4943-81f2-8e416e747aec-config\") pod \"etcd-operator-b45778765-kc8qq\" (UID: \"0f76f7be-c8e4-4943-81f2-8e416e747aec\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kc8qq" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.010389 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4a5cf0a-76df-4855-a52d-22dbc07e8f7a-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-qgjc4\" (UID: \"c4a5cf0a-76df-4855-a52d-22dbc07e8f7a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qgjc4" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.010404 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d003362b-b35a-4f18-b387-62ec1490321a-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-68s5b\" (UID: \"d003362b-b35a-4f18-b387-62ec1490321a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-68s5b" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.010421 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7d2e3e9e-69f8-46f4-b825-ee369ab23de8-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-6khhb\" (UID: \"7d2e3e9e-69f8-46f4-b825-ee369ab23de8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6khhb" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.010439 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1645de9b-f227-4d9d-885f-ffd58e5bef69-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-f47gb\" (UID: \"1645de9b-f227-4d9d-885f-ffd58e5bef69\") " pod="openshift-authentication/oauth-openshift-558db77b4-f47gb" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.010457 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1645de9b-f227-4d9d-885f-ffd58e5bef69-audit-policies\") pod \"oauth-openshift-558db77b4-f47gb\" (UID: \"1645de9b-f227-4d9d-885f-ffd58e5bef69\") " pod="openshift-authentication/oauth-openshift-558db77b4-f47gb" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.010493 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/51e9f48f-0b01-4f2b-8d00-77fd91f6c3b0-proxy-tls\") pod \"machine-config-operator-74547568cd-2ldzj\" (UID: \"51e9f48f-0b01-4f2b-8d00-77fd91f6c3b0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2ldzj" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.011120 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e2f7713d-7734-477f-81df-093aaa83837f-auth-proxy-config\") pod \"machine-approver-56656f9798-zcwtl\" (UID: \"e2f7713d-7734-477f-81df-093aaa83837f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zcwtl" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.011981 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7d2e3e9e-69f8-46f4-b825-ee369ab23de8-audit-policies\") pod \"apiserver-7bbb656c7d-6khhb\" (UID: \"7d2e3e9e-69f8-46f4-b825-ee369ab23de8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6khhb" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.012082 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1645de9b-f227-4d9d-885f-ffd58e5bef69-audit-dir\") pod \"oauth-openshift-558db77b4-f47gb\" (UID: \"1645de9b-f227-4d9d-885f-ffd58e5bef69\") " pod="openshift-authentication/oauth-openshift-558db77b4-f47gb" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.012499 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1645de9b-f227-4d9d-885f-ffd58e5bef69-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-f47gb\" (UID: \"1645de9b-f227-4d9d-885f-ffd58e5bef69\") " pod="openshift-authentication/oauth-openshift-558db77b4-f47gb" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.015260 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/24e3bc3c-7e93-4c91-b0a2-85877004fafc-console-serving-cert\") pod \"console-f9d7485db-n6qb5\" (UID: \"24e3bc3c-7e93-4c91-b0a2-85877004fafc\") " pod="openshift-console/console-f9d7485db-n6qb5" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.015335 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/e846eda6-431b-4fdf-98dc-80e3fc6b122f-audit\") pod \"apiserver-76f77b778f-xx85g\" (UID: \"e846eda6-431b-4fdf-98dc-80e3fc6b122f\") " pod="openshift-apiserver/apiserver-76f77b778f-xx85g" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.015385 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7d2e3e9e-69f8-46f4-b825-ee369ab23de8-audit-dir\") pod \"apiserver-7bbb656c7d-6khhb\" (UID: \"7d2e3e9e-69f8-46f4-b825-ee369ab23de8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6khhb" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.010202 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-6hpgg"] Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.015413 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9xch\" (UniqueName: \"kubernetes.io/projected/24e3bc3c-7e93-4c91-b0a2-85877004fafc-kube-api-access-l9xch\") pod \"console-f9d7485db-n6qb5\" (UID: \"24e3bc3c-7e93-4c91-b0a2-85877004fafc\") " pod="openshift-console/console-f9d7485db-n6qb5" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.015446 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/228e9f52-aead-4cf5-af32-8b0b3aec8cf4-config\") pod \"machine-api-operator-5694c8668f-n66x6\" (UID: \"228e9f52-aead-4cf5-af32-8b0b3aec8cf4\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-n66x6" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.015500 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/72d8a3d3-6b10-4a2d-bb42-b96a0d6135a5-trusted-ca\") pod \"console-operator-58897d9998-sxtxj\" (UID: \"72d8a3d3-6b10-4a2d-bb42-b96a0d6135a5\") " pod="openshift-console-operator/console-operator-58897d9998-sxtxj" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.015519 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/24e3bc3c-7e93-4c91-b0a2-85877004fafc-console-oauth-config\") pod \"console-f9d7485db-n6qb5\" (UID: \"24e3bc3c-7e93-4c91-b0a2-85877004fafc\") " pod="openshift-console/console-f9d7485db-n6qb5" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.015566 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmscp\" (UniqueName: \"kubernetes.io/projected/72d8a3d3-6b10-4a2d-bb42-b96a0d6135a5-kube-api-access-bmscp\") pod \"console-operator-58897d9998-sxtxj\" (UID: \"72d8a3d3-6b10-4a2d-bb42-b96a0d6135a5\") " pod="openshift-console-operator/console-operator-58897d9998-sxtxj" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.015588 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n56zj\" (UniqueName: \"kubernetes.io/projected/d40dd21f-096a-4dca-a313-566508e33dd3-kube-api-access-n56zj\") pod \"kube-storage-version-migrator-operator-b67b599dd-jkj94\" (UID: \"d40dd21f-096a-4dca-a313-566508e33dd3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jkj94" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.015607 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/24e3bc3c-7e93-4c91-b0a2-85877004fafc-console-config\") pod \"console-f9d7485db-n6qb5\" (UID: \"24e3bc3c-7e93-4c91-b0a2-85877004fafc\") " pod="openshift-console/console-f9d7485db-n6qb5" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.015626 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d003362b-b35a-4f18-b387-62ec1490321a-config\") pod \"authentication-operator-69f744f599-68s5b\" (UID: \"d003362b-b35a-4f18-b387-62ec1490321a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-68s5b" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.015650 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d2e3e9e-69f8-46f4-b825-ee369ab23de8-serving-cert\") pod \"apiserver-7bbb656c7d-6khhb\" (UID: \"7d2e3e9e-69f8-46f4-b825-ee369ab23de8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6khhb" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.015667 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c1052c9-fa99-4f24-8fff-923dc489c08d-serving-cert\") pod \"route-controller-manager-6576b87f9c-4bgvx\" (UID: \"8c1052c9-fa99-4f24-8fff-923dc489c08d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4bgvx" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.015695 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8h5xr\" (UniqueName: 
\"kubernetes.io/projected/8c1052c9-fa99-4f24-8fff-923dc489c08d-kube-api-access-8h5xr\") pod \"route-controller-manager-6576b87f9c-4bgvx\" (UID: \"8c1052c9-fa99-4f24-8fff-923dc489c08d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4bgvx" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.015827 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7d2e3e9e-69f8-46f4-b825-ee369ab23de8-encryption-config\") pod \"apiserver-7bbb656c7d-6khhb\" (UID: \"7d2e3e9e-69f8-46f4-b825-ee369ab23de8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6khhb" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.015828 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7d2e3e9e-69f8-46f4-b825-ee369ab23de8-audit-dir\") pod \"apiserver-7bbb656c7d-6khhb\" (UID: \"7d2e3e9e-69f8-46f4-b825-ee369ab23de8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6khhb" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.015851 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65szh\" (UniqueName: \"kubernetes.io/projected/7d2e3e9e-69f8-46f4-b825-ee369ab23de8-kube-api-access-65szh\") pod \"apiserver-7bbb656c7d-6khhb\" (UID: \"7d2e3e9e-69f8-46f4-b825-ee369ab23de8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6khhb" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.015873 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e2f7713d-7734-477f-81df-093aaa83837f-machine-approver-tls\") pod \"machine-approver-56656f9798-zcwtl\" (UID: \"e2f7713d-7734-477f-81df-093aaa83837f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zcwtl" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.015891 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c1052c9-fa99-4f24-8fff-923dc489c08d-config\") pod \"route-controller-manager-6576b87f9c-4bgvx\" (UID: \"8c1052c9-fa99-4f24-8fff-923dc489c08d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4bgvx" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.015914 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e98c44e-5a60-49a0-9186-2367509dda97-config\") pod \"controller-manager-879f6c89f-m9ntd\" (UID: \"4e98c44e-5a60-49a0-9186-2367509dda97\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m9ntd" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.015939 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7d2e3e9e-69f8-46f4-b825-ee369ab23de8-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-6khhb\" (UID: \"7d2e3e9e-69f8-46f4-b825-ee369ab23de8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6khhb" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.015961 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1645de9b-f227-4d9d-885f-ffd58e5bef69-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-f47gb\" (UID: \"1645de9b-f227-4d9d-885f-ffd58e5bef69\") 
" pod="openshift-authentication/oauth-openshift-558db77b4-f47gb" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.016007 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5bvrp"] Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.016404 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vnzg7"] Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.016588 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72d8a3d3-6b10-4a2d-bb42-b96a0d6135a5-config\") pod \"console-operator-58897d9998-sxtxj\" (UID: \"72d8a3d3-6b10-4a2d-bb42-b96a0d6135a5\") " pod="openshift-console-operator/console-operator-58897d9998-sxtxj" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.016846 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-skm6x"] Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.017025 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1645de9b-f227-4d9d-885f-ffd58e5bef69-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-f47gb\" (UID: \"1645de9b-f227-4d9d-885f-ffd58e5bef69\") " pod="openshift-authentication/oauth-openshift-558db77b4-f47gb" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.017132 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tvcd\" (UniqueName: \"kubernetes.io/projected/ee5f60d7-e2f5-4900-b238-e4ef9acf1de4-kube-api-access-4tvcd\") pod \"cluster-image-registry-operator-dc59b4c8b-8vttb\" (UID: \"ee5f60d7-e2f5-4900-b238-e4ef9acf1de4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8vttb" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.017164 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e846eda6-431b-4fdf-98dc-80e3fc6b122f-config\") pod \"apiserver-76f77b778f-xx85g\" (UID: \"e846eda6-431b-4fdf-98dc-80e3fc6b122f\") " pod="openshift-apiserver/apiserver-76f77b778f-xx85g" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.017299 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-skm6x" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.017654 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-6hpgg" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.017852 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-5bvrp" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.018042 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vnzg7" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.018188 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/72d8a3d3-6b10-4a2d-bb42-b96a0d6135a5-trusted-ca\") pod \"console-operator-58897d9998-sxtxj\" (UID: \"72d8a3d3-6b10-4a2d-bb42-b96a0d6135a5\") " pod="openshift-console-operator/console-operator-58897d9998-sxtxj" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.018210 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1645de9b-f227-4d9d-885f-ffd58e5bef69-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-f47gb\" (UID: \"1645de9b-f227-4d9d-885f-ffd58e5bef69\") " pod="openshift-authentication/oauth-openshift-558db77b4-f47gb" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.009355 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.018715 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1645de9b-f227-4d9d-885f-ffd58e5bef69-audit-policies\") pod \"oauth-openshift-558db77b4-f47gb\" (UID: \"1645de9b-f227-4d9d-885f-ffd58e5bef69\") " pod="openshift-authentication/oauth-openshift-558db77b4-f47gb" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.020239 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2f7713d-7734-477f-81df-093aaa83837f-config\") pod \"machine-approver-56656f9798-zcwtl\" (UID: \"e2f7713d-7734-477f-81df-093aaa83837f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zcwtl" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.010149 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.010155 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.011611 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.012352 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.021142 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1645de9b-f227-4d9d-885f-ffd58e5bef69-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-f47gb\" (UID: \"1645de9b-f227-4d9d-885f-ffd58e5bef69\") " pod="openshift-authentication/oauth-openshift-558db77b4-f47gb" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.021877 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7d2e3e9e-69f8-46f4-b825-ee369ab23de8-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-6khhb\" (UID: \"7d2e3e9e-69f8-46f4-b825-ee369ab23de8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6khhb" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 
15:25:39.022412 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1645de9b-f227-4d9d-885f-ffd58e5bef69-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-f47gb\" (UID: \"1645de9b-f227-4d9d-885f-ffd58e5bef69\") " pod="openshift-authentication/oauth-openshift-558db77b4-f47gb" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.022910 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d2e3e9e-69f8-46f4-b825-ee369ab23de8-serving-cert\") pod \"apiserver-7bbb656c7d-6khhb\" (UID: \"7d2e3e9e-69f8-46f4-b825-ee369ab23de8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6khhb" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.023112 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7d2e3e9e-69f8-46f4-b825-ee369ab23de8-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-6khhb\" (UID: \"7d2e3e9e-69f8-46f4-b825-ee369ab23de8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6khhb" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.023257 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.023767 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72d8a3d3-6b10-4a2d-bb42-b96a0d6135a5-serving-cert\") pod \"console-operator-58897d9998-sxtxj\" (UID: \"72d8a3d3-6b10-4a2d-bb42-b96a0d6135a5\") " pod="openshift-console-operator/console-operator-58897d9998-sxtxj" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.023840 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/228e9f52-aead-4cf5-af32-8b0b3aec8cf4-images\") pod \"machine-api-operator-5694c8668f-n66x6\" (UID: \"228e9f52-aead-4cf5-af32-8b0b3aec8cf4\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-n66x6" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.023873 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9w25\" (UniqueName: \"kubernetes.io/projected/228e9f52-aead-4cf5-af32-8b0b3aec8cf4-kube-api-access-l9w25\") pod \"machine-api-operator-5694c8668f-n66x6\" (UID: \"228e9f52-aead-4cf5-af32-8b0b3aec8cf4\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-n66x6" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.023943 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1645de9b-f227-4d9d-885f-ffd58e5bef69-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-f47gb\" (UID: \"1645de9b-f227-4d9d-885f-ffd58e5bef69\") " pod="openshift-authentication/oauth-openshift-558db77b4-f47gb" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.024126 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1645de9b-f227-4d9d-885f-ffd58e5bef69-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-f47gb\" (UID: \"1645de9b-f227-4d9d-885f-ffd58e5bef69\") " pod="openshift-authentication/oauth-openshift-558db77b4-f47gb" Dec 10 15:25:39 crc kubenswrapper[4755]: 
I1210 15:25:39.024194 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/228e9f52-aead-4cf5-af32-8b0b3aec8cf4-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-n66x6\" (UID: \"228e9f52-aead-4cf5-af32-8b0b3aec8cf4\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-n66x6" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.024232 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ee5f60d7-e2f5-4900-b238-e4ef9acf1de4-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-8vttb\" (UID: \"ee5f60d7-e2f5-4900-b238-e4ef9acf1de4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8vttb" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.024255 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e846eda6-431b-4fdf-98dc-80e3fc6b122f-node-pullsecrets\") pod \"apiserver-76f77b778f-xx85g\" (UID: \"e846eda6-431b-4fdf-98dc-80e3fc6b122f\") " pod="openshift-apiserver/apiserver-76f77b778f-xx85g" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.024485 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhg7z\" (UniqueName: \"kubernetes.io/projected/1645de9b-f227-4d9d-885f-ffd58e5bef69-kube-api-access-bhg7z\") pod \"oauth-openshift-558db77b4-f47gb\" (UID: \"1645de9b-f227-4d9d-885f-ffd58e5bef69\") " pod="openshift-authentication/oauth-openshift-558db77b4-f47gb" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.024608 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4a5cf0a-76df-4855-a52d-22dbc07e8f7a-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-qgjc4\" (UID: \"c4a5cf0a-76df-4855-a52d-22dbc07e8f7a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qgjc4" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.024832 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48gjk\" (UniqueName: \"kubernetes.io/projected/dd11fb82-1556-4769-a1cc-11589b905b3f-kube-api-access-48gjk\") pod \"openshift-config-operator-7777fb866f-26mg6\" (UID: \"dd11fb82-1556-4769-a1cc-11589b905b3f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-26mg6" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.024933 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/e846eda6-431b-4fdf-98dc-80e3fc6b122f-image-import-ca\") pod \"apiserver-76f77b778f-xx85g\" (UID: \"e846eda6-431b-4fdf-98dc-80e3fc6b122f\") " pod="openshift-apiserver/apiserver-76f77b778f-xx85g" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.025935 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7d2e3e9e-69f8-46f4-b825-ee369ab23de8-etcd-client\") pod \"apiserver-7bbb656c7d-6khhb\" (UID: \"7d2e3e9e-69f8-46f4-b825-ee369ab23de8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6khhb" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.026274 4755 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1645de9b-f227-4d9d-885f-ffd58e5bef69-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-f47gb\" (UID: \"1645de9b-f227-4d9d-885f-ffd58e5bef69\") " pod="openshift-authentication/oauth-openshift-558db77b4-f47gb" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.026293 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e2f7713d-7734-477f-81df-093aaa83837f-machine-approver-tls\") pod \"machine-approver-56656f9798-zcwtl\" (UID: \"e2f7713d-7734-477f-81df-093aaa83837f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zcwtl" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.027297 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1645de9b-f227-4d9d-885f-ffd58e5bef69-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-f47gb\" (UID: \"1645de9b-f227-4d9d-885f-ffd58e5bef69\") " pod="openshift-authentication/oauth-openshift-558db77b4-f47gb" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.028046 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1645de9b-f227-4d9d-885f-ffd58e5bef69-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-f47gb\" (UID: \"1645de9b-f227-4d9d-885f-ffd58e5bef69\") " pod="openshift-authentication/oauth-openshift-558db77b4-f47gb" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.032853 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1645de9b-f227-4d9d-885f-ffd58e5bef69-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-f47gb\" (UID: \"1645de9b-f227-4d9d-885f-ffd58e5bef69\") " pod="openshift-authentication/oauth-openshift-558db77b4-f47gb" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.040937 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7d2e3e9e-69f8-46f4-b825-ee369ab23de8-encryption-config\") pod \"apiserver-7bbb656c7d-6khhb\" (UID: \"7d2e3e9e-69f8-46f4-b825-ee369ab23de8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6khhb" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.044412 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.057555 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sdrvv"] Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.058554 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4fdzm"] Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.059078 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4fdzm" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.059263 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.059292 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-j46lk"] Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.059635 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sdrvv" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.059696 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.064064 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-j46lk" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.065736 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1645de9b-f227-4d9d-885f-ffd58e5bef69-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-f47gb\" (UID: \"1645de9b-f227-4d9d-885f-ffd58e5bef69\") " pod="openshift-authentication/oauth-openshift-558db77b4-f47gb" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.066198 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1645de9b-f227-4d9d-885f-ffd58e5bef69-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-f47gb\" (UID: \"1645de9b-f227-4d9d-885f-ffd58e5bef69\") " pod="openshift-authentication/oauth-openshift-558db77b4-f47gb" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.067175 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/9336d238-f0fb-430a-acfe-4aa5c888ebc8-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-gvpnh\" (UID: \"9336d238-f0fb-430a-acfe-4aa5c888ebc8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gvpnh" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.075190 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.077996 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gvpnh"] Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.078046 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29422995-knlb5"] Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.079278 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29422995-knlb5" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.088542 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-ts8wv"] Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.089508 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-ts8wv" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.090162 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.106717 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-b4kk7"] Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.108019 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-vhv9h"] Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.111386 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-b4kk7" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.112873 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g5hs8"] Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.113260 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-vhv9h" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.114317 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-plc7w"] Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.115148 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.115611 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g5hs8" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.122203 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-n66x6"] Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.122584 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-plc7w" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.122933 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-6khhb"] Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.126281 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-mqq47"] Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.128567 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/22368f6c-56f1-4ef1-bada-fcad04f7b8a4-webhook-cert\") pod \"packageserver-d55dfcdfc-vnzg7\" (UID: \"22368f6c-56f1-4ef1-bada-fcad04f7b8a4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vnzg7" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.128605 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee6ce5dd-6c0d-4acd-b795-6cb930770bec-config\") pod \"service-ca-operator-777779d784-6hpgg\" (UID: \"ee6ce5dd-6c0d-4acd-b795-6cb930770bec\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6hpgg" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.128633 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/228e9f52-aead-4cf5-af32-8b0b3aec8cf4-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-n66x6\" (UID: \"228e9f52-aead-4cf5-af32-8b0b3aec8cf4\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-n66x6" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.128655 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ee5f60d7-e2f5-4900-b238-e4ef9acf1de4-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-8vttb\" (UID: \"ee5f60d7-e2f5-4900-b238-e4ef9acf1de4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8vttb" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.128674 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/dd9b14b8-6d7c-4eeb-9748-a2e99daa4293-stats-auth\") pod \"router-default-5444994796-ts8wv\" (UID: \"dd9b14b8-6d7c-4eeb-9748-a2e99daa4293\") " pod="openshift-ingress/router-default-5444994796-ts8wv" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.128694 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7r8q8"] Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.129219 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7r8q8" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.129371 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-mqq47" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.128694 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4a5cf0a-76df-4855-a52d-22dbc07e8f7a-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-qgjc4\" (UID: \"c4a5cf0a-76df-4855-a52d-22dbc07e8f7a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qgjc4" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.129530 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-68s5b"] Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.130525 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ee5f60d7-e2f5-4900-b238-e4ef9acf1de4-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-8vttb\" (UID: \"ee5f60d7-e2f5-4900-b238-e4ef9acf1de4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8vttb" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.131526 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cvrkm"] Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.132425 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/228e9f52-aead-4cf5-af32-8b0b3aec8cf4-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-n66x6\" (UID: \"228e9f52-aead-4cf5-af32-8b0b3aec8cf4\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-n66x6" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.133692 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxncl\" (UniqueName: \"kubernetes.io/projected/0f76f7be-c8e4-4943-81f2-8e416e747aec-kube-api-access-cxncl\") pod \"etcd-operator-b45778765-kc8qq\" (UID: \"0f76f7be-c8e4-4943-81f2-8e416e747aec\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kc8qq" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.133767 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/51e9f48f-0b01-4f2b-8d00-77fd91f6c3b0-auth-proxy-config\") pod \"machine-config-operator-74547568cd-2ldzj\" (UID: \"51e9f48f-0b01-4f2b-8d00-77fd91f6c3b0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2ldzj" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.133804 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e98c44e-5a60-49a0-9186-2367509dda97-serving-cert\") pod \"controller-manager-879f6c89f-m9ntd\" (UID: \"4e98c44e-5a60-49a0-9186-2367509dda97\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m9ntd" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.133837 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/dd11fb82-1556-4769-a1cc-11589b905b3f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-26mg6\" (UID: \"dd11fb82-1556-4769-a1cc-11589b905b3f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-26mg6" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 
15:25:39.133869 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/27f3b602-6196-4cbc-bf90-e695403d20c7-metrics-tls\") pod \"ingress-operator-5b745b69d9-j46lk\" (UID: \"27f3b602-6196-4cbc-bf90-e695403d20c7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-j46lk" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.133930 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d003362b-b35a-4f18-b387-62ec1490321a-serving-cert\") pod \"authentication-operator-69f744f599-68s5b\" (UID: \"d003362b-b35a-4f18-b387-62ec1490321a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-68s5b" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.133958 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tj5fk\" (UniqueName: \"kubernetes.io/projected/c6d1d322-1622-41d5-afb0-c441b346b8bf-kube-api-access-tj5fk\") pod \"downloads-7954f5f757-gbvgh\" (UID: \"c6d1d322-1622-41d5-afb0-c441b346b8bf\") " pod="openshift-console/downloads-7954f5f757-gbvgh" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.133983 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd9b14b8-6d7c-4eeb-9748-a2e99daa4293-service-ca-bundle\") pod \"router-default-5444994796-ts8wv\" (UID: \"dd9b14b8-6d7c-4eeb-9748-a2e99daa4293\") " pod="openshift-ingress/router-default-5444994796-ts8wv" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.134020 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/ee5f60d7-e2f5-4900-b238-e4ef9acf1de4-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-8vttb\" (UID: \"ee5f60d7-e2f5-4900-b238-e4ef9acf1de4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8vttb" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.134046 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e846eda6-431b-4fdf-98dc-80e3fc6b122f-audit-dir\") pod \"apiserver-76f77b778f-xx85g\" (UID: \"e846eda6-431b-4fdf-98dc-80e3fc6b122f\") " pod="openshift-apiserver/apiserver-76f77b778f-xx85g" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.134081 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed5ac3ea-71b9-41f5-b629-ec016b6ef3c7-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-k4nkz\" (UID: \"ed5ac3ea-71b9-41f5-b629-ec016b6ef3c7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-k4nkz" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.134129 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e846eda6-431b-4fdf-98dc-80e3fc6b122f-etcd-serving-ca\") pod \"apiserver-76f77b778f-xx85g\" (UID: \"e846eda6-431b-4fdf-98dc-80e3fc6b122f\") " pod="openshift-apiserver/apiserver-76f77b778f-xx85g" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.134155 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/8d212083-766e-4b88-ba08-8570f05f6c94-secret-volume\") pod \"collect-profiles-29422995-knlb5\" (UID: \"8d212083-766e-4b88-ba08-8570f05f6c94\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422995-knlb5" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.134198 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0f76f7be-c8e4-4943-81f2-8e416e747aec-etcd-ca\") pod \"etcd-operator-b45778765-kc8qq\" (UID: \"0f76f7be-c8e4-4943-81f2-8e416e747aec\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kc8qq" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.134232 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6d5f332b-9d6b-40c2-8e63-47aa309ea740-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-7svz5\" (UID: \"6d5f332b-9d6b-40c2-8e63-47aa309ea740\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7svz5" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.134259 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd11fb82-1556-4769-a1cc-11589b905b3f-serving-cert\") pod \"openshift-config-operator-7777fb866f-26mg6\" (UID: \"dd11fb82-1556-4769-a1cc-11589b905b3f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-26mg6" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.134309 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcvkn\" (UniqueName: \"kubernetes.io/projected/3fd6e72e-c555-446b-ad32-bf71e8c1be54-kube-api-access-vcvkn\") pod \"package-server-manager-789f6589d5-dhh5x\" (UID: \"3fd6e72e-c555-446b-ad32-bf71e8c1be54\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dhh5x" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.134337 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbmrd\" (UniqueName: \"kubernetes.io/projected/e0d76e44-8053-4afa-b974-bb3f945c9c23-kube-api-access-pbmrd\") pod \"dns-operator-744455d44c-vhv9h\" (UID: \"e0d76e44-8053-4afa-b974-bb3f945c9c23\") " pod="openshift-dns-operator/dns-operator-744455d44c-vhv9h" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.134391 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24e3bc3c-7e93-4c91-b0a2-85877004fafc-trusted-ca-bundle\") pod \"console-f9d7485db-n6qb5\" (UID: \"24e3bc3c-7e93-4c91-b0a2-85877004fafc\") " pod="openshift-console/console-f9d7485db-n6qb5" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.134450 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqmw2\" (UniqueName: \"kubernetes.io/projected/64e1b92e-9035-4439-abdc-86205e68c591-kube-api-access-zqmw2\") pod \"marketplace-operator-79b997595-5bvrp\" (UID: \"64e1b92e-9035-4439-abdc-86205e68c591\") " pod="openshift-marketplace/marketplace-operator-79b997595-5bvrp" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.134525 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncjkj\" (UniqueName: \"kubernetes.io/projected/51e9f48f-0b01-4f2b-8d00-77fd91f6c3b0-kube-api-access-ncjkj\") pod 
\"machine-config-operator-74547568cd-2ldzj\" (UID: \"51e9f48f-0b01-4f2b-8d00-77fd91f6c3b0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2ldzj" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.134551 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e846eda6-431b-4fdf-98dc-80e3fc6b122f-encryption-config\") pod \"apiserver-76f77b778f-xx85g\" (UID: \"e846eda6-431b-4fdf-98dc-80e3fc6b122f\") " pod="openshift-apiserver/apiserver-76f77b778f-xx85g" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.134591 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dknwm\" (UniqueName: \"kubernetes.io/projected/d003362b-b35a-4f18-b387-62ec1490321a-kube-api-access-dknwm\") pod \"authentication-operator-69f744f599-68s5b\" (UID: \"d003362b-b35a-4f18-b387-62ec1490321a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-68s5b" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.134617 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e0d76e44-8053-4afa-b974-bb3f945c9c23-metrics-tls\") pod \"dns-operator-744455d44c-vhv9h\" (UID: \"e0d76e44-8053-4afa-b974-bb3f945c9c23\") " pod="openshift-dns-operator/dns-operator-744455d44c-vhv9h" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.134643 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/27f3b602-6196-4cbc-bf90-e695403d20c7-trusted-ca\") pod \"ingress-operator-5b745b69d9-j46lk\" (UID: \"27f3b602-6196-4cbc-bf90-e695403d20c7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-j46lk" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.134667 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78gsp\" (UniqueName: \"kubernetes.io/projected/460a4f34-c415-4a61-8877-7ad9d851c0e5-kube-api-access-78gsp\") pod \"migrator-59844c95c7-sw7vc\" (UID: \"460a4f34-c415-4a61-8877-7ad9d851c0e5\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-sw7vc" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.134735 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/24e3bc3c-7e93-4c91-b0a2-85877004fafc-service-ca\") pod \"console-f9d7485db-n6qb5\" (UID: \"24e3bc3c-7e93-4c91-b0a2-85877004fafc\") " pod="openshift-console/console-f9d7485db-n6qb5" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.134767 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/51e9f48f-0b01-4f2b-8d00-77fd91f6c3b0-auth-proxy-config\") pod \"machine-config-operator-74547568cd-2ldzj\" (UID: \"51e9f48f-0b01-4f2b-8d00-77fd91f6c3b0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2ldzj" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.135616 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-xx85g"] Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.135641 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-f47gb"] Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.135657 4755 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-455tj"] Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.135922 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24e3bc3c-7e93-4c91-b0a2-85877004fafc-trusted-ca-bundle\") pod \"console-f9d7485db-n6qb5\" (UID: \"24e3bc3c-7e93-4c91-b0a2-85877004fafc\") " pod="openshift-console/console-f9d7485db-n6qb5" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.136023 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e846eda6-431b-4fdf-98dc-80e3fc6b122f-etcd-client\") pod \"apiserver-76f77b778f-xx85g\" (UID: \"e846eda6-431b-4fdf-98dc-80e3fc6b122f\") " pod="openshift-apiserver/apiserver-76f77b778f-xx85g" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.136068 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e846eda6-431b-4fdf-98dc-80e3fc6b122f-audit-dir\") pod \"apiserver-76f77b778f-xx85g\" (UID: \"e846eda6-431b-4fdf-98dc-80e3fc6b122f\") " pod="openshift-apiserver/apiserver-76f77b778f-xx85g" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.136071 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dd9b14b8-6d7c-4eeb-9748-a2e99daa4293-metrics-certs\") pod \"router-default-5444994796-ts8wv\" (UID: \"dd9b14b8-6d7c-4eeb-9748-a2e99daa4293\") " pod="openshift-ingress/router-default-5444994796-ts8wv" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.136174 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rw6tc\" (UniqueName: \"kubernetes.io/projected/dd9b14b8-6d7c-4eeb-9748-a2e99daa4293-kube-api-access-rw6tc\") pod \"router-default-5444994796-ts8wv\" (UID: \"dd9b14b8-6d7c-4eeb-9748-a2e99daa4293\") " pod="openshift-ingress/router-default-5444994796-ts8wv" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.136245 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/24e3bc3c-7e93-4c91-b0a2-85877004fafc-oauth-serving-cert\") pod \"console-f9d7485db-n6qb5\" (UID: \"24e3bc3c-7e93-4c91-b0a2-85877004fafc\") " pod="openshift-console/console-f9d7485db-n6qb5" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.136308 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f76f7be-c8e4-4943-81f2-8e416e747aec-config\") pod \"etcd-operator-b45778765-kc8qq\" (UID: \"0f76f7be-c8e4-4943-81f2-8e416e747aec\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kc8qq" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.136380 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d003362b-b35a-4f18-b387-62ec1490321a-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-68s5b\" (UID: \"d003362b-b35a-4f18-b387-62ec1490321a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-68s5b" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.136415 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/24e3bc3c-7e93-4c91-b0a2-85877004fafc-console-serving-cert\") pod \"console-f9d7485db-n6qb5\" (UID: \"24e3bc3c-7e93-4c91-b0a2-85877004fafc\") " pod="openshift-console/console-f9d7485db-n6qb5" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.136446 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-455tj" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.136490 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/e846eda6-431b-4fdf-98dc-80e3fc6b122f-audit\") pod \"apiserver-76f77b778f-xx85g\" (UID: \"e846eda6-431b-4fdf-98dc-80e3fc6b122f\") " pod="openshift-apiserver/apiserver-76f77b778f-xx85g" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.136525 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d5f332b-9d6b-40c2-8e63-47aa309ea740-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-7svz5\" (UID: \"6d5f332b-9d6b-40c2-8e63-47aa309ea740\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7svz5" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.136585 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d5f332b-9d6b-40c2-8e63-47aa309ea740-config\") pod \"kube-apiserver-operator-766d6c64bb-7svz5\" (UID: \"6d5f332b-9d6b-40c2-8e63-47aa309ea740\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7svz5" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.136614 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e45db753-a360-4857-9bd4-11f898ede4dc-config\") pod \"kube-controller-manager-operator-78b949d7b-4fdzm\" (UID: \"e45db753-a360-4857-9bd4-11f898ede4dc\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4fdzm" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.136680 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-sxtxj"] Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.136737 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cvrkm" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.136695 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/22368f6c-56f1-4ef1-bada-fcad04f7b8a4-apiservice-cert\") pod \"packageserver-d55dfcdfc-vnzg7\" (UID: \"22368f6c-56f1-4ef1-bada-fcad04f7b8a4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vnzg7" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.136981 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/24e3bc3c-7e93-4c91-b0a2-85877004fafc-service-ca\") pod \"console-f9d7485db-n6qb5\" (UID: \"24e3bc3c-7e93-4c91-b0a2-85877004fafc\") " pod="openshift-console/console-f9d7485db-n6qb5" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.137382 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n56zj\" (UniqueName: \"kubernetes.io/projected/d40dd21f-096a-4dca-a313-566508e33dd3-kube-api-access-n56zj\") pod \"kube-storage-version-migrator-operator-b67b599dd-jkj94\" (UID: \"d40dd21f-096a-4dca-a313-566508e33dd3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jkj94" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.137422 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c1052c9-fa99-4f24-8fff-923dc489c08d-serving-cert\") pod \"route-controller-manager-6576b87f9c-4bgvx\" (UID: \"8c1052c9-fa99-4f24-8fff-923dc489c08d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4bgvx" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.137449 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/dd9b14b8-6d7c-4eeb-9748-a2e99daa4293-default-certificate\") pod \"router-default-5444994796-ts8wv\" (UID: \"dd9b14b8-6d7c-4eeb-9748-a2e99daa4293\") " pod="openshift-ingress/router-default-5444994796-ts8wv" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.137454 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/24e3bc3c-7e93-4c91-b0a2-85877004fafc-oauth-serving-cert\") pod \"console-f9d7485db-n6qb5\" (UID: \"24e3bc3c-7e93-4c91-b0a2-85877004fafc\") " pod="openshift-console/console-f9d7485db-n6qb5" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.137508 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e45db753-a360-4857-9bd4-11f898ede4dc-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-4fdzm\" (UID: \"e45db753-a360-4857-9bd4-11f898ede4dc\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4fdzm" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.137539 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c1052c9-fa99-4f24-8fff-923dc489c08d-config\") pod \"route-controller-manager-6576b87f9c-4bgvx\" (UID: \"8c1052c9-fa99-4f24-8fff-923dc489c08d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4bgvx" 
Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.137565 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e98c44e-5a60-49a0-9186-2367509dda97-config\") pod \"controller-manager-879f6c89f-m9ntd\" (UID: \"4e98c44e-5a60-49a0-9186-2367509dda97\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m9ntd" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.137595 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/011a7c6a-048f-4db1-ac1f-259b44dd28bc-signing-key\") pod \"service-ca-9c57cc56f-skm6x\" (UID: \"011a7c6a-048f-4db1-ac1f-259b44dd28bc\") " pod="openshift-service-ca/service-ca-9c57cc56f-skm6x" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.137621 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3fd6e72e-c555-446b-ad32-bf71e8c1be54-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-dhh5x\" (UID: \"3fd6e72e-c555-446b-ad32-bf71e8c1be54\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dhh5x" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.137666 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9gpg\" (UniqueName: \"kubernetes.io/projected/d91d7b48-9096-4f3e-8260-2d762173eb80-kube-api-access-f9gpg\") pod \"control-plane-machine-set-operator-78cbb6b69f-sdrvv\" (UID: \"d91d7b48-9096-4f3e-8260-2d762173eb80\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sdrvv" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.137735 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e98c44e-5a60-49a0-9186-2367509dda97-serving-cert\") pod \"controller-manager-879f6c89f-m9ntd\" (UID: \"4e98c44e-5a60-49a0-9186-2367509dda97\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m9ntd" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.137822 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/228e9f52-aead-4cf5-af32-8b0b3aec8cf4-images\") pod \"machine-api-operator-5694c8668f-n66x6\" (UID: \"228e9f52-aead-4cf5-af32-8b0b3aec8cf4\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-n66x6" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.137855 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/64e1b92e-9035-4439-abdc-86205e68c591-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-5bvrp\" (UID: \"64e1b92e-9035-4439-abdc-86205e68c591\") " pod="openshift-marketplace/marketplace-operator-79b997595-5bvrp" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.137915 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e846eda6-431b-4fdf-98dc-80e3fc6b122f-node-pullsecrets\") pod \"apiserver-76f77b778f-xx85g\" (UID: \"e846eda6-431b-4fdf-98dc-80e3fc6b122f\") " pod="openshift-apiserver/apiserver-76f77b778f-xx85g" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.137937 4755 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/27f3b602-6196-4cbc-bf90-e695403d20c7-bound-sa-token\") pod \"ingress-operator-5b745b69d9-j46lk\" (UID: \"27f3b602-6196-4cbc-bf90-e695403d20c7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-j46lk" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.137982 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48gjk\" (UniqueName: \"kubernetes.io/projected/dd11fb82-1556-4769-a1cc-11589b905b3f-kube-api-access-48gjk\") pod \"openshift-config-operator-7777fb866f-26mg6\" (UID: \"dd11fb82-1556-4769-a1cc-11589b905b3f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-26mg6" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.138009 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/e846eda6-431b-4fdf-98dc-80e3fc6b122f-image-import-ca\") pod \"apiserver-76f77b778f-xx85g\" (UID: \"e846eda6-431b-4fdf-98dc-80e3fc6b122f\") " pod="openshift-apiserver/apiserver-76f77b778f-xx85g" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.138032 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8c1052c9-fa99-4f24-8fff-923dc489c08d-client-ca\") pod \"route-controller-manager-6576b87f9c-4bgvx\" (UID: \"8c1052c9-fa99-4f24-8fff-923dc489c08d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4bgvx" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.138038 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd11fb82-1556-4769-a1cc-11589b905b3f-serving-cert\") pod \"openshift-config-operator-7777fb866f-26mg6\" (UID: \"dd11fb82-1556-4769-a1cc-11589b905b3f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-26mg6" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.138075 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0f76f7be-c8e4-4943-81f2-8e416e747aec-etcd-ca\") pod \"etcd-operator-b45778765-kc8qq\" (UID: \"0f76f7be-c8e4-4943-81f2-8e416e747aec\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kc8qq" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.138053 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e846eda6-431b-4fdf-98dc-80e3fc6b122f-serving-cert\") pod \"apiserver-76f77b778f-xx85g\" (UID: \"e846eda6-431b-4fdf-98dc-80e3fc6b122f\") " pod="openshift-apiserver/apiserver-76f77b778f-xx85g" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.138126 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f76f7be-c8e4-4943-81f2-8e416e747aec-config\") pod \"etcd-operator-b45778765-kc8qq\" (UID: \"0f76f7be-c8e4-4943-81f2-8e416e747aec\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kc8qq" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.138230 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/51e9f48f-0b01-4f2b-8d00-77fd91f6c3b0-images\") pod \"machine-config-operator-74547568cd-2ldzj\" (UID: \"51e9f48f-0b01-4f2b-8d00-77fd91f6c3b0\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2ldzj" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.138256 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gq8s\" (UniqueName: \"kubernetes.io/projected/ed5ac3ea-71b9-41f5-b629-ec016b6ef3c7-kube-api-access-6gq8s\") pod \"openshift-apiserver-operator-796bbdcf4f-k4nkz\" (UID: \"ed5ac3ea-71b9-41f5-b629-ec016b6ef3c7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-k4nkz" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.138284 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/011a7c6a-048f-4db1-ac1f-259b44dd28bc-signing-cabundle\") pod \"service-ca-9c57cc56f-skm6x\" (UID: \"011a7c6a-048f-4db1-ac1f-259b44dd28bc\") " pod="openshift-service-ca/service-ca-9c57cc56f-skm6x" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.138308 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d212083-766e-4b88-ba08-8570f05f6c94-config-volume\") pod \"collect-profiles-29422995-knlb5\" (UID: \"8d212083-766e-4b88-ba08-8570f05f6c94\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422995-knlb5" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.138329 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee6ce5dd-6c0d-4acd-b795-6cb930770bec-serving-cert\") pod \"service-ca-operator-777779d784-6hpgg\" (UID: \"ee6ce5dd-6c0d-4acd-b795-6cb930770bec\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6hpgg" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.138351 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0f76f7be-c8e4-4943-81f2-8e416e747aec-etcd-client\") pod \"etcd-operator-b45778765-kc8qq\" (UID: \"0f76f7be-c8e4-4943-81f2-8e416e747aec\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kc8qq" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.138370 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d003362b-b35a-4f18-b387-62ec1490321a-service-ca-bundle\") pod \"authentication-operator-69f744f599-68s5b\" (UID: \"d003362b-b35a-4f18-b387-62ec1490321a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-68s5b" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.138388 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e846eda6-431b-4fdf-98dc-80e3fc6b122f-trusted-ca-bundle\") pod \"apiserver-76f77b778f-xx85g\" (UID: \"e846eda6-431b-4fdf-98dc-80e3fc6b122f\") " pod="openshift-apiserver/apiserver-76f77b778f-xx85g" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.138408 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bh2t\" (UniqueName: \"kubernetes.io/projected/ee6ce5dd-6c0d-4acd-b795-6cb930770bec-kube-api-access-2bh2t\") pod \"service-ca-operator-777779d784-6hpgg\" (UID: \"ee6ce5dd-6c0d-4acd-b795-6cb930770bec\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6hpgg" Dec 10 15:25:39 crc 
kubenswrapper[4755]: I1210 15:25:39.138434 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcr9j\" (UniqueName: \"kubernetes.io/projected/22368f6c-56f1-4ef1-bada-fcad04f7b8a4-kube-api-access-fcr9j\") pod \"packageserver-d55dfcdfc-vnzg7\" (UID: \"22368f6c-56f1-4ef1-bada-fcad04f7b8a4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vnzg7" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.138455 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/d91d7b48-9096-4f3e-8260-2d762173eb80-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-sdrvv\" (UID: \"d91d7b48-9096-4f3e-8260-2d762173eb80\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sdrvv" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.138547 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0f76f7be-c8e4-4943-81f2-8e416e747aec-etcd-service-ca\") pod \"etcd-operator-b45778765-kc8qq\" (UID: \"0f76f7be-c8e4-4943-81f2-8e416e747aec\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kc8qq" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.138570 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4e98c44e-5a60-49a0-9186-2367509dda97-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-m9ntd\" (UID: \"4e98c44e-5a60-49a0-9186-2367509dda97\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m9ntd" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.138635 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.138748 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d003362b-b35a-4f18-b387-62ec1490321a-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-68s5b\" (UID: \"d003362b-b35a-4f18-b387-62ec1490321a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-68s5b" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.139375 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/dd11fb82-1556-4769-a1cc-11589b905b3f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-26mg6\" (UID: \"dd11fb82-1556-4769-a1cc-11589b905b3f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-26mg6" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.139877 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-n6qb5"] Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.141102 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e846eda6-431b-4fdf-98dc-80e3fc6b122f-etcd-serving-ca\") pod \"apiserver-76f77b778f-xx85g\" (UID: \"e846eda6-431b-4fdf-98dc-80e3fc6b122f\") " pod="openshift-apiserver/apiserver-76f77b778f-xx85g" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.141572 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c1052c9-fa99-4f24-8fff-923dc489c08d-serving-cert\") pod \"route-controller-manager-6576b87f9c-4bgvx\" (UID: \"8c1052c9-fa99-4f24-8fff-923dc489c08d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4bgvx" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.141720 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/24e3bc3c-7e93-4c91-b0a2-85877004fafc-console-serving-cert\") pod \"console-f9d7485db-n6qb5\" (UID: \"24e3bc3c-7e93-4c91-b0a2-85877004fafc\") " pod="openshift-console/console-f9d7485db-n6qb5" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.142245 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e846eda6-431b-4fdf-98dc-80e3fc6b122f-node-pullsecrets\") pod \"apiserver-76f77b778f-xx85g\" (UID: \"e846eda6-431b-4fdf-98dc-80e3fc6b122f\") " pod="openshift-apiserver/apiserver-76f77b778f-xx85g" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.142798 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e846eda6-431b-4fdf-98dc-80e3fc6b122f-serving-cert\") pod \"apiserver-76f77b778f-xx85g\" (UID: \"e846eda6-431b-4fdf-98dc-80e3fc6b122f\") " pod="openshift-apiserver/apiserver-76f77b778f-xx85g" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.142969 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/228e9f52-aead-4cf5-af32-8b0b3aec8cf4-images\") pod \"machine-api-operator-5694c8668f-n66x6\" (UID: \"228e9f52-aead-4cf5-af32-8b0b3aec8cf4\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-n66x6" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.143148 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0f76f7be-c8e4-4943-81f2-8e416e747aec-etcd-service-ca\") pod \"etcd-operator-b45778765-kc8qq\" (UID: \"0f76f7be-c8e4-4943-81f2-8e416e747aec\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kc8qq" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.143656 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c1052c9-fa99-4f24-8fff-923dc489c08d-config\") pod \"route-controller-manager-6576b87f9c-4bgvx\" (UID: \"8c1052c9-fa99-4f24-8fff-923dc489c08d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4bgvx" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.143689 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d003362b-b35a-4f18-b387-62ec1490321a-serving-cert\") pod \"authentication-operator-69f744f599-68s5b\" (UID: \"d003362b-b35a-4f18-b387-62ec1490321a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-68s5b" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.144099 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94gk7\" (UniqueName: \"kubernetes.io/projected/27f3b602-6196-4cbc-bf90-e695403d20c7-kube-api-access-94gk7\") pod \"ingress-operator-5b745b69d9-j46lk\" (UID: \"27f3b602-6196-4cbc-bf90-e695403d20c7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-j46lk" Dec 10 15:25:39 crc 
kubenswrapper[4755]: I1210 15:25:39.143243 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e98c44e-5a60-49a0-9186-2367509dda97-config\") pod \"controller-manager-879f6c89f-m9ntd\" (UID: \"4e98c44e-5a60-49a0-9186-2367509dda97\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m9ntd" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.146183 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d40dd21f-096a-4dca-a313-566508e33dd3-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-jkj94\" (UID: \"d40dd21f-096a-4dca-a313-566508e33dd3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jkj94" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.146247 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2b2r7\" (UniqueName: \"kubernetes.io/projected/011a7c6a-048f-4db1-ac1f-259b44dd28bc-kube-api-access-2b2r7\") pod \"service-ca-9c57cc56f-skm6x\" (UID: \"011a7c6a-048f-4db1-ac1f-259b44dd28bc\") " pod="openshift-service-ca/service-ca-9c57cc56f-skm6x" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.146789 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/22368f6c-56f1-4ef1-bada-fcad04f7b8a4-tmpfs\") pod \"packageserver-d55dfcdfc-vnzg7\" (UID: \"22368f6c-56f1-4ef1-bada-fcad04f7b8a4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vnzg7" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.146874 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4e98c44e-5a60-49a0-9186-2367509dda97-client-ca\") pod \"controller-manager-879f6c89f-m9ntd\" (UID: \"4e98c44e-5a60-49a0-9186-2367509dda97\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m9ntd" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.147025 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d40dd21f-096a-4dca-a313-566508e33dd3-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-jkj94\" (UID: \"d40dd21f-096a-4dca-a313-566508e33dd3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jkj94" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.147297 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcbtb\" (UniqueName: \"kubernetes.io/projected/e846eda6-431b-4fdf-98dc-80e3fc6b122f-kube-api-access-mcbtb\") pod \"apiserver-76f77b778f-xx85g\" (UID: \"e846eda6-431b-4fdf-98dc-80e3fc6b122f\") " pod="openshift-apiserver/apiserver-76f77b778f-xx85g" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.147534 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwp84\" (UniqueName: \"kubernetes.io/projected/4e98c44e-5a60-49a0-9186-2367509dda97-kube-api-access-rwp84\") pod \"controller-manager-879f6c89f-m9ntd\" (UID: \"4e98c44e-5a60-49a0-9186-2367509dda97\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m9ntd" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.147568 4755 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xb9ll\" (UniqueName: \"kubernetes.io/projected/8d212083-766e-4b88-ba08-8570f05f6c94-kube-api-access-xb9ll\") pod \"collect-profiles-29422995-knlb5\" (UID: \"8d212083-766e-4b88-ba08-8570f05f6c94\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422995-knlb5" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.147775 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f76f7be-c8e4-4943-81f2-8e416e747aec-serving-cert\") pod \"etcd-operator-b45778765-kc8qq\" (UID: \"0f76f7be-c8e4-4943-81f2-8e416e747aec\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kc8qq" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.147907 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed5ac3ea-71b9-41f5-b629-ec016b6ef3c7-config\") pod \"openshift-apiserver-operator-796bbdcf4f-k4nkz\" (UID: \"ed5ac3ea-71b9-41f5-b629-ec016b6ef3c7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-k4nkz" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.148152 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ee5f60d7-e2f5-4900-b238-e4ef9acf1de4-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-8vttb\" (UID: \"ee5f60d7-e2f5-4900-b238-e4ef9acf1de4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8vttb" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.148237 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c4a5cf0a-76df-4855-a52d-22dbc07e8f7a-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-qgjc4\" (UID: \"c4a5cf0a-76df-4855-a52d-22dbc07e8f7a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qgjc4" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.148341 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4a5cf0a-76df-4855-a52d-22dbc07e8f7a-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-qgjc4\" (UID: \"c4a5cf0a-76df-4855-a52d-22dbc07e8f7a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qgjc4" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.148392 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5h9k\" (UniqueName: \"kubernetes.io/projected/96d9c114-e67a-43a5-b081-4a1de76cc870-kube-api-access-f5h9k\") pod \"multus-admission-controller-857f4d67dd-b4kk7\" (UID: \"96d9c114-e67a-43a5-b081-4a1de76cc870\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-b4kk7" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.148560 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/51e9f48f-0b01-4f2b-8d00-77fd91f6c3b0-proxy-tls\") pod \"machine-config-operator-74547568cd-2ldzj\" (UID: \"51e9f48f-0b01-4f2b-8d00-77fd91f6c3b0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2ldzj" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.148663 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"audit\" (UniqueName: \"kubernetes.io/configmap/e846eda6-431b-4fdf-98dc-80e3fc6b122f-audit\") pod \"apiserver-76f77b778f-xx85g\" (UID: \"e846eda6-431b-4fdf-98dc-80e3fc6b122f\") " pod="openshift-apiserver/apiserver-76f77b778f-xx85g" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.148698 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/228e9f52-aead-4cf5-af32-8b0b3aec8cf4-config\") pod \"machine-api-operator-5694c8668f-n66x6\" (UID: \"228e9f52-aead-4cf5-af32-8b0b3aec8cf4\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-n66x6" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.148712 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d003362b-b35a-4f18-b387-62ec1490321a-service-ca-bundle\") pod \"authentication-operator-69f744f599-68s5b\" (UID: \"d003362b-b35a-4f18-b387-62ec1490321a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-68s5b" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.148761 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9xch\" (UniqueName: \"kubernetes.io/projected/24e3bc3c-7e93-4c91-b0a2-85877004fafc-kube-api-access-l9xch\") pod \"console-f9d7485db-n6qb5\" (UID: \"24e3bc3c-7e93-4c91-b0a2-85877004fafc\") " pod="openshift-console/console-f9d7485db-n6qb5" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.149108 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/24e3bc3c-7e93-4c91-b0a2-85877004fafc-console-oauth-config\") pod \"console-f9d7485db-n6qb5\" (UID: \"24e3bc3c-7e93-4c91-b0a2-85877004fafc\") " pod="openshift-console/console-f9d7485db-n6qb5" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.149151 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/96d9c114-e67a-43a5-b081-4a1de76cc870-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-b4kk7\" (UID: \"96d9c114-e67a-43a5-b081-4a1de76cc870\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-b4kk7" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.149184 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/64e1b92e-9035-4439-abdc-86205e68c591-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-5bvrp\" (UID: \"64e1b92e-9035-4439-abdc-86205e68c591\") " pod="openshift-marketplace/marketplace-operator-79b997595-5bvrp" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.149226 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/e846eda6-431b-4fdf-98dc-80e3fc6b122f-image-import-ca\") pod \"apiserver-76f77b778f-xx85g\" (UID: \"e846eda6-431b-4fdf-98dc-80e3fc6b122f\") " pod="openshift-apiserver/apiserver-76f77b778f-xx85g" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.149244 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e45db753-a360-4857-9bd4-11f898ede4dc-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-4fdzm\" (UID: \"e45db753-a360-4857-9bd4-11f898ede4dc\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4fdzm" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.149282 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/24e3bc3c-7e93-4c91-b0a2-85877004fafc-console-config\") pod \"console-f9d7485db-n6qb5\" (UID: \"24e3bc3c-7e93-4c91-b0a2-85877004fafc\") " pod="openshift-console/console-f9d7485db-n6qb5" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.149607 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d003362b-b35a-4f18-b387-62ec1490321a-config\") pod \"authentication-operator-69f744f599-68s5b\" (UID: \"d003362b-b35a-4f18-b387-62ec1490321a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-68s5b" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.149812 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8h5xr\" (UniqueName: \"kubernetes.io/projected/8c1052c9-fa99-4f24-8fff-923dc489c08d-kube-api-access-8h5xr\") pod \"route-controller-manager-6576b87f9c-4bgvx\" (UID: \"8c1052c9-fa99-4f24-8fff-923dc489c08d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4bgvx" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.149927 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tvcd\" (UniqueName: \"kubernetes.io/projected/ee5f60d7-e2f5-4900-b238-e4ef9acf1de4-kube-api-access-4tvcd\") pod \"cluster-image-registry-operator-dc59b4c8b-8vttb\" (UID: \"ee5f60d7-e2f5-4900-b238-e4ef9acf1de4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8vttb" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.149984 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e846eda6-431b-4fdf-98dc-80e3fc6b122f-config\") pod \"apiserver-76f77b778f-xx85g\" (UID: \"e846eda6-431b-4fdf-98dc-80e3fc6b122f\") " pod="openshift-apiserver/apiserver-76f77b778f-xx85g" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.150051 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.150084 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9w25\" (UniqueName: \"kubernetes.io/projected/228e9f52-aead-4cf5-af32-8b0b3aec8cf4-kube-api-access-l9w25\") pod \"machine-api-operator-5694c8668f-n66x6\" (UID: \"228e9f52-aead-4cf5-af32-8b0b3aec8cf4\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-n66x6" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.150168 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/24e3bc3c-7e93-4c91-b0a2-85877004fafc-console-config\") pod \"console-f9d7485db-n6qb5\" (UID: \"24e3bc3c-7e93-4c91-b0a2-85877004fafc\") " pod="openshift-console/console-f9d7485db-n6qb5" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.150646 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d003362b-b35a-4f18-b387-62ec1490321a-config\") pod \"authentication-operator-69f744f599-68s5b\" (UID: \"d003362b-b35a-4f18-b387-62ec1490321a\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-68s5b" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.150909 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4e98c44e-5a60-49a0-9186-2367509dda97-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-m9ntd\" (UID: \"4e98c44e-5a60-49a0-9186-2367509dda97\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m9ntd" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.151426 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0f76f7be-c8e4-4943-81f2-8e416e747aec-etcd-client\") pod \"etcd-operator-b45778765-kc8qq\" (UID: \"0f76f7be-c8e4-4943-81f2-8e416e747aec\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kc8qq" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.151463 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e846eda6-431b-4fdf-98dc-80e3fc6b122f-etcd-client\") pod \"apiserver-76f77b778f-xx85g\" (UID: \"e846eda6-431b-4fdf-98dc-80e3fc6b122f\") " pod="openshift-apiserver/apiserver-76f77b778f-xx85g" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.151508 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-6hpgg"] Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.151861 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed5ac3ea-71b9-41f5-b629-ec016b6ef3c7-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-k4nkz\" (UID: \"ed5ac3ea-71b9-41f5-b629-ec016b6ef3c7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-k4nkz" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.152325 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed5ac3ea-71b9-41f5-b629-ec016b6ef3c7-config\") pod \"openshift-apiserver-operator-796bbdcf4f-k4nkz\" (UID: \"ed5ac3ea-71b9-41f5-b629-ec016b6ef3c7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-k4nkz" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.152446 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/228e9f52-aead-4cf5-af32-8b0b3aec8cf4-config\") pod \"machine-api-operator-5694c8668f-n66x6\" (UID: \"228e9f52-aead-4cf5-af32-8b0b3aec8cf4\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-n66x6" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.152887 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d40dd21f-096a-4dca-a313-566508e33dd3-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-jkj94\" (UID: \"d40dd21f-096a-4dca-a313-566508e33dd3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jkj94" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.153000 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8c1052c9-fa99-4f24-8fff-923dc489c08d-client-ca\") pod \"route-controller-manager-6576b87f9c-4bgvx\" (UID: \"8c1052c9-fa99-4f24-8fff-923dc489c08d\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4bgvx" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.153024 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jkj94"] Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.153030 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e846eda6-431b-4fdf-98dc-80e3fc6b122f-trusted-ca-bundle\") pod \"apiserver-76f77b778f-xx85g\" (UID: \"e846eda6-431b-4fdf-98dc-80e3fc6b122f\") " pod="openshift-apiserver/apiserver-76f77b778f-xx85g" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.153254 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4e98c44e-5a60-49a0-9186-2367509dda97-client-ca\") pod \"controller-manager-879f6c89f-m9ntd\" (UID: \"4e98c44e-5a60-49a0-9186-2367509dda97\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m9ntd" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.153335 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e846eda6-431b-4fdf-98dc-80e3fc6b122f-encryption-config\") pod \"apiserver-76f77b778f-xx85g\" (UID: \"e846eda6-431b-4fdf-98dc-80e3fc6b122f\") " pod="openshift-apiserver/apiserver-76f77b778f-xx85g" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.153420 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e846eda6-431b-4fdf-98dc-80e3fc6b122f-config\") pod \"apiserver-76f77b778f-xx85g\" (UID: \"e846eda6-431b-4fdf-98dc-80e3fc6b122f\") " pod="openshift-apiserver/apiserver-76f77b778f-xx85g" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.153657 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/ee5f60d7-e2f5-4900-b238-e4ef9acf1de4-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-8vttb\" (UID: \"ee5f60d7-e2f5-4900-b238-e4ef9acf1de4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8vttb" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.154071 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5bvrp"] Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.155227 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f76f7be-c8e4-4943-81f2-8e416e747aec-serving-cert\") pod \"etcd-operator-b45778765-kc8qq\" (UID: \"0f76f7be-c8e4-4943-81f2-8e416e747aec\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kc8qq" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.155433 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/24e3bc3c-7e93-4c91-b0a2-85877004fafc-console-oauth-config\") pod \"console-f9d7485db-n6qb5\" (UID: \"24e3bc3c-7e93-4c91-b0a2-85877004fafc\") " pod="openshift-console/console-f9d7485db-n6qb5" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.155864 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d40dd21f-096a-4dca-a313-566508e33dd3-serving-cert\") pod 
\"kube-storage-version-migrator-operator-b67b599dd-jkj94\" (UID: \"d40dd21f-096a-4dca-a313-566508e33dd3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jkj94" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.155902 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qgjc4"] Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.157224 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-2ldzj"] Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.159696 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4fdzm"] Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.161178 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-sw7vc"] Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.162646 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4bgvx"] Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.164160 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-k4nkz"] Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.165432 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-m9ntd"] Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.166952 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-skm6x"] Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.168536 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-gbvgh"] Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.168649 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.170059 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29422995-knlb5"] Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.171109 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7svz5"] Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.172830 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8vttb"] Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.174568 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-26mg6"] Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.175743 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vnzg7"] Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.177764 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sdrvv"] Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.179351 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g5hs8"] Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.180442 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-wl9hp"] Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.181683 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-kc8qq"] Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.181791 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-wl9hp" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.182651 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dhh5x"] Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.183633 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-rbjcx"] Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.184987 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-j46lk"] Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.185136 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-rbjcx" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.185748 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-wl9hp"] Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.186789 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cvrkm"] Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.193421 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.196668 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-b4kk7"] Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.197352 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/51e9f48f-0b01-4f2b-8d00-77fd91f6c3b0-proxy-tls\") pod \"machine-config-operator-74547568cd-2ldzj\" (UID: \"51e9f48f-0b01-4f2b-8d00-77fd91f6c3b0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2ldzj" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.198652 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-plc7w"] Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.200070 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7r8q8"] Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.201162 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-vhv9h"] Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.202499 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-mqq47"] Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.204136 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-rbjcx"] Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.205553 4755 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-ingress-canary/ingress-canary-pw2tq"] Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.206500 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pw2tq" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.208094 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.208183 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-pw2tq"] Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.209631 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/51e9f48f-0b01-4f2b-8d00-77fd91f6c3b0-images\") pod \"machine-config-operator-74547568cd-2ldzj\" (UID: \"51e9f48f-0b01-4f2b-8d00-77fd91f6c3b0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2ldzj" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.248610 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.251138 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d5f332b-9d6b-40c2-8e63-47aa309ea740-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-7svz5\" (UID: \"6d5f332b-9d6b-40c2-8e63-47aa309ea740\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7svz5" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.251181 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d5f332b-9d6b-40c2-8e63-47aa309ea740-config\") pod \"kube-apiserver-operator-766d6c64bb-7svz5\" (UID: \"6d5f332b-9d6b-40c2-8e63-47aa309ea740\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7svz5" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.251201 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e45db753-a360-4857-9bd4-11f898ede4dc-config\") pod \"kube-controller-manager-operator-78b949d7b-4fdzm\" (UID: \"e45db753-a360-4857-9bd4-11f898ede4dc\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4fdzm" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.251233 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/22368f6c-56f1-4ef1-bada-fcad04f7b8a4-apiservice-cert\") pod \"packageserver-d55dfcdfc-vnzg7\" (UID: \"22368f6c-56f1-4ef1-bada-fcad04f7b8a4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vnzg7" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.251412 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/dd9b14b8-6d7c-4eeb-9748-a2e99daa4293-default-certificate\") pod \"router-default-5444994796-ts8wv\" (UID: \"dd9b14b8-6d7c-4eeb-9748-a2e99daa4293\") " pod="openshift-ingress/router-default-5444994796-ts8wv" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.251767 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/94689167-954d-4350-a1b8-e6125e32bd1f-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-g5hs8\" (UID: \"94689167-954d-4350-a1b8-e6125e32bd1f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g5hs8" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.251904 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e45db753-a360-4857-9bd4-11f898ede4dc-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-4fdzm\" (UID: \"e45db753-a360-4857-9bd4-11f898ede4dc\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4fdzm" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.251936 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/011a7c6a-048f-4db1-ac1f-259b44dd28bc-signing-key\") pod \"service-ca-9c57cc56f-skm6x\" (UID: \"011a7c6a-048f-4db1-ac1f-259b44dd28bc\") " pod="openshift-service-ca/service-ca-9c57cc56f-skm6x" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.252070 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3fd6e72e-c555-446b-ad32-bf71e8c1be54-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-dhh5x\" (UID: \"3fd6e72e-c555-446b-ad32-bf71e8c1be54\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dhh5x" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.252102 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9gpg\" (UniqueName: \"kubernetes.io/projected/d91d7b48-9096-4f3e-8260-2d762173eb80-kube-api-access-f9gpg\") pod \"control-plane-machine-set-operator-78cbb6b69f-sdrvv\" (UID: \"d91d7b48-9096-4f3e-8260-2d762173eb80\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sdrvv" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.252124 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/64e1b92e-9035-4439-abdc-86205e68c591-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-5bvrp\" (UID: \"64e1b92e-9035-4439-abdc-86205e68c591\") " pod="openshift-marketplace/marketplace-operator-79b997595-5bvrp" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.252143 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/30281ea2-4ba0-44c5-981d-8961073236a8-metrics-tls\") pod \"dns-default-wl9hp\" (UID: \"30281ea2-4ba0-44c5-981d-8961073236a8\") " pod="openshift-dns/dns-default-wl9hp" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.252170 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/27f3b602-6196-4cbc-bf90-e695403d20c7-bound-sa-token\") pod \"ingress-operator-5b745b69d9-j46lk\" (UID: \"27f3b602-6196-4cbc-bf90-e695403d20c7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-j46lk" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.252339 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/011a7c6a-048f-4db1-ac1f-259b44dd28bc-signing-cabundle\") pod \"service-ca-9c57cc56f-skm6x\" (UID: \"011a7c6a-048f-4db1-ac1f-259b44dd28bc\") " pod="openshift-service-ca/service-ca-9c57cc56f-skm6x" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.252389 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d212083-766e-4b88-ba08-8570f05f6c94-config-volume\") pod \"collect-profiles-29422995-knlb5\" (UID: \"8d212083-766e-4b88-ba08-8570f05f6c94\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422995-knlb5" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.252454 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/30281ea2-4ba0-44c5-981d-8961073236a8-config-volume\") pod \"dns-default-wl9hp\" (UID: \"30281ea2-4ba0-44c5-981d-8961073236a8\") " pod="openshift-dns/dns-default-wl9hp" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.252488 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee6ce5dd-6c0d-4acd-b795-6cb930770bec-serving-cert\") pod \"service-ca-operator-777779d784-6hpgg\" (UID: \"ee6ce5dd-6c0d-4acd-b795-6cb930770bec\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6hpgg" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.252554 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bh2t\" (UniqueName: \"kubernetes.io/projected/ee6ce5dd-6c0d-4acd-b795-6cb930770bec-kube-api-access-2bh2t\") pod \"service-ca-operator-777779d784-6hpgg\" (UID: \"ee6ce5dd-6c0d-4acd-b795-6cb930770bec\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6hpgg" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.252581 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcr9j\" (UniqueName: \"kubernetes.io/projected/22368f6c-56f1-4ef1-bada-fcad04f7b8a4-kube-api-access-fcr9j\") pod \"packageserver-d55dfcdfc-vnzg7\" (UID: \"22368f6c-56f1-4ef1-bada-fcad04f7b8a4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vnzg7" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.252601 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/d91d7b48-9096-4f3e-8260-2d762173eb80-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-sdrvv\" (UID: \"d91d7b48-9096-4f3e-8260-2d762173eb80\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sdrvv" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.252646 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94gk7\" (UniqueName: \"kubernetes.io/projected/27f3b602-6196-4cbc-bf90-e695403d20c7-kube-api-access-94gk7\") pod \"ingress-operator-5b745b69d9-j46lk\" (UID: \"27f3b602-6196-4cbc-bf90-e695403d20c7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-j46lk" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.252676 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2b2r7\" (UniqueName: \"kubernetes.io/projected/011a7c6a-048f-4db1-ac1f-259b44dd28bc-kube-api-access-2b2r7\") pod \"service-ca-9c57cc56f-skm6x\" (UID: 
\"011a7c6a-048f-4db1-ac1f-259b44dd28bc\") " pod="openshift-service-ca/service-ca-9c57cc56f-skm6x" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.252696 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp8j7\" (UniqueName: \"kubernetes.io/projected/30281ea2-4ba0-44c5-981d-8961073236a8-kube-api-access-vp8j7\") pod \"dns-default-wl9hp\" (UID: \"30281ea2-4ba0-44c5-981d-8961073236a8\") " pod="openshift-dns/dns-default-wl9hp" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.252717 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/22368f6c-56f1-4ef1-bada-fcad04f7b8a4-tmpfs\") pod \"packageserver-d55dfcdfc-vnzg7\" (UID: \"22368f6c-56f1-4ef1-bada-fcad04f7b8a4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vnzg7" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.252770 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xb9ll\" (UniqueName: \"kubernetes.io/projected/8d212083-766e-4b88-ba08-8570f05f6c94-kube-api-access-xb9ll\") pod \"collect-profiles-29422995-knlb5\" (UID: \"8d212083-766e-4b88-ba08-8570f05f6c94\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422995-knlb5" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.252829 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5h9k\" (UniqueName: \"kubernetes.io/projected/96d9c114-e67a-43a5-b081-4a1de76cc870-kube-api-access-f5h9k\") pod \"multus-admission-controller-857f4d67dd-b4kk7\" (UID: \"96d9c114-e67a-43a5-b081-4a1de76cc870\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-b4kk7" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.252859 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e45db753-a360-4857-9bd4-11f898ede4dc-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-4fdzm\" (UID: \"e45db753-a360-4857-9bd4-11f898ede4dc\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4fdzm" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.252885 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/96d9c114-e67a-43a5-b081-4a1de76cc870-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-b4kk7\" (UID: \"96d9c114-e67a-43a5-b081-4a1de76cc870\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-b4kk7" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.252938 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/64e1b92e-9035-4439-abdc-86205e68c591-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-5bvrp\" (UID: \"64e1b92e-9035-4439-abdc-86205e68c591\") " pod="openshift-marketplace/marketplace-operator-79b997595-5bvrp" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.252979 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9z9hd\" (UniqueName: \"kubernetes.io/projected/0470027c-6ca8-4404-a366-997bf288e1d0-kube-api-access-9z9hd\") pod \"csi-hostpathplugin-rbjcx\" (UID: \"0470027c-6ca8-4404-a366-997bf288e1d0\") " pod="hostpath-provisioner/csi-hostpathplugin-rbjcx" Dec 10 15:25:39 crc 
kubenswrapper[4755]: I1210 15:25:39.253024 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/dd9b14b8-6d7c-4eeb-9748-a2e99daa4293-stats-auth\") pod \"router-default-5444994796-ts8wv\" (UID: \"dd9b14b8-6d7c-4eeb-9748-a2e99daa4293\") " pod="openshift-ingress/router-default-5444994796-ts8wv" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.253041 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/22368f6c-56f1-4ef1-bada-fcad04f7b8a4-webhook-cert\") pod \"packageserver-d55dfcdfc-vnzg7\" (UID: \"22368f6c-56f1-4ef1-bada-fcad04f7b8a4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vnzg7" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.253059 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee6ce5dd-6c0d-4acd-b795-6cb930770bec-config\") pod \"service-ca-operator-777779d784-6hpgg\" (UID: \"ee6ce5dd-6c0d-4acd-b795-6cb930770bec\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6hpgg" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.253095 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0470027c-6ca8-4404-a366-997bf288e1d0-registration-dir\") pod \"csi-hostpathplugin-rbjcx\" (UID: \"0470027c-6ca8-4404-a366-997bf288e1d0\") " pod="hostpath-provisioner/csi-hostpathplugin-rbjcx" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.253115 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94689167-954d-4350-a1b8-e6125e32bd1f-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-g5hs8\" (UID: \"94689167-954d-4350-a1b8-e6125e32bd1f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g5hs8" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.253137 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/27f3b602-6196-4cbc-bf90-e695403d20c7-metrics-tls\") pod \"ingress-operator-5b745b69d9-j46lk\" (UID: \"27f3b602-6196-4cbc-bf90-e695403d20c7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-j46lk" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.253165 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/0470027c-6ca8-4404-a366-997bf288e1d0-mountpoint-dir\") pod \"csi-hostpathplugin-rbjcx\" (UID: \"0470027c-6ca8-4404-a366-997bf288e1d0\") " pod="hostpath-provisioner/csi-hostpathplugin-rbjcx" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.253194 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd9b14b8-6d7c-4eeb-9748-a2e99daa4293-service-ca-bundle\") pod \"router-default-5444994796-ts8wv\" (UID: \"dd9b14b8-6d7c-4eeb-9748-a2e99daa4293\") " pod="openshift-ingress/router-default-5444994796-ts8wv" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.253354 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: 
\"kubernetes.io/host-path/0470027c-6ca8-4404-a366-997bf288e1d0-plugins-dir\") pod \"csi-hostpathplugin-rbjcx\" (UID: \"0470027c-6ca8-4404-a366-997bf288e1d0\") " pod="hostpath-provisioner/csi-hostpathplugin-rbjcx" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.253427 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8d212083-766e-4b88-ba08-8570f05f6c94-secret-volume\") pod \"collect-profiles-29422995-knlb5\" (UID: \"8d212083-766e-4b88-ba08-8570f05f6c94\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422995-knlb5" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.253453 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/22368f6c-56f1-4ef1-bada-fcad04f7b8a4-tmpfs\") pod \"packageserver-d55dfcdfc-vnzg7\" (UID: \"22368f6c-56f1-4ef1-bada-fcad04f7b8a4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vnzg7" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.253495 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6d5f332b-9d6b-40c2-8e63-47aa309ea740-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-7svz5\" (UID: \"6d5f332b-9d6b-40c2-8e63-47aa309ea740\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7svz5" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.253527 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/0470027c-6ca8-4404-a366-997bf288e1d0-csi-data-dir\") pod \"csi-hostpathplugin-rbjcx\" (UID: \"0470027c-6ca8-4404-a366-997bf288e1d0\") " pod="hostpath-provisioner/csi-hostpathplugin-rbjcx" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.253586 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcvkn\" (UniqueName: \"kubernetes.io/projected/3fd6e72e-c555-446b-ad32-bf71e8c1be54-kube-api-access-vcvkn\") pod \"package-server-manager-789f6589d5-dhh5x\" (UID: \"3fd6e72e-c555-446b-ad32-bf71e8c1be54\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dhh5x" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.253624 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbmrd\" (UniqueName: \"kubernetes.io/projected/e0d76e44-8053-4afa-b974-bb3f945c9c23-kube-api-access-pbmrd\") pod \"dns-operator-744455d44c-vhv9h\" (UID: \"e0d76e44-8053-4afa-b974-bb3f945c9c23\") " pod="openshift-dns-operator/dns-operator-744455d44c-vhv9h" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.253660 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdqp2\" (UniqueName: \"kubernetes.io/projected/94689167-954d-4350-a1b8-e6125e32bd1f-kube-api-access-hdqp2\") pod \"openshift-controller-manager-operator-756b6f6bc6-g5hs8\" (UID: \"94689167-954d-4350-a1b8-e6125e32bd1f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g5hs8" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.253718 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0470027c-6ca8-4404-a366-997bf288e1d0-socket-dir\") pod \"csi-hostpathplugin-rbjcx\" (UID: 
\"0470027c-6ca8-4404-a366-997bf288e1d0\") " pod="hostpath-provisioner/csi-hostpathplugin-rbjcx" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.253759 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqmw2\" (UniqueName: \"kubernetes.io/projected/64e1b92e-9035-4439-abdc-86205e68c591-kube-api-access-zqmw2\") pod \"marketplace-operator-79b997595-5bvrp\" (UID: \"64e1b92e-9035-4439-abdc-86205e68c591\") " pod="openshift-marketplace/marketplace-operator-79b997595-5bvrp" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.253823 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e0d76e44-8053-4afa-b974-bb3f945c9c23-metrics-tls\") pod \"dns-operator-744455d44c-vhv9h\" (UID: \"e0d76e44-8053-4afa-b974-bb3f945c9c23\") " pod="openshift-dns-operator/dns-operator-744455d44c-vhv9h" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.253852 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/27f3b602-6196-4cbc-bf90-e695403d20c7-trusted-ca\") pod \"ingress-operator-5b745b69d9-j46lk\" (UID: \"27f3b602-6196-4cbc-bf90-e695403d20c7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-j46lk" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.253871 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rw6tc\" (UniqueName: \"kubernetes.io/projected/dd9b14b8-6d7c-4eeb-9748-a2e99daa4293-kube-api-access-rw6tc\") pod \"router-default-5444994796-ts8wv\" (UID: \"dd9b14b8-6d7c-4eeb-9748-a2e99daa4293\") " pod="openshift-ingress/router-default-5444994796-ts8wv" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.253893 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78gsp\" (UniqueName: \"kubernetes.io/projected/460a4f34-c415-4a61-8877-7ad9d851c0e5-kube-api-access-78gsp\") pod \"migrator-59844c95c7-sw7vc\" (UID: \"460a4f34-c415-4a61-8877-7ad9d851c0e5\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-sw7vc" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.253931 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dd9b14b8-6d7c-4eeb-9748-a2e99daa4293-metrics-certs\") pod \"router-default-5444994796-ts8wv\" (UID: \"dd9b14b8-6d7c-4eeb-9748-a2e99daa4293\") " pod="openshift-ingress/router-default-5444994796-ts8wv" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.268500 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.289034 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.295938 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3fd6e72e-c555-446b-ad32-bf71e8c1be54-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-dhh5x\" (UID: \"3fd6e72e-c555-446b-ad32-bf71e8c1be54\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dhh5x" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.308319 4755 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.328573 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.348675 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.354761 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9z9hd\" (UniqueName: \"kubernetes.io/projected/0470027c-6ca8-4404-a366-997bf288e1d0-kube-api-access-9z9hd\") pod \"csi-hostpathplugin-rbjcx\" (UID: \"0470027c-6ca8-4404-a366-997bf288e1d0\") " pod="hostpath-provisioner/csi-hostpathplugin-rbjcx" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.354835 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0470027c-6ca8-4404-a366-997bf288e1d0-registration-dir\") pod \"csi-hostpathplugin-rbjcx\" (UID: \"0470027c-6ca8-4404-a366-997bf288e1d0\") " pod="hostpath-provisioner/csi-hostpathplugin-rbjcx" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.354861 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94689167-954d-4350-a1b8-e6125e32bd1f-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-g5hs8\" (UID: \"94689167-954d-4350-a1b8-e6125e32bd1f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g5hs8" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.354887 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/0470027c-6ca8-4404-a366-997bf288e1d0-mountpoint-dir\") pod \"csi-hostpathplugin-rbjcx\" (UID: \"0470027c-6ca8-4404-a366-997bf288e1d0\") " pod="hostpath-provisioner/csi-hostpathplugin-rbjcx" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.354917 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/0470027c-6ca8-4404-a366-997bf288e1d0-plugins-dir\") pod \"csi-hostpathplugin-rbjcx\" (UID: \"0470027c-6ca8-4404-a366-997bf288e1d0\") " pod="hostpath-provisioner/csi-hostpathplugin-rbjcx" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.354958 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdqp2\" (UniqueName: \"kubernetes.io/projected/94689167-954d-4350-a1b8-e6125e32bd1f-kube-api-access-hdqp2\") pod \"openshift-controller-manager-operator-756b6f6bc6-g5hs8\" (UID: \"94689167-954d-4350-a1b8-e6125e32bd1f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g5hs8" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.354974 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/0470027c-6ca8-4404-a366-997bf288e1d0-csi-data-dir\") pod \"csi-hostpathplugin-rbjcx\" (UID: \"0470027c-6ca8-4404-a366-997bf288e1d0\") " pod="hostpath-provisioner/csi-hostpathplugin-rbjcx" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.354989 4755 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0470027c-6ca8-4404-a366-997bf288e1d0-socket-dir\") pod \"csi-hostpathplugin-rbjcx\" (UID: \"0470027c-6ca8-4404-a366-997bf288e1d0\") " pod="hostpath-provisioner/csi-hostpathplugin-rbjcx" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.355028 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/0470027c-6ca8-4404-a366-997bf288e1d0-mountpoint-dir\") pod \"csi-hostpathplugin-rbjcx\" (UID: \"0470027c-6ca8-4404-a366-997bf288e1d0\") " pod="hostpath-provisioner/csi-hostpathplugin-rbjcx" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.355085 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0470027c-6ca8-4404-a366-997bf288e1d0-registration-dir\") pod \"csi-hostpathplugin-rbjcx\" (UID: \"0470027c-6ca8-4404-a366-997bf288e1d0\") " pod="hostpath-provisioner/csi-hostpathplugin-rbjcx" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.355101 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0470027c-6ca8-4404-a366-997bf288e1d0-socket-dir\") pod \"csi-hostpathplugin-rbjcx\" (UID: \"0470027c-6ca8-4404-a366-997bf288e1d0\") " pod="hostpath-provisioner/csi-hostpathplugin-rbjcx" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.355142 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/0470027c-6ca8-4404-a366-997bf288e1d0-plugins-dir\") pod \"csi-hostpathplugin-rbjcx\" (UID: \"0470027c-6ca8-4404-a366-997bf288e1d0\") " pod="hostpath-provisioner/csi-hostpathplugin-rbjcx" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.355154 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94689167-954d-4350-a1b8-e6125e32bd1f-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-g5hs8\" (UID: \"94689167-954d-4350-a1b8-e6125e32bd1f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g5hs8" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.355243 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/30281ea2-4ba0-44c5-981d-8961073236a8-metrics-tls\") pod \"dns-default-wl9hp\" (UID: \"30281ea2-4ba0-44c5-981d-8961073236a8\") " pod="openshift-dns/dns-default-wl9hp" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.355266 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/0470027c-6ca8-4404-a366-997bf288e1d0-csi-data-dir\") pod \"csi-hostpathplugin-rbjcx\" (UID: \"0470027c-6ca8-4404-a366-997bf288e1d0\") " pod="hostpath-provisioner/csi-hostpathplugin-rbjcx" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.355388 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/30281ea2-4ba0-44c5-981d-8961073236a8-config-volume\") pod \"dns-default-wl9hp\" (UID: \"30281ea2-4ba0-44c5-981d-8961073236a8\") " pod="openshift-dns/dns-default-wl9hp" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.355495 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vp8j7\" (UniqueName: 
\"kubernetes.io/projected/30281ea2-4ba0-44c5-981d-8961073236a8-kube-api-access-vp8j7\") pod \"dns-default-wl9hp\" (UID: \"30281ea2-4ba0-44c5-981d-8961073236a8\") " pod="openshift-dns/dns-default-wl9hp" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.368083 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.372008 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4a5cf0a-76df-4855-a52d-22dbc07e8f7a-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-qgjc4\" (UID: \"c4a5cf0a-76df-4855-a52d-22dbc07e8f7a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qgjc4" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.388637 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.399766 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4a5cf0a-76df-4855-a52d-22dbc07e8f7a-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-qgjc4\" (UID: \"c4a5cf0a-76df-4855-a52d-22dbc07e8f7a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qgjc4" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.409129 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.415160 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d5f332b-9d6b-40c2-8e63-47aa309ea740-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-7svz5\" (UID: \"6d5f332b-9d6b-40c2-8e63-47aa309ea740\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7svz5" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.428154 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.448726 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.469112 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.472759 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d5f332b-9d6b-40c2-8e63-47aa309ea740-config\") pod \"kube-apiserver-operator-766d6c64bb-7svz5\" (UID: \"6d5f332b-9d6b-40c2-8e63-47aa309ea740\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7svz5" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.488572 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.508535 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 
10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.529408 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.561601 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctmxq\" (UniqueName: \"kubernetes.io/projected/e2f7713d-7734-477f-81df-093aaa83837f-kube-api-access-ctmxq\") pod \"machine-approver-56656f9798-zcwtl\" (UID: \"e2f7713d-7734-477f-81df-093aaa83837f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zcwtl" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.586429 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txplh\" (UniqueName: \"kubernetes.io/projected/9336d238-f0fb-430a-acfe-4aa5c888ebc8-kube-api-access-txplh\") pod \"cluster-samples-operator-665b6dd947-gvpnh\" (UID: \"9336d238-f0fb-430a-acfe-4aa5c888ebc8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gvpnh" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.588112 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.608408 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.628273 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.648232 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.656254 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/22368f6c-56f1-4ef1-bada-fcad04f7b8a4-apiservice-cert\") pod \"packageserver-d55dfcdfc-vnzg7\" (UID: \"22368f6c-56f1-4ef1-bada-fcad04f7b8a4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vnzg7" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.656390 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/22368f6c-56f1-4ef1-bada-fcad04f7b8a4-webhook-cert\") pod \"packageserver-d55dfcdfc-vnzg7\" (UID: \"22368f6c-56f1-4ef1-bada-fcad04f7b8a4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vnzg7" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.669064 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.688565 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.695097 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/011a7c6a-048f-4db1-ac1f-259b44dd28bc-signing-key\") pod \"service-ca-9c57cc56f-skm6x\" (UID: \"011a7c6a-048f-4db1-ac1f-259b44dd28bc\") " pod="openshift-service-ca/service-ca-9c57cc56f-skm6x" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.708977 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 10 
15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.714076 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/011a7c6a-048f-4db1-ac1f-259b44dd28bc-signing-cabundle\") pod \"service-ca-9c57cc56f-skm6x\" (UID: \"011a7c6a-048f-4db1-ac1f-259b44dd28bc\") " pod="openshift-service-ca/service-ca-9c57cc56f-skm6x" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.742732 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmscp\" (UniqueName: \"kubernetes.io/projected/72d8a3d3-6b10-4a2d-bb42-b96a0d6135a5-kube-api-access-bmscp\") pod \"console-operator-58897d9998-sxtxj\" (UID: \"72d8a3d3-6b10-4a2d-bb42-b96a0d6135a5\") " pod="openshift-console-operator/console-operator-58897d9998-sxtxj" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.749677 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.768852 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.811995 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.815256 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/64e1b92e-9035-4439-abdc-86205e68c591-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-5bvrp\" (UID: \"64e1b92e-9035-4439-abdc-86205e68c591\") " pod="openshift-marketplace/marketplace-operator-79b997595-5bvrp" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.815393 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.819885 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zcwtl" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.825870 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee6ce5dd-6c0d-4acd-b795-6cb930770bec-serving-cert\") pod \"service-ca-operator-777779d784-6hpgg\" (UID: \"ee6ce5dd-6c0d-4acd-b795-6cb930770bec\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6hpgg" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.828251 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 10 15:25:39 crc kubenswrapper[4755]: W1210 15:25:39.835370 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2f7713d_7734_477f_81df_093aaa83837f.slice/crio-383b75c3fc57dd2d2bcd36c1a2bbd594aaebe06750604fa55194997f630c30d2 WatchSource:0}: Error finding container 383b75c3fc57dd2d2bcd36c1a2bbd594aaebe06750604fa55194997f630c30d2: Status 404 returned error can't find the container with id 383b75c3fc57dd2d2bcd36c1a2bbd594aaebe06750604fa55194997f630c30d2 Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.848786 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.873072 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.874055 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/64e1b92e-9035-4439-abdc-86205e68c591-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-5bvrp\" (UID: \"64e1b92e-9035-4439-abdc-86205e68c591\") " pod="openshift-marketplace/marketplace-operator-79b997595-5bvrp" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.879207 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gvpnh" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.887758 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.908042 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.914507 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-sxtxj" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.914931 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee6ce5dd-6c0d-4acd-b795-6cb930770bec-config\") pod \"service-ca-operator-777779d784-6hpgg\" (UID: \"ee6ce5dd-6c0d-4acd-b795-6cb930770bec\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6hpgg" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.944516 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65szh\" (UniqueName: \"kubernetes.io/projected/7d2e3e9e-69f8-46f4-b825-ee369ab23de8-kube-api-access-65szh\") pod \"apiserver-7bbb656c7d-6khhb\" (UID: \"7d2e3e9e-69f8-46f4-b825-ee369ab23de8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6khhb" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.965877 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhg7z\" (UniqueName: \"kubernetes.io/projected/1645de9b-f227-4d9d-885f-ffd58e5bef69-kube-api-access-bhg7z\") pod \"oauth-openshift-558db77b4-f47gb\" (UID: \"1645de9b-f227-4d9d-885f-ffd58e5bef69\") " pod="openshift-authentication/oauth-openshift-558db77b4-f47gb" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.968926 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 10 15:25:39 crc kubenswrapper[4755]: I1210 15:25:39.988218 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.011885 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.019596 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e45db753-a360-4857-9bd4-11f898ede4dc-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-4fdzm\" (UID: \"e45db753-a360-4857-9bd4-11f898ede4dc\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4fdzm" Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.029013 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.042861 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/d91d7b48-9096-4f3e-8260-2d762173eb80-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-sdrvv\" (UID: \"d91d7b48-9096-4f3e-8260-2d762173eb80\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sdrvv" Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.048817 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.052775 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e45db753-a360-4857-9bd4-11f898ede4dc-config\") pod 
\"kube-controller-manager-operator-78b949d7b-4fdzm\" (UID: \"e45db753-a360-4857-9bd4-11f898ede4dc\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4fdzm" Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.065791 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6khhb" Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.066549 4755 request.go:700] Waited for 1.006221551s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcontrol-plane-machine-set-operator-dockercfg-k9rxt&limit=500&resourceVersion=0 Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.067860 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.088849 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.109003 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.116418 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/27f3b602-6196-4cbc-bf90-e695403d20c7-metrics-tls\") pod \"ingress-operator-5b745b69d9-j46lk\" (UID: \"27f3b602-6196-4cbc-bf90-e695403d20c7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-j46lk" Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.130084 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.153945 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.155353 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/27f3b602-6196-4cbc-bf90-e695403d20c7-trusted-ca\") pod \"ingress-operator-5b745b69d9-j46lk\" (UID: \"27f3b602-6196-4cbc-bf90-e695403d20c7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-j46lk" Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.168158 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.188899 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.193941 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d212083-766e-4b88-ba08-8570f05f6c94-config-volume\") pod \"collect-profiles-29422995-knlb5\" (UID: \"8d212083-766e-4b88-ba08-8570f05f6c94\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422995-knlb5" Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.206928 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-f47gb" Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.208570 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.216935 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8d212083-766e-4b88-ba08-8570f05f6c94-secret-volume\") pod \"collect-profiles-29422995-knlb5\" (UID: \"8d212083-766e-4b88-ba08-8570f05f6c94\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422995-knlb5" Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.228764 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.247938 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 10 15:25:40 crc kubenswrapper[4755]: E1210 15:25:40.252300 4755 secret.go:188] Couldn't get secret openshift-ingress/router-certs-default: failed to sync secret cache: timed out waiting for the condition Dec 10 15:25:40 crc kubenswrapper[4755]: E1210 15:25:40.252383 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dd9b14b8-6d7c-4eeb-9748-a2e99daa4293-default-certificate podName:dd9b14b8-6d7c-4eeb-9748-a2e99daa4293 nodeName:}" failed. No retries permitted until 2025-12-10 15:25:40.752363595 +0000 UTC m=+137.353247227 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-certificate" (UniqueName: "kubernetes.io/secret/dd9b14b8-6d7c-4eeb-9748-a2e99daa4293-default-certificate") pod "router-default-5444994796-ts8wv" (UID: "dd9b14b8-6d7c-4eeb-9748-a2e99daa4293") : failed to sync secret cache: timed out waiting for the condition Dec 10 15:25:40 crc kubenswrapper[4755]: E1210 15:25:40.253443 4755 configmap.go:193] Couldn't get configMap openshift-ingress/service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Dec 10 15:25:40 crc kubenswrapper[4755]: E1210 15:25:40.253525 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dd9b14b8-6d7c-4eeb-9748-a2e99daa4293-service-ca-bundle podName:dd9b14b8-6d7c-4eeb-9748-a2e99daa4293 nodeName:}" failed. No retries permitted until 2025-12-10 15:25:40.753507196 +0000 UTC m=+137.354390828 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/dd9b14b8-6d7c-4eeb-9748-a2e99daa4293-service-ca-bundle") pod "router-default-5444994796-ts8wv" (UID: "dd9b14b8-6d7c-4eeb-9748-a2e99daa4293") : failed to sync configmap cache: timed out waiting for the condition Dec 10 15:25:40 crc kubenswrapper[4755]: E1210 15:25:40.254123 4755 secret.go:188] Couldn't get secret openshift-multus/multus-admission-controller-secret: failed to sync secret cache: timed out waiting for the condition Dec 10 15:25:40 crc kubenswrapper[4755]: E1210 15:25:40.254171 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/96d9c114-e67a-43a5-b081-4a1de76cc870-webhook-certs podName:96d9c114-e67a-43a5-b081-4a1de76cc870 nodeName:}" failed. No retries permitted until 2025-12-10 15:25:40.754162212 +0000 UTC m=+137.355045904 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/96d9c114-e67a-43a5-b081-4a1de76cc870-webhook-certs") pod "multus-admission-controller-857f4d67dd-b4kk7" (UID: "96d9c114-e67a-43a5-b081-4a1de76cc870") : failed to sync secret cache: timed out waiting for the condition Dec 10 15:25:40 crc kubenswrapper[4755]: E1210 15:25:40.254194 4755 secret.go:188] Couldn't get secret openshift-dns-operator/metrics-tls: failed to sync secret cache: timed out waiting for the condition Dec 10 15:25:40 crc kubenswrapper[4755]: E1210 15:25:40.254224 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e0d76e44-8053-4afa-b974-bb3f945c9c23-metrics-tls podName:e0d76e44-8053-4afa-b974-bb3f945c9c23 nodeName:}" failed. No retries permitted until 2025-12-10 15:25:40.754215254 +0000 UTC m=+137.355098886 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/e0d76e44-8053-4afa-b974-bb3f945c9c23-metrics-tls") pod "dns-operator-744455d44c-vhv9h" (UID: "e0d76e44-8053-4afa-b974-bb3f945c9c23") : failed to sync secret cache: timed out waiting for the condition Dec 10 15:25:40 crc kubenswrapper[4755]: E1210 15:25:40.254279 4755 secret.go:188] Couldn't get secret openshift-ingress/router-metrics-certs-default: failed to sync secret cache: timed out waiting for the condition Dec 10 15:25:40 crc kubenswrapper[4755]: E1210 15:25:40.254326 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dd9b14b8-6d7c-4eeb-9748-a2e99daa4293-metrics-certs podName:dd9b14b8-6d7c-4eeb-9748-a2e99daa4293 nodeName:}" failed. No retries permitted until 2025-12-10 15:25:40.754315326 +0000 UTC m=+137.355199018 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dd9b14b8-6d7c-4eeb-9748-a2e99daa4293-metrics-certs") pod "router-default-5444994796-ts8wv" (UID: "dd9b14b8-6d7c-4eeb-9748-a2e99daa4293") : failed to sync secret cache: timed out waiting for the condition Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.258757 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/dd9b14b8-6d7c-4eeb-9748-a2e99daa4293-stats-auth\") pod \"router-default-5444994796-ts8wv\" (UID: \"dd9b14b8-6d7c-4eeb-9748-a2e99daa4293\") " pod="openshift-ingress/router-default-5444994796-ts8wv" Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.268601 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.290559 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.307637 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.329238 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.348599 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 10 15:25:40 crc kubenswrapper[4755]: E1210 15:25:40.356109 4755 secret.go:188] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: failed to sync secret cache: 
timed out waiting for the condition Dec 10 15:25:40 crc kubenswrapper[4755]: E1210 15:25:40.356186 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/94689167-954d-4350-a1b8-e6125e32bd1f-serving-cert podName:94689167-954d-4350-a1b8-e6125e32bd1f nodeName:}" failed. No retries permitted until 2025-12-10 15:25:40.856165078 +0000 UTC m=+137.457048700 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/94689167-954d-4350-a1b8-e6125e32bd1f-serving-cert") pod "openshift-controller-manager-operator-756b6f6bc6-g5hs8" (UID: "94689167-954d-4350-a1b8-e6125e32bd1f") : failed to sync secret cache: timed out waiting for the condition Dec 10 15:25:40 crc kubenswrapper[4755]: E1210 15:25:40.356184 4755 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: failed to sync configmap cache: timed out waiting for the condition Dec 10 15:25:40 crc kubenswrapper[4755]: E1210 15:25:40.356202 4755 secret.go:188] Couldn't get secret openshift-dns/dns-default-metrics-tls: failed to sync secret cache: timed out waiting for the condition Dec 10 15:25:40 crc kubenswrapper[4755]: E1210 15:25:40.356267 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/30281ea2-4ba0-44c5-981d-8961073236a8-config-volume podName:30281ea2-4ba0-44c5-981d-8961073236a8 nodeName:}" failed. No retries permitted until 2025-12-10 15:25:40.85625003 +0000 UTC m=+137.457133662 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/30281ea2-4ba0-44c5-981d-8961073236a8-config-volume") pod "dns-default-wl9hp" (UID: "30281ea2-4ba0-44c5-981d-8961073236a8") : failed to sync configmap cache: timed out waiting for the condition Dec 10 15:25:40 crc kubenswrapper[4755]: E1210 15:25:40.356279 4755 configmap.go:193] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: failed to sync configmap cache: timed out waiting for the condition Dec 10 15:25:40 crc kubenswrapper[4755]: E1210 15:25:40.356283 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/30281ea2-4ba0-44c5-981d-8961073236a8-metrics-tls podName:30281ea2-4ba0-44c5-981d-8961073236a8 nodeName:}" failed. No retries permitted until 2025-12-10 15:25:40.856277021 +0000 UTC m=+137.457160653 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/30281ea2-4ba0-44c5-981d-8961073236a8-metrics-tls") pod "dns-default-wl9hp" (UID: "30281ea2-4ba0-44c5-981d-8961073236a8") : failed to sync secret cache: timed out waiting for the condition Dec 10 15:25:40 crc kubenswrapper[4755]: E1210 15:25:40.356450 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/94689167-954d-4350-a1b8-e6125e32bd1f-config podName:94689167-954d-4350-a1b8-e6125e32bd1f nodeName:}" failed. No retries permitted until 2025-12-10 15:25:40.856388294 +0000 UTC m=+137.457271926 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/94689167-954d-4350-a1b8-e6125e32bd1f-config") pod "openshift-controller-manager-operator-756b6f6bc6-g5hs8" (UID: "94689167-954d-4350-a1b8-e6125e32bd1f") : failed to sync configmap cache: timed out waiting for the condition Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.367984 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.390406 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.408648 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.428299 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.448331 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.464184 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zcwtl" event={"ID":"e2f7713d-7734-477f-81df-093aaa83837f","Type":"ContainerStarted","Data":"383b75c3fc57dd2d2bcd36c1a2bbd594aaebe06750604fa55194997f630c30d2"} Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.468530 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.488985 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.508850 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.529503 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.552441 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.554424 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gvpnh"] Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.555503 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-sxtxj"] Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.556485 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-6khhb"] Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.559908 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-f47gb"] Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.568425 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 
15:25:40.589493 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.609526 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.630928 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.647995 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.667880 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.688626 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.708485 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.743862 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxncl\" (UniqueName: \"kubernetes.io/projected/0f76f7be-c8e4-4943-81f2-8e416e747aec-kube-api-access-cxncl\") pod \"etcd-operator-b45778765-kc8qq\" (UID: \"0f76f7be-c8e4-4943-81f2-8e416e747aec\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kc8qq" Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.763279 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncjkj\" (UniqueName: \"kubernetes.io/projected/51e9f48f-0b01-4f2b-8d00-77fd91f6c3b0-kube-api-access-ncjkj\") pod \"machine-config-operator-74547568cd-2ldzj\" (UID: \"51e9f48f-0b01-4f2b-8d00-77fd91f6c3b0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2ldzj" Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.774442 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.774920 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/96d9c114-e67a-43a5-b081-4a1de76cc870-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-b4kk7\" (UID: \"96d9c114-e67a-43a5-b081-4a1de76cc870\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-b4kk7" Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.774991 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd9b14b8-6d7c-4eeb-9748-a2e99daa4293-service-ca-bundle\") pod \"router-default-5444994796-ts8wv\" (UID: \"dd9b14b8-6d7c-4eeb-9748-a2e99daa4293\") " pod="openshift-ingress/router-default-5444994796-ts8wv" Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.775082 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e0d76e44-8053-4afa-b974-bb3f945c9c23-metrics-tls\") pod \"dns-operator-744455d44c-vhv9h\" (UID: \"e0d76e44-8053-4afa-b974-bb3f945c9c23\") " 
pod="openshift-dns-operator/dns-operator-744455d44c-vhv9h" Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.775107 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dd9b14b8-6d7c-4eeb-9748-a2e99daa4293-metrics-certs\") pod \"router-default-5444994796-ts8wv\" (UID: \"dd9b14b8-6d7c-4eeb-9748-a2e99daa4293\") " pod="openshift-ingress/router-default-5444994796-ts8wv" Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.775161 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/dd9b14b8-6d7c-4eeb-9748-a2e99daa4293-default-certificate\") pod \"router-default-5444994796-ts8wv\" (UID: \"dd9b14b8-6d7c-4eeb-9748-a2e99daa4293\") " pod="openshift-ingress/router-default-5444994796-ts8wv" Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.776058 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd9b14b8-6d7c-4eeb-9748-a2e99daa4293-service-ca-bundle\") pod \"router-default-5444994796-ts8wv\" (UID: \"dd9b14b8-6d7c-4eeb-9748-a2e99daa4293\") " pod="openshift-ingress/router-default-5444994796-ts8wv" Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.778227 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e0d76e44-8053-4afa-b974-bb3f945c9c23-metrics-tls\") pod \"dns-operator-744455d44c-vhv9h\" (UID: \"e0d76e44-8053-4afa-b974-bb3f945c9c23\") " pod="openshift-dns-operator/dns-operator-744455d44c-vhv9h" Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.781595 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/96d9c114-e67a-43a5-b081-4a1de76cc870-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-b4kk7\" (UID: \"96d9c114-e67a-43a5-b081-4a1de76cc870\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-b4kk7" Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.782892 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dd9b14b8-6d7c-4eeb-9748-a2e99daa4293-metrics-certs\") pod \"router-default-5444994796-ts8wv\" (UID: \"dd9b14b8-6d7c-4eeb-9748-a2e99daa4293\") " pod="openshift-ingress/router-default-5444994796-ts8wv" Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.783164 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/dd9b14b8-6d7c-4eeb-9748-a2e99daa4293-default-certificate\") pod \"router-default-5444994796-ts8wv\" (UID: \"dd9b14b8-6d7c-4eeb-9748-a2e99daa4293\") " pod="openshift-ingress/router-default-5444994796-ts8wv" Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.789418 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.810080 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.829567 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.864085 4755 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-dknwm\" (UniqueName: \"kubernetes.io/projected/d003362b-b35a-4f18-b387-62ec1490321a-kube-api-access-dknwm\") pod \"authentication-operator-69f744f599-68s5b\" (UID: \"d003362b-b35a-4f18-b387-62ec1490321a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-68s5b" Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.875992 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94689167-954d-4350-a1b8-e6125e32bd1f-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-g5hs8\" (UID: \"94689167-954d-4350-a1b8-e6125e32bd1f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g5hs8" Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.876085 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/30281ea2-4ba0-44c5-981d-8961073236a8-metrics-tls\") pod \"dns-default-wl9hp\" (UID: \"30281ea2-4ba0-44c5-981d-8961073236a8\") " pod="openshift-dns/dns-default-wl9hp" Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.876183 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/30281ea2-4ba0-44c5-981d-8961073236a8-config-volume\") pod \"dns-default-wl9hp\" (UID: \"30281ea2-4ba0-44c5-981d-8961073236a8\") " pod="openshift-dns/dns-default-wl9hp" Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.876475 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94689167-954d-4350-a1b8-e6125e32bd1f-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-g5hs8\" (UID: \"94689167-954d-4350-a1b8-e6125e32bd1f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g5hs8" Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.878643 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94689167-954d-4350-a1b8-e6125e32bd1f-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-g5hs8\" (UID: \"94689167-954d-4350-a1b8-e6125e32bd1f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g5hs8" Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.879719 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94689167-954d-4350-a1b8-e6125e32bd1f-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-g5hs8\" (UID: \"94689167-954d-4350-a1b8-e6125e32bd1f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g5hs8" Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.884697 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n56zj\" (UniqueName: \"kubernetes.io/projected/d40dd21f-096a-4dca-a313-566508e33dd3-kube-api-access-n56zj\") pod \"kube-storage-version-migrator-operator-b67b599dd-jkj94\" (UID: \"d40dd21f-096a-4dca-a313-566508e33dd3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jkj94" Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.910012 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gq8s\" (UniqueName: 
\"kubernetes.io/projected/ed5ac3ea-71b9-41f5-b629-ec016b6ef3c7-kube-api-access-6gq8s\") pod \"openshift-apiserver-operator-796bbdcf4f-k4nkz\" (UID: \"ed5ac3ea-71b9-41f5-b629-ec016b6ef3c7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-k4nkz" Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.912635 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-68s5b" Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.924110 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tj5fk\" (UniqueName: \"kubernetes.io/projected/c6d1d322-1622-41d5-afb0-c441b346b8bf-kube-api-access-tj5fk\") pod \"downloads-7954f5f757-gbvgh\" (UID: \"c6d1d322-1622-41d5-afb0-c441b346b8bf\") " pod="openshift-console/downloads-7954f5f757-gbvgh" Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.942507 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48gjk\" (UniqueName: \"kubernetes.io/projected/dd11fb82-1556-4769-a1cc-11589b905b3f-kube-api-access-48gjk\") pod \"openshift-config-operator-7777fb866f-26mg6\" (UID: \"dd11fb82-1556-4769-a1cc-11589b905b3f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-26mg6" Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.959593 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-26mg6" Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.963127 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcbtb\" (UniqueName: \"kubernetes.io/projected/e846eda6-431b-4fdf-98dc-80e3fc6b122f-kube-api-access-mcbtb\") pod \"apiserver-76f77b778f-xx85g\" (UID: \"e846eda6-431b-4fdf-98dc-80e3fc6b122f\") " pod="openshift-apiserver/apiserver-76f77b778f-xx85g" Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.969144 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-k4nkz" Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.984438 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwp84\" (UniqueName: \"kubernetes.io/projected/4e98c44e-5a60-49a0-9186-2367509dda97-kube-api-access-rwp84\") pod \"controller-manager-879f6c89f-m9ntd\" (UID: \"4e98c44e-5a60-49a0-9186-2367509dda97\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m9ntd" Dec 10 15:25:40 crc kubenswrapper[4755]: I1210 15:25:40.994478 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-m9ntd" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.003966 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ee5f60d7-e2f5-4900-b238-e4ef9acf1de4-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-8vttb\" (UID: \"ee5f60d7-e2f5-4900-b238-e4ef9acf1de4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8vttb" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.004777 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-gbvgh" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.013250 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-kc8qq" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.022859 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c4a5cf0a-76df-4855-a52d-22dbc07e8f7a-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-qgjc4\" (UID: \"c4a5cf0a-76df-4855-a52d-22dbc07e8f7a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qgjc4" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.027002 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jkj94" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.034815 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2ldzj" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.051303 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qgjc4" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.052787 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9xch\" (UniqueName: \"kubernetes.io/projected/24e3bc3c-7e93-4c91-b0a2-85877004fafc-kube-api-access-l9xch\") pod \"console-f9d7485db-n6qb5\" (UID: \"24e3bc3c-7e93-4c91-b0a2-85877004fafc\") " pod="openshift-console/console-f9d7485db-n6qb5" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.072435 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8h5xr\" (UniqueName: \"kubernetes.io/projected/8c1052c9-fa99-4f24-8fff-923dc489c08d-kube-api-access-8h5xr\") pod \"route-controller-manager-6576b87f9c-4bgvx\" (UID: \"8c1052c9-fa99-4f24-8fff-923dc489c08d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4bgvx" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.087667 4755 request.go:700] Waited for 1.937210621s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/serviceaccounts/machine-api-operator/token Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.107236 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tvcd\" (UniqueName: \"kubernetes.io/projected/ee5f60d7-e2f5-4900-b238-e4ef9acf1de4-kube-api-access-4tvcd\") pod \"cluster-image-registry-operator-dc59b4c8b-8vttb\" (UID: \"ee5f60d7-e2f5-4900-b238-e4ef9acf1de4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8vttb" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.111136 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.116668 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9w25\" (UniqueName: \"kubernetes.io/projected/228e9f52-aead-4cf5-af32-8b0b3aec8cf4-kube-api-access-l9w25\") pod \"machine-api-operator-5694c8668f-n66x6\" (UID: \"228e9f52-aead-4cf5-af32-8b0b3aec8cf4\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-n66x6" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.117786 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/configmap/30281ea2-4ba0-44c5-981d-8961073236a8-config-volume\") pod \"dns-default-wl9hp\" (UID: \"30281ea2-4ba0-44c5-981d-8961073236a8\") " pod="openshift-dns/dns-default-wl9hp" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.130365 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.149333 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.166043 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/30281ea2-4ba0-44c5-981d-8961073236a8-metrics-tls\") pod \"dns-default-wl9hp\" (UID: \"30281ea2-4ba0-44c5-981d-8961073236a8\") " pod="openshift-dns/dns-default-wl9hp" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.171856 4755 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.186322 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-n66x6" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.188966 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.208564 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.234777 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-n6qb5" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.236781 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.251849 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-xx85g" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.251979 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.270273 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.288191 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4bgvx" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.291435 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.320029 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8vttb" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.322897 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-m9ntd"] Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.386744 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e45db753-a360-4857-9bd4-11f898ede4dc-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-4fdzm\" (UID: \"e45db753-a360-4857-9bd4-11f898ede4dc\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4fdzm" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.397435 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-gbvgh"] Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.403623 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/27f3b602-6196-4cbc-bf90-e695403d20c7-bound-sa-token\") pod \"ingress-operator-5b745b69d9-j46lk\" (UID: \"27f3b602-6196-4cbc-bf90-e695403d20c7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-j46lk" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.405086 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4fdzm" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.405253 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9gpg\" (UniqueName: \"kubernetes.io/projected/d91d7b48-9096-4f3e-8260-2d762173eb80-kube-api-access-f9gpg\") pod \"control-plane-machine-set-operator-78cbb6b69f-sdrvv\" (UID: \"d91d7b48-9096-4f3e-8260-2d762173eb80\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sdrvv" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.420755 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sdrvv" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.432985 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94gk7\" (UniqueName: \"kubernetes.io/projected/27f3b602-6196-4cbc-bf90-e695403d20c7-kube-api-access-94gk7\") pod \"ingress-operator-5b745b69d9-j46lk\" (UID: \"27f3b602-6196-4cbc-bf90-e695403d20c7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-j46lk" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.443129 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2b2r7\" (UniqueName: \"kubernetes.io/projected/011a7c6a-048f-4db1-ac1f-259b44dd28bc-kube-api-access-2b2r7\") pod \"service-ca-9c57cc56f-skm6x\" (UID: \"011a7c6a-048f-4db1-ac1f-259b44dd28bc\") " pod="openshift-service-ca/service-ca-9c57cc56f-skm6x" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.461260 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-k4nkz"] Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.465254 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-68s5b"] Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.469960 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcr9j\" (UniqueName: \"kubernetes.io/projected/22368f6c-56f1-4ef1-bada-fcad04f7b8a4-kube-api-access-fcr9j\") pod \"packageserver-d55dfcdfc-vnzg7\" (UID: \"22368f6c-56f1-4ef1-bada-fcad04f7b8a4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vnzg7" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.481597 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-26mg6"] Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.492601 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xb9ll\" (UniqueName: \"kubernetes.io/projected/8d212083-766e-4b88-ba08-8570f05f6c94-kube-api-access-xb9ll\") pod \"collect-profiles-29422995-knlb5\" (UID: \"8d212083-766e-4b88-ba08-8570f05f6c94\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422995-knlb5" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.496650 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gvpnh" event={"ID":"9336d238-f0fb-430a-acfe-4aa5c888ebc8","Type":"ContainerStarted","Data":"d3489ed1cf11c68bdba2307c80c59fb7907bf69d7544b00d4bd2f7f6b71015b2"} Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.496689 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gvpnh" event={"ID":"9336d238-f0fb-430a-acfe-4aa5c888ebc8","Type":"ContainerStarted","Data":"73a9aa8ab207abd6ee8ecf04de8f2bed84d50280180efca23db2a5eb9188d5f7"} Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.496700 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gvpnh" event={"ID":"9336d238-f0fb-430a-acfe-4aa5c888ebc8","Type":"ContainerStarted","Data":"3f74d6f189b07c0e07d90578b0121dce247b9a6874c18b8b77331b9534fc6e37"} Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.498353 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-879f6c89f-m9ntd" event={"ID":"4e98c44e-5a60-49a0-9186-2367509dda97","Type":"ContainerStarted","Data":"98819c52c8c96639765927c04af504bf678b1db167d0df6f1f819b6216e3652f"} Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.499539 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-f47gb" event={"ID":"1645de9b-f227-4d9d-885f-ffd58e5bef69","Type":"ContainerStarted","Data":"2f21b859d93a04a953b6fe2a81322f9db43ef41d669bfe80eb66c37c1b8f8559"} Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.499570 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-f47gb" event={"ID":"1645de9b-f227-4d9d-885f-ffd58e5bef69","Type":"ContainerStarted","Data":"d931452a3974fc8e3333733937d9d22330cb6644bfc53e5f825a9c54f964a6aa"} Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.500054 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-f47gb" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.501346 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zcwtl" event={"ID":"e2f7713d-7734-477f-81df-093aaa83837f","Type":"ContainerStarted","Data":"b435ababa54174cefc08788bd8336a975079c0fcf1e73b0a9a45dd3b6bc3b866"} Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.501396 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zcwtl" event={"ID":"e2f7713d-7734-477f-81df-093aaa83837f","Type":"ContainerStarted","Data":"bc452d8bc9d70ce0756944ea041be57f7af013a001bc728f24fe5f2810878154"} Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.511347 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-gbvgh" event={"ID":"c6d1d322-1622-41d5-afb0-c441b346b8bf","Type":"ContainerStarted","Data":"61b6011f4c72ccbc68e6236484da430dc75ae47e44aa57b6664496316b3c631f"} Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.514982 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bh2t\" (UniqueName: \"kubernetes.io/projected/ee6ce5dd-6c0d-4acd-b795-6cb930770bec-kube-api-access-2bh2t\") pod \"service-ca-operator-777779d784-6hpgg\" (UID: \"ee6ce5dd-6c0d-4acd-b795-6cb930770bec\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6hpgg" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.522771 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-f47gb" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.528986 4755 generic.go:334] "Generic (PLEG): container finished" podID="7d2e3e9e-69f8-46f4-b825-ee369ab23de8" containerID="5a73505713fec45d317ef6f17eea2bcae80c73bebb6bd1fa81a3bacc6b770bce" exitCode=0 Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.529331 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6khhb" event={"ID":"7d2e3e9e-69f8-46f4-b825-ee369ab23de8","Type":"ContainerDied","Data":"5a73505713fec45d317ef6f17eea2bcae80c73bebb6bd1fa81a3bacc6b770bce"} Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.529370 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6khhb" 
event={"ID":"7d2e3e9e-69f8-46f4-b825-ee369ab23de8","Type":"ContainerStarted","Data":"4303bb201be93852d4f0980f9dc07656d2dc8438201487c673fbcb5ea21fa383"} Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.531905 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5h9k\" (UniqueName: \"kubernetes.io/projected/96d9c114-e67a-43a5-b081-4a1de76cc870-kube-api-access-f5h9k\") pod \"multus-admission-controller-857f4d67dd-b4kk7\" (UID: \"96d9c114-e67a-43a5-b081-4a1de76cc870\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-b4kk7" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.536773 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jkj94"] Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.541754 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-sxtxj" event={"ID":"72d8a3d3-6b10-4a2d-bb42-b96a0d6135a5","Type":"ContainerStarted","Data":"23a0819c7150f2987ac242f1b9c25e7117ea096ece310cf0525384339c54e409"} Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.541799 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-sxtxj" event={"ID":"72d8a3d3-6b10-4a2d-bb42-b96a0d6135a5","Type":"ContainerStarted","Data":"b3c390a7e3704ec2bb4eaafbef379e27cb158361ef059e9640531b6bd4380e8b"} Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.542437 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-sxtxj" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.557084 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6d5f332b-9d6b-40c2-8e63-47aa309ea740-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-7svz5\" (UID: \"6d5f332b-9d6b-40c2-8e63-47aa309ea740\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7svz5" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.585099 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcvkn\" (UniqueName: \"kubernetes.io/projected/3fd6e72e-c555-446b-ad32-bf71e8c1be54-kube-api-access-vcvkn\") pod \"package-server-manager-789f6589d5-dhh5x\" (UID: \"3fd6e72e-c555-446b-ad32-bf71e8c1be54\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dhh5x" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.586356 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbmrd\" (UniqueName: \"kubernetes.io/projected/e0d76e44-8053-4afa-b974-bb3f945c9c23-kube-api-access-pbmrd\") pod \"dns-operator-744455d44c-vhv9h\" (UID: \"e0d76e44-8053-4afa-b974-bb3f945c9c23\") " pod="openshift-dns-operator/dns-operator-744455d44c-vhv9h" Dec 10 15:25:41 crc kubenswrapper[4755]: W1210 15:25:41.589256 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd11fb82_1556_4769_a1cc_11589b905b3f.slice/crio-27133b0920efbd8c3e679b13632ffeb2b91ad10fd86d1c79904943630eb81c5d WatchSource:0}: Error finding container 27133b0920efbd8c3e679b13632ffeb2b91ad10fd86d1c79904943630eb81c5d: Status 404 returned error can't find the container with id 27133b0920efbd8c3e679b13632ffeb2b91ad10fd86d1c79904943630eb81c5d Dec 10 15:25:41 crc kubenswrapper[4755]: 
I1210 15:25:41.589371 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-2ldzj"] Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.618281 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqmw2\" (UniqueName: \"kubernetes.io/projected/64e1b92e-9035-4439-abdc-86205e68c591-kube-api-access-zqmw2\") pod \"marketplace-operator-79b997595-5bvrp\" (UID: \"64e1b92e-9035-4439-abdc-86205e68c591\") " pod="openshift-marketplace/marketplace-operator-79b997595-5bvrp" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.620144 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-n66x6"] Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.622956 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rw6tc\" (UniqueName: \"kubernetes.io/projected/dd9b14b8-6d7c-4eeb-9748-a2e99daa4293-kube-api-access-rw6tc\") pod \"router-default-5444994796-ts8wv\" (UID: \"dd9b14b8-6d7c-4eeb-9748-a2e99daa4293\") " pod="openshift-ingress/router-default-5444994796-ts8wv" Dec 10 15:25:41 crc kubenswrapper[4755]: W1210 15:25:41.627414 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51e9f48f_0b01_4f2b_8d00_77fd91f6c3b0.slice/crio-af6be7242b9d0d6a3190e6167f6a836dee7f9b052dda51fe110ea3798718f7be WatchSource:0}: Error finding container af6be7242b9d0d6a3190e6167f6a836dee7f9b052dda51fe110ea3798718f7be: Status 404 returned error can't find the container with id af6be7242b9d0d6a3190e6167f6a836dee7f9b052dda51fe110ea3798718f7be Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.648450 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dhh5x" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.650786 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-sxtxj" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.660092 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7svz5" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.661120 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78gsp\" (UniqueName: \"kubernetes.io/projected/460a4f34-c415-4a61-8877-7ad9d851c0e5-kube-api-access-78gsp\") pod \"migrator-59844c95c7-sw7vc\" (UID: \"460a4f34-c415-4a61-8877-7ad9d851c0e5\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-sw7vc" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.667895 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-sw7vc" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.673637 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-skm6x" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.681686 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9z9hd\" (UniqueName: \"kubernetes.io/projected/0470027c-6ca8-4404-a366-997bf288e1d0-kube-api-access-9z9hd\") pod \"csi-hostpathplugin-rbjcx\" (UID: \"0470027c-6ca8-4404-a366-997bf288e1d0\") " pod="hostpath-provisioner/csi-hostpathplugin-rbjcx" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.682319 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-6hpgg" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.683682 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdqp2\" (UniqueName: \"kubernetes.io/projected/94689167-954d-4350-a1b8-e6125e32bd1f-kube-api-access-hdqp2\") pod \"openshift-controller-manager-operator-756b6f6bc6-g5hs8\" (UID: \"94689167-954d-4350-a1b8-e6125e32bd1f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g5hs8" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.693307 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-5bvrp" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.696781 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vnzg7" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.719076 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vp8j7\" (UniqueName: \"kubernetes.io/projected/30281ea2-4ba0-44c5-981d-8961073236a8-kube-api-access-vp8j7\") pod \"dns-default-wl9hp\" (UID: \"30281ea2-4ba0-44c5-981d-8961073236a8\") " pod="openshift-dns/dns-default-wl9hp" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.723368 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-j46lk" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.727621 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29422995-knlb5" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.737336 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-ts8wv" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.745666 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-vhv9h" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.752959 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g5hs8" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.762082 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-b4kk7" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.785895 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qgjc4"] Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.791942 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-kc8qq"] Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.799872 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4fdzm"] Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.800033 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-wl9hp" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.801580 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1189a5c2-6e43-4e4b-8181-d2bd78031673-registry-tls\") pod \"image-registry-697d97f7c8-mqq47\" (UID: \"1189a5c2-6e43-4e4b-8181-d2bd78031673\") " pod="openshift-image-registry/image-registry-697d97f7c8-mqq47" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.801631 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfbtx\" (UniqueName: \"kubernetes.io/projected/441cfc68-29a3-4b18-9b34-cf3bc58107cc-kube-api-access-qfbtx\") pod \"ingress-canary-pw2tq\" (UID: \"441cfc68-29a3-4b18-9b34-cf3bc58107cc\") " pod="openshift-ingress-canary/ingress-canary-pw2tq" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.801656 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b25z2\" (UniqueName: \"kubernetes.io/projected/ca0adcad-c9ae-4aaa-ada2-f9e0baa478a7-kube-api-access-b25z2\") pod \"machine-config-server-455tj\" (UID: \"ca0adcad-c9ae-4aaa-ada2-f9e0baa478a7\") " pod="openshift-machine-config-operator/machine-config-server-455tj" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.801677 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvdm6\" (UniqueName: \"kubernetes.io/projected/6f034c10-25fc-403c-9a49-58cb5a182222-kube-api-access-jvdm6\") pod \"catalog-operator-68c6474976-7r8q8\" (UID: \"6f034c10-25fc-403c-9a49-58cb5a182222\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7r8q8" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.801721 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1189a5c2-6e43-4e4b-8181-d2bd78031673-bound-sa-token\") pod \"image-registry-697d97f7c8-mqq47\" (UID: \"1189a5c2-6e43-4e4b-8181-d2bd78031673\") " pod="openshift-image-registry/image-registry-697d97f7c8-mqq47" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.801743 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5ac506be-249f-4065-84b6-26fb51e47790-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-plc7w\" (UID: \"5ac506be-249f-4065-84b6-26fb51e47790\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-plc7w" Dec 10 15:25:41 crc 
kubenswrapper[4755]: I1210 15:25:41.801766 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5ac506be-249f-4065-84b6-26fb51e47790-proxy-tls\") pod \"machine-config-controller-84d6567774-plc7w\" (UID: \"5ac506be-249f-4065-84b6-26fb51e47790\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-plc7w" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.801869 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ggbv\" (UniqueName: \"kubernetes.io/projected/1189a5c2-6e43-4e4b-8181-d2bd78031673-kube-api-access-4ggbv\") pod \"image-registry-697d97f7c8-mqq47\" (UID: \"1189a5c2-6e43-4e4b-8181-d2bd78031673\") " pod="openshift-image-registry/image-registry-697d97f7c8-mqq47" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.801895 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f516b039-30bd-47f7-b6fa-48966012efe2-srv-cert\") pod \"olm-operator-6b444d44fb-cvrkm\" (UID: \"f516b039-30bd-47f7-b6fa-48966012efe2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cvrkm" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.801916 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/441cfc68-29a3-4b18-9b34-cf3bc58107cc-cert\") pod \"ingress-canary-pw2tq\" (UID: \"441cfc68-29a3-4b18-9b34-cf3bc58107cc\") " pod="openshift-ingress-canary/ingress-canary-pw2tq" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.801994 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpkrm\" (UniqueName: \"kubernetes.io/projected/5ac506be-249f-4065-84b6-26fb51e47790-kube-api-access-kpkrm\") pod \"machine-config-controller-84d6567774-plc7w\" (UID: \"5ac506be-249f-4065-84b6-26fb51e47790\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-plc7w" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.802018 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ca0adcad-c9ae-4aaa-ada2-f9e0baa478a7-certs\") pod \"machine-config-server-455tj\" (UID: \"ca0adcad-c9ae-4aaa-ada2-f9e0baa478a7\") " pod="openshift-machine-config-operator/machine-config-server-455tj" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.802056 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1189a5c2-6e43-4e4b-8181-d2bd78031673-installation-pull-secrets\") pod \"image-registry-697d97f7c8-mqq47\" (UID: \"1189a5c2-6e43-4e4b-8181-d2bd78031673\") " pod="openshift-image-registry/image-registry-697d97f7c8-mqq47" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.802123 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6f034c10-25fc-403c-9a49-58cb5a182222-profile-collector-cert\") pod \"catalog-operator-68c6474976-7r8q8\" (UID: \"6f034c10-25fc-403c-9a49-58cb5a182222\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7r8q8" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.802145 4755 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f516b039-30bd-47f7-b6fa-48966012efe2-profile-collector-cert\") pod \"olm-operator-6b444d44fb-cvrkm\" (UID: \"f516b039-30bd-47f7-b6fa-48966012efe2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cvrkm" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.802176 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6f034c10-25fc-403c-9a49-58cb5a182222-srv-cert\") pod \"catalog-operator-68c6474976-7r8q8\" (UID: \"6f034c10-25fc-403c-9a49-58cb5a182222\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7r8q8" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.802228 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1189a5c2-6e43-4e4b-8181-d2bd78031673-registry-certificates\") pod \"image-registry-697d97f7c8-mqq47\" (UID: \"1189a5c2-6e43-4e4b-8181-d2bd78031673\") " pod="openshift-image-registry/image-registry-697d97f7c8-mqq47" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.802251 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fb2lg\" (UniqueName: \"kubernetes.io/projected/f516b039-30bd-47f7-b6fa-48966012efe2-kube-api-access-fb2lg\") pod \"olm-operator-6b444d44fb-cvrkm\" (UID: \"f516b039-30bd-47f7-b6fa-48966012efe2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cvrkm" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.802271 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1189a5c2-6e43-4e4b-8181-d2bd78031673-trusted-ca\") pod \"image-registry-697d97f7c8-mqq47\" (UID: \"1189a5c2-6e43-4e4b-8181-d2bd78031673\") " pod="openshift-image-registry/image-registry-697d97f7c8-mqq47" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.802292 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ca0adcad-c9ae-4aaa-ada2-f9e0baa478a7-node-bootstrap-token\") pod \"machine-config-server-455tj\" (UID: \"ca0adcad-c9ae-4aaa-ada2-f9e0baa478a7\") " pod="openshift-machine-config-operator/machine-config-server-455tj" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.802323 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1189a5c2-6e43-4e4b-8181-d2bd78031673-ca-trust-extracted\") pod \"image-registry-697d97f7c8-mqq47\" (UID: \"1189a5c2-6e43-4e4b-8181-d2bd78031673\") " pod="openshift-image-registry/image-registry-697d97f7c8-mqq47" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.802393 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mqq47\" (UID: \"1189a5c2-6e43-4e4b-8181-d2bd78031673\") " pod="openshift-image-registry/image-registry-697d97f7c8-mqq47" Dec 10 15:25:41 crc kubenswrapper[4755]: E1210 15:25:41.804146 4755 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:25:42.304127848 +0000 UTC m=+138.905011560 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mqq47" (UID: "1189a5c2-6e43-4e4b-8181-d2bd78031673") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.822507 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-rbjcx" Dec 10 15:25:41 crc kubenswrapper[4755]: W1210 15:25:41.853656 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode45db753_a360_4857_9bd4_11f898ede4dc.slice/crio-93d81403b186695e829972b94cbbf8015b50e563a74c7730bc269293c05f4cb9 WatchSource:0}: Error finding container 93d81403b186695e829972b94cbbf8015b50e563a74c7730bc269293c05f4cb9: Status 404 returned error can't find the container with id 93d81403b186695e829972b94cbbf8015b50e563a74c7730bc269293c05f4cb9 Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.903308 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 15:25:41 crc kubenswrapper[4755]: E1210 15:25:41.903581 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 15:25:42.403538347 +0000 UTC m=+139.004421979 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.904893 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1189a5c2-6e43-4e4b-8181-d2bd78031673-bound-sa-token\") pod \"image-registry-697d97f7c8-mqq47\" (UID: \"1189a5c2-6e43-4e4b-8181-d2bd78031673\") " pod="openshift-image-registry/image-registry-697d97f7c8-mqq47" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.904928 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5ac506be-249f-4065-84b6-26fb51e47790-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-plc7w\" (UID: \"5ac506be-249f-4065-84b6-26fb51e47790\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-plc7w" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.904956 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5ac506be-249f-4065-84b6-26fb51e47790-proxy-tls\") pod \"machine-config-controller-84d6567774-plc7w\" (UID: \"5ac506be-249f-4065-84b6-26fb51e47790\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-plc7w" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.905072 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ggbv\" (UniqueName: \"kubernetes.io/projected/1189a5c2-6e43-4e4b-8181-d2bd78031673-kube-api-access-4ggbv\") pod \"image-registry-697d97f7c8-mqq47\" (UID: \"1189a5c2-6e43-4e4b-8181-d2bd78031673\") " pod="openshift-image-registry/image-registry-697d97f7c8-mqq47" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.905112 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f516b039-30bd-47f7-b6fa-48966012efe2-srv-cert\") pod \"olm-operator-6b444d44fb-cvrkm\" (UID: \"f516b039-30bd-47f7-b6fa-48966012efe2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cvrkm" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.905185 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/441cfc68-29a3-4b18-9b34-cf3bc58107cc-cert\") pod \"ingress-canary-pw2tq\" (UID: \"441cfc68-29a3-4b18-9b34-cf3bc58107cc\") " pod="openshift-ingress-canary/ingress-canary-pw2tq" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.905399 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpkrm\" (UniqueName: \"kubernetes.io/projected/5ac506be-249f-4065-84b6-26fb51e47790-kube-api-access-kpkrm\") pod \"machine-config-controller-84d6567774-plc7w\" (UID: \"5ac506be-249f-4065-84b6-26fb51e47790\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-plc7w" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.905486 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" 
(UniqueName: \"kubernetes.io/secret/ca0adcad-c9ae-4aaa-ada2-f9e0baa478a7-certs\") pod \"machine-config-server-455tj\" (UID: \"ca0adcad-c9ae-4aaa-ada2-f9e0baa478a7\") " pod="openshift-machine-config-operator/machine-config-server-455tj" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.905667 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1189a5c2-6e43-4e4b-8181-d2bd78031673-installation-pull-secrets\") pod \"image-registry-697d97f7c8-mqq47\" (UID: \"1189a5c2-6e43-4e4b-8181-d2bd78031673\") " pod="openshift-image-registry/image-registry-697d97f7c8-mqq47" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.905777 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5ac506be-249f-4065-84b6-26fb51e47790-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-plc7w\" (UID: \"5ac506be-249f-4065-84b6-26fb51e47790\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-plc7w" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.905880 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6f034c10-25fc-403c-9a49-58cb5a182222-profile-collector-cert\") pod \"catalog-operator-68c6474976-7r8q8\" (UID: \"6f034c10-25fc-403c-9a49-58cb5a182222\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7r8q8" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.905894 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-xx85g"] Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.905908 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f516b039-30bd-47f7-b6fa-48966012efe2-profile-collector-cert\") pod \"olm-operator-6b444d44fb-cvrkm\" (UID: \"f516b039-30bd-47f7-b6fa-48966012efe2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cvrkm" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.906008 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6f034c10-25fc-403c-9a49-58cb5a182222-srv-cert\") pod \"catalog-operator-68c6474976-7r8q8\" (UID: \"6f034c10-25fc-403c-9a49-58cb5a182222\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7r8q8" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.906110 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1189a5c2-6e43-4e4b-8181-d2bd78031673-registry-certificates\") pod \"image-registry-697d97f7c8-mqq47\" (UID: \"1189a5c2-6e43-4e4b-8181-d2bd78031673\") " pod="openshift-image-registry/image-registry-697d97f7c8-mqq47" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.906140 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fb2lg\" (UniqueName: \"kubernetes.io/projected/f516b039-30bd-47f7-b6fa-48966012efe2-kube-api-access-fb2lg\") pod \"olm-operator-6b444d44fb-cvrkm\" (UID: \"f516b039-30bd-47f7-b6fa-48966012efe2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cvrkm" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.906197 4755 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1189a5c2-6e43-4e4b-8181-d2bd78031673-trusted-ca\") pod \"image-registry-697d97f7c8-mqq47\" (UID: \"1189a5c2-6e43-4e4b-8181-d2bd78031673\") " pod="openshift-image-registry/image-registry-697d97f7c8-mqq47" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.906222 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ca0adcad-c9ae-4aaa-ada2-f9e0baa478a7-node-bootstrap-token\") pod \"machine-config-server-455tj\" (UID: \"ca0adcad-c9ae-4aaa-ada2-f9e0baa478a7\") " pod="openshift-machine-config-operator/machine-config-server-455tj" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.906274 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1189a5c2-6e43-4e4b-8181-d2bd78031673-ca-trust-extracted\") pod \"image-registry-697d97f7c8-mqq47\" (UID: \"1189a5c2-6e43-4e4b-8181-d2bd78031673\") " pod="openshift-image-registry/image-registry-697d97f7c8-mqq47" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.906395 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mqq47\" (UID: \"1189a5c2-6e43-4e4b-8181-d2bd78031673\") " pod="openshift-image-registry/image-registry-697d97f7c8-mqq47" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.906534 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1189a5c2-6e43-4e4b-8181-d2bd78031673-registry-tls\") pod \"image-registry-697d97f7c8-mqq47\" (UID: \"1189a5c2-6e43-4e4b-8181-d2bd78031673\") " pod="openshift-image-registry/image-registry-697d97f7c8-mqq47" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.906559 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfbtx\" (UniqueName: \"kubernetes.io/projected/441cfc68-29a3-4b18-9b34-cf3bc58107cc-kube-api-access-qfbtx\") pod \"ingress-canary-pw2tq\" (UID: \"441cfc68-29a3-4b18-9b34-cf3bc58107cc\") " pod="openshift-ingress-canary/ingress-canary-pw2tq" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.906601 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b25z2\" (UniqueName: \"kubernetes.io/projected/ca0adcad-c9ae-4aaa-ada2-f9e0baa478a7-kube-api-access-b25z2\") pod \"machine-config-server-455tj\" (UID: \"ca0adcad-c9ae-4aaa-ada2-f9e0baa478a7\") " pod="openshift-machine-config-operator/machine-config-server-455tj" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.906626 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvdm6\" (UniqueName: \"kubernetes.io/projected/6f034c10-25fc-403c-9a49-58cb5a182222-kube-api-access-jvdm6\") pod \"catalog-operator-68c6474976-7r8q8\" (UID: \"6f034c10-25fc-403c-9a49-58cb5a182222\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7r8q8" Dec 10 15:25:41 crc kubenswrapper[4755]: E1210 15:25:41.910898 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-10 15:25:42.410877067 +0000 UTC m=+139.011760749 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mqq47" (UID: "1189a5c2-6e43-4e4b-8181-d2bd78031673") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.911065 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1189a5c2-6e43-4e4b-8181-d2bd78031673-registry-certificates\") pod \"image-registry-697d97f7c8-mqq47\" (UID: \"1189a5c2-6e43-4e4b-8181-d2bd78031673\") " pod="openshift-image-registry/image-registry-697d97f7c8-mqq47" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.911276 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1189a5c2-6e43-4e4b-8181-d2bd78031673-ca-trust-extracted\") pod \"image-registry-697d97f7c8-mqq47\" (UID: \"1189a5c2-6e43-4e4b-8181-d2bd78031673\") " pod="openshift-image-registry/image-registry-697d97f7c8-mqq47" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.914531 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f516b039-30bd-47f7-b6fa-48966012efe2-profile-collector-cert\") pod \"olm-operator-6b444d44fb-cvrkm\" (UID: \"f516b039-30bd-47f7-b6fa-48966012efe2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cvrkm" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.914992 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f516b039-30bd-47f7-b6fa-48966012efe2-srv-cert\") pod \"olm-operator-6b444d44fb-cvrkm\" (UID: \"f516b039-30bd-47f7-b6fa-48966012efe2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cvrkm" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.916677 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5ac506be-249f-4065-84b6-26fb51e47790-proxy-tls\") pod \"machine-config-controller-84d6567774-plc7w\" (UID: \"5ac506be-249f-4065-84b6-26fb51e47790\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-plc7w" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.919996 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6f034c10-25fc-403c-9a49-58cb5a182222-srv-cert\") pod \"catalog-operator-68c6474976-7r8q8\" (UID: \"6f034c10-25fc-403c-9a49-58cb5a182222\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7r8q8" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.920590 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6f034c10-25fc-403c-9a49-58cb5a182222-profile-collector-cert\") pod \"catalog-operator-68c6474976-7r8q8\" (UID: \"6f034c10-25fc-403c-9a49-58cb5a182222\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7r8q8" Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.924894 4755 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-n6qb5"] Dec 10 15:25:41 crc kubenswrapper[4755]: I1210 15:25:41.995672 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfbtx\" (UniqueName: \"kubernetes.io/projected/441cfc68-29a3-4b18-9b34-cf3bc58107cc-kube-api-access-qfbtx\") pod \"ingress-canary-pw2tq\" (UID: \"441cfc68-29a3-4b18-9b34-cf3bc58107cc\") " pod="openshift-ingress-canary/ingress-canary-pw2tq" Dec 10 15:25:42 crc kubenswrapper[4755]: I1210 15:25:42.012508 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 15:25:42 crc kubenswrapper[4755]: E1210 15:25:42.012997 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 15:25:42.512982796 +0000 UTC m=+139.113866428 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:42 crc kubenswrapper[4755]: I1210 15:25:42.013092 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4bgvx"] Dec 10 15:25:42 crc kubenswrapper[4755]: I1210 15:25:42.031904 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvdm6\" (UniqueName: \"kubernetes.io/projected/6f034c10-25fc-403c-9a49-58cb5a182222-kube-api-access-jvdm6\") pod \"catalog-operator-68c6474976-7r8q8\" (UID: \"6f034c10-25fc-403c-9a49-58cb5a182222\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7r8q8" Dec 10 15:25:42 crc kubenswrapper[4755]: I1210 15:25:42.066851 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fb2lg\" (UniqueName: \"kubernetes.io/projected/f516b039-30bd-47f7-b6fa-48966012efe2-kube-api-access-fb2lg\") pod \"olm-operator-6b444d44fb-cvrkm\" (UID: \"f516b039-30bd-47f7-b6fa-48966012efe2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cvrkm" Dec 10 15:25:42 crc kubenswrapper[4755]: I1210 15:25:42.067958 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sdrvv"] Dec 10 15:25:42 crc kubenswrapper[4755]: I1210 15:25:42.071210 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7r8q8" Dec 10 15:25:42 crc kubenswrapper[4755]: I1210 15:25:42.076497 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpkrm\" (UniqueName: \"kubernetes.io/projected/5ac506be-249f-4065-84b6-26fb51e47790-kube-api-access-kpkrm\") pod \"machine-config-controller-84d6567774-plc7w\" (UID: \"5ac506be-249f-4065-84b6-26fb51e47790\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-plc7w" Dec 10 15:25:42 crc kubenswrapper[4755]: I1210 15:25:42.078120 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8vttb"] Dec 10 15:25:42 crc kubenswrapper[4755]: I1210 15:25:42.092974 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cvrkm" Dec 10 15:25:42 crc kubenswrapper[4755]: I1210 15:25:42.093936 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dhh5x"] Dec 10 15:25:42 crc kubenswrapper[4755]: I1210 15:25:42.115310 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mqq47\" (UID: \"1189a5c2-6e43-4e4b-8181-d2bd78031673\") " pod="openshift-image-registry/image-registry-697d97f7c8-mqq47" Dec 10 15:25:42 crc kubenswrapper[4755]: E1210 15:25:42.115753 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:25:42.615740261 +0000 UTC m=+139.216623893 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mqq47" (UID: "1189a5c2-6e43-4e4b-8181-d2bd78031673") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:42 crc kubenswrapper[4755]: I1210 15:25:42.116797 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7svz5"] Dec 10 15:25:42 crc kubenswrapper[4755]: I1210 15:25:42.138671 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vnzg7"] Dec 10 15:25:42 crc kubenswrapper[4755]: I1210 15:25:42.167984 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g5hs8"] Dec 10 15:25:42 crc kubenswrapper[4755]: I1210 15:25:42.216299 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 15:25:42 crc kubenswrapper[4755]: E1210 15:25:42.216488 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 15:25:42.716445154 +0000 UTC m=+139.317328786 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:42 crc kubenswrapper[4755]: I1210 15:25:42.216747 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mqq47\" (UID: \"1189a5c2-6e43-4e4b-8181-d2bd78031673\") " pod="openshift-image-registry/image-registry-697d97f7c8-mqq47" Dec 10 15:25:42 crc kubenswrapper[4755]: E1210 15:25:42.217117 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:25:42.717106791 +0000 UTC m=+139.317990433 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mqq47" (UID: "1189a5c2-6e43-4e4b-8181-d2bd78031673") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:42 crc kubenswrapper[4755]: I1210 15:25:42.318225 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 15:25:42 crc kubenswrapper[4755]: E1210 15:25:42.318724 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 15:25:42.818704857 +0000 UTC m=+139.419588499 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:42 crc kubenswrapper[4755]: I1210 15:25:42.364841 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-plc7w" Dec 10 15:25:42 crc kubenswrapper[4755]: I1210 15:25:42.402768 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/441cfc68-29a3-4b18-9b34-cf3bc58107cc-cert\") pod \"ingress-canary-pw2tq\" (UID: \"441cfc68-29a3-4b18-9b34-cf3bc58107cc\") " pod="openshift-ingress-canary/ingress-canary-pw2tq" Dec 10 15:25:42 crc kubenswrapper[4755]: I1210 15:25:42.403632 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1189a5c2-6e43-4e4b-8181-d2bd78031673-installation-pull-secrets\") pod \"image-registry-697d97f7c8-mqq47\" (UID: \"1189a5c2-6e43-4e4b-8181-d2bd78031673\") " pod="openshift-image-registry/image-registry-697d97f7c8-mqq47" Dec 10 15:25:42 crc kubenswrapper[4755]: I1210 15:25:42.404145 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1189a5c2-6e43-4e4b-8181-d2bd78031673-trusted-ca\") pod \"image-registry-697d97f7c8-mqq47\" (UID: \"1189a5c2-6e43-4e4b-8181-d2bd78031673\") " pod="openshift-image-registry/image-registry-697d97f7c8-mqq47" Dec 10 15:25:42 crc kubenswrapper[4755]: I1210 15:25:42.406780 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1189a5c2-6e43-4e4b-8181-d2bd78031673-registry-tls\") pod \"image-registry-697d97f7c8-mqq47\" (UID: \"1189a5c2-6e43-4e4b-8181-d2bd78031673\") " pod="openshift-image-registry/image-registry-697d97f7c8-mqq47" Dec 10 15:25:42 crc kubenswrapper[4755]: I1210 15:25:42.407250 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ggbv\" (UniqueName: \"kubernetes.io/projected/1189a5c2-6e43-4e4b-8181-d2bd78031673-kube-api-access-4ggbv\") pod \"image-registry-697d97f7c8-mqq47\" (UID: \"1189a5c2-6e43-4e4b-8181-d2bd78031673\") " pod="openshift-image-registry/image-registry-697d97f7c8-mqq47" Dec 10 15:25:42 crc kubenswrapper[4755]: I1210 15:25:42.408574 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ca0adcad-c9ae-4aaa-ada2-f9e0baa478a7-certs\") pod \"machine-config-server-455tj\" (UID: \"ca0adcad-c9ae-4aaa-ada2-f9e0baa478a7\") " pod="openshift-machine-config-operator/machine-config-server-455tj" Dec 10 15:25:42 crc kubenswrapper[4755]: I1210 15:25:42.409096 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1189a5c2-6e43-4e4b-8181-d2bd78031673-bound-sa-token\") pod \"image-registry-697d97f7c8-mqq47\" (UID: \"1189a5c2-6e43-4e4b-8181-d2bd78031673\") " pod="openshift-image-registry/image-registry-697d97f7c8-mqq47" Dec 10 15:25:42 crc kubenswrapper[4755]: I1210 15:25:42.411080 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b25z2\" (UniqueName: \"kubernetes.io/projected/ca0adcad-c9ae-4aaa-ada2-f9e0baa478a7-kube-api-access-b25z2\") pod \"machine-config-server-455tj\" (UID: \"ca0adcad-c9ae-4aaa-ada2-f9e0baa478a7\") " pod="openshift-machine-config-operator/machine-config-server-455tj" Dec 10 15:25:42 crc kubenswrapper[4755]: I1210 15:25:42.422586 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mqq47\" (UID: \"1189a5c2-6e43-4e4b-8181-d2bd78031673\") " pod="openshift-image-registry/image-registry-697d97f7c8-mqq47" Dec 10 15:25:42 crc kubenswrapper[4755]: E1210 15:25:42.423015 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:25:42.922999802 +0000 UTC m=+139.523883434 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mqq47" (UID: "1189a5c2-6e43-4e4b-8181-d2bd78031673") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:42 crc kubenswrapper[4755]: I1210 15:25:42.423157 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ca0adcad-c9ae-4aaa-ada2-f9e0baa478a7-node-bootstrap-token\") pod \"machine-config-server-455tj\" (UID: \"ca0adcad-c9ae-4aaa-ada2-f9e0baa478a7\") " pod="openshift-machine-config-operator/machine-config-server-455tj" Dec 10 15:25:42 crc kubenswrapper[4755]: I1210 15:25:42.429729 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pw2tq" Dec 10 15:25:42 crc kubenswrapper[4755]: W1210 15:25:42.452673 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd9b14b8_6d7c_4eeb_9748_a2e99daa4293.slice/crio-bfae3cfe6466a1c00c9cf72bfbfe8c3aaa577b5f579664b533607d0c79d974b0 WatchSource:0}: Error finding container bfae3cfe6466a1c00c9cf72bfbfe8c3aaa577b5f579664b533607d0c79d974b0: Status 404 returned error can't find the container with id bfae3cfe6466a1c00c9cf72bfbfe8c3aaa577b5f579664b533607d0c79d974b0 Dec 10 15:25:42 crc kubenswrapper[4755]: I1210 15:25:42.523219 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 15:25:42 crc kubenswrapper[4755]: E1210 15:25:42.523542 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 15:25:43.02352275 +0000 UTC m=+139.624406382 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:42 crc kubenswrapper[4755]: I1210 15:25:42.549450 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-m9ntd" event={"ID":"4e98c44e-5a60-49a0-9186-2367509dda97","Type":"ContainerStarted","Data":"a5f87a58a8fe5f3e98addc6e46e77d39a101ec826cfb7531df98eb4a0ce6d9a4"} Dec 10 15:25:42 crc kubenswrapper[4755]: I1210 15:25:42.550125 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-m9ntd" Dec 10 15:25:42 crc kubenswrapper[4755]: I1210 15:25:42.551931 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8vttb" event={"ID":"ee5f60d7-e2f5-4900-b238-e4ef9acf1de4","Type":"ContainerStarted","Data":"26bb836efa1ed217a16d2c97842f1ac1c60015c0b2379f5e8c5cbac745477992"} Dec 10 15:25:42 crc kubenswrapper[4755]: I1210 15:25:42.553146 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qgjc4" event={"ID":"c4a5cf0a-76df-4855-a52d-22dbc07e8f7a","Type":"ContainerStarted","Data":"4cee13d2194a54f46155356cd4a86a686b5bba0b23aca4a7aa1e8660e6bb4d08"} Dec 10 15:25:42 crc kubenswrapper[4755]: I1210 15:25:42.554777 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dhh5x" event={"ID":"3fd6e72e-c555-446b-ad32-bf71e8c1be54","Type":"ContainerStarted","Data":"b0292bf7c7c03b0ec2d64b16e03a0266bbb53188a8de3ff9661fba5c87439f3d"} Dec 10 15:25:42 crc kubenswrapper[4755]: I1210 15:25:42.556975 4755 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-m9ntd container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Dec 10 15:25:42 crc kubenswrapper[4755]: I1210 15:25:42.557015 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-m9ntd" podUID="4e98c44e-5a60-49a0-9186-2367509dda97" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" Dec 10 15:25:42 crc kubenswrapper[4755]: I1210 15:25:42.557686 4755 generic.go:334] "Generic (PLEG): container finished" podID="dd11fb82-1556-4769-a1cc-11589b905b3f" containerID="dc1c1339793625908a4478d1a991789295638bbbfa3e3e955105d1a97ad945fd" exitCode=0 Dec 10 15:25:42 crc kubenswrapper[4755]: I1210 15:25:42.557752 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-26mg6" event={"ID":"dd11fb82-1556-4769-a1cc-11589b905b3f","Type":"ContainerDied","Data":"dc1c1339793625908a4478d1a991789295638bbbfa3e3e955105d1a97ad945fd"} Dec 10 15:25:42 crc kubenswrapper[4755]: I1210 15:25:42.557777 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-26mg6" event={"ID":"dd11fb82-1556-4769-a1cc-11589b905b3f","Type":"ContainerStarted","Data":"27133b0920efbd8c3e679b13632ffeb2b91ad10fd86d1c79904943630eb81c5d"} Dec 10 15:25:42 crc kubenswrapper[4755]: I1210 15:25:42.568802 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jkj94" event={"ID":"d40dd21f-096a-4dca-a313-566508e33dd3","Type":"ContainerStarted","Data":"a3bf0a82a3f8bfb005633e80438bb14cc01b8f6e8e86a64acc8d73df644de1c9"} Dec 10 15:25:42 crc kubenswrapper[4755]: I1210 15:25:42.571231 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-k4nkz" event={"ID":"ed5ac3ea-71b9-41f5-b629-ec016b6ef3c7","Type":"ContainerStarted","Data":"5d2585c5d8c9e533cdf9f2c6c8aa3291de65c1711beeb0640e75a3c18a1b81df"} Dec 10 15:25:42 crc kubenswrapper[4755]: I1210 15:25:42.575426 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7svz5" event={"ID":"6d5f332b-9d6b-40c2-8e63-47aa309ea740","Type":"ContainerStarted","Data":"80c6b3ecef6700349dda4a294da0d47b5aeea4aeac079eb3d70f8d8abefe6abc"} Dec 10 15:25:42 crc kubenswrapper[4755]: I1210 15:25:42.577367 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-kc8qq" event={"ID":"0f76f7be-c8e4-4943-81f2-8e416e747aec","Type":"ContainerStarted","Data":"abf4747ef3c1c579237c526fed182b916816c51cae875167a3e91506592f9764"} Dec 10 15:25:42 crc kubenswrapper[4755]: I1210 15:25:42.578159 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-ts8wv" event={"ID":"dd9b14b8-6d7c-4eeb-9748-a2e99daa4293","Type":"ContainerStarted","Data":"bfae3cfe6466a1c00c9cf72bfbfe8c3aaa577b5f579664b533607d0c79d974b0"} Dec 10 15:25:42 crc kubenswrapper[4755]: I1210 15:25:42.578879 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4bgvx" event={"ID":"8c1052c9-fa99-4f24-8fff-923dc489c08d","Type":"ContainerStarted","Data":"14cff523b2edbd32269f33544f5bc8f5b8bc75cf641ecdf026bddaf0610bd47a"} Dec 10 15:25:42 crc kubenswrapper[4755]: I1210 15:25:42.581940 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g5hs8" event={"ID":"94689167-954d-4350-a1b8-e6125e32bd1f","Type":"ContainerStarted","Data":"f2e1e92f110073067b558e28ef39a9035e6ad9387cfa2b89d8af26a57b28b48d"} Dec 10 15:25:42 crc kubenswrapper[4755]: I1210 15:25:42.584602 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-gbvgh" event={"ID":"c6d1d322-1622-41d5-afb0-c441b346b8bf","Type":"ContainerStarted","Data":"2f331e44c10b9dc478e00c7fb5391d09d73661e0cf66c5f9c17a58534f7c300c"} Dec 10 15:25:42 crc kubenswrapper[4755]: I1210 15:25:42.585411 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-gbvgh" Dec 10 15:25:42 crc kubenswrapper[4755]: I1210 15:25:42.588354 4755 patch_prober.go:28] interesting pod/downloads-7954f5f757-gbvgh container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Dec 10 15:25:42 crc 
kubenswrapper[4755]: I1210 15:25:42.588407 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-gbvgh" podUID="c6d1d322-1622-41d5-afb0-c441b346b8bf" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Dec 10 15:25:42 crc kubenswrapper[4755]: I1210 15:25:42.589287 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sdrvv" event={"ID":"d91d7b48-9096-4f3e-8260-2d762173eb80","Type":"ContainerStarted","Data":"43df92eb392fe162f3d9851ca67904c5dec4af10b8dd263547afab26157ba077"} Dec 10 15:25:42 crc kubenswrapper[4755]: I1210 15:25:42.601210 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-68s5b" event={"ID":"d003362b-b35a-4f18-b387-62ec1490321a","Type":"ContainerStarted","Data":"907b67ab6c4433f397dc391b1d34ab828e085f57b1e9c6d88e357a0b1055ed12"} Dec 10 15:25:42 crc kubenswrapper[4755]: I1210 15:25:42.601243 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-68s5b" event={"ID":"d003362b-b35a-4f18-b387-62ec1490321a","Type":"ContainerStarted","Data":"e18c28b893b12cc47b56ec2d2c32057d98aa06e25009cb5c01633b1121381ba8"} Dec 10 15:25:42 crc kubenswrapper[4755]: I1210 15:25:42.607263 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2ldzj" event={"ID":"51e9f48f-0b01-4f2b-8d00-77fd91f6c3b0","Type":"ContainerStarted","Data":"1bd638779544a320aaa32e6fb58baed55d1317debf84d73f10ecf78e90a62d6a"} Dec 10 15:25:42 crc kubenswrapper[4755]: I1210 15:25:42.607310 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2ldzj" event={"ID":"51e9f48f-0b01-4f2b-8d00-77fd91f6c3b0","Type":"ContainerStarted","Data":"af6be7242b9d0d6a3190e6167f6a836dee7f9b052dda51fe110ea3798718f7be"} Dec 10 15:25:42 crc kubenswrapper[4755]: I1210 15:25:42.612039 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-n66x6" event={"ID":"228e9f52-aead-4cf5-af32-8b0b3aec8cf4","Type":"ContainerStarted","Data":"e2e6102bf733632716b6429ef068bd62439c167492810f9eb4fef12a3862f8a1"} Dec 10 15:25:42 crc kubenswrapper[4755]: I1210 15:25:42.615834 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4fdzm" event={"ID":"e45db753-a360-4857-9bd4-11f898ede4dc","Type":"ContainerStarted","Data":"93d81403b186695e829972b94cbbf8015b50e563a74c7730bc269293c05f4cb9"} Dec 10 15:25:42 crc kubenswrapper[4755]: I1210 15:25:42.624479 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mqq47\" (UID: \"1189a5c2-6e43-4e4b-8181-d2bd78031673\") " pod="openshift-image-registry/image-registry-697d97f7c8-mqq47" Dec 10 15:25:42 crc kubenswrapper[4755]: E1210 15:25:42.628197 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-10 15:25:43.128180704 +0000 UTC m=+139.729064336 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mqq47" (UID: "1189a5c2-6e43-4e4b-8181-d2bd78031673") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:42 crc kubenswrapper[4755]: I1210 15:25:42.629880 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xx85g" event={"ID":"e846eda6-431b-4fdf-98dc-80e3fc6b122f","Type":"ContainerStarted","Data":"2b211b217e1b6b09100e960252ff7cade291a271f95bd46eba0194ca33f063d4"} Dec 10 15:25:42 crc kubenswrapper[4755]: I1210 15:25:42.677761 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-n6qb5" event={"ID":"24e3bc3c-7e93-4c91-b0a2-85877004fafc","Type":"ContainerStarted","Data":"fa2868a1c2578591f381f7fed742fbb61db0c2e5b7be3cee10d36161a9a77338"} Dec 10 15:25:42 crc kubenswrapper[4755]: I1210 15:25:42.686198 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-455tj" Dec 10 15:25:42 crc kubenswrapper[4755]: I1210 15:25:42.700245 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6khhb" event={"ID":"7d2e3e9e-69f8-46f4-b825-ee369ab23de8","Type":"ContainerStarted","Data":"25cfde89615583aa054cdb7b30a8754a4a721c84d65d3b662bae19a612d6945f"} Dec 10 15:25:42 crc kubenswrapper[4755]: I1210 15:25:42.704460 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vnzg7" event={"ID":"22368f6c-56f1-4ef1-bada-fcad04f7b8a4","Type":"ContainerStarted","Data":"1234bff8d24cb803bd22d1ed250e418ff60e81091a3d37430e41765299a30cf0"} Dec 10 15:25:42 crc kubenswrapper[4755]: I1210 15:25:42.725979 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 15:25:42 crc kubenswrapper[4755]: E1210 15:25:42.726519 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 15:25:43.226497914 +0000 UTC m=+139.827381546 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:42 crc kubenswrapper[4755]: I1210 15:25:42.747535 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zcwtl" podStartSLOduration=119.747458329 podStartE2EDuration="1m59.747458329s" podCreationTimestamp="2025-12-10 15:23:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:25:42.719947455 +0000 UTC m=+139.320831087" watchObservedRunningTime="2025-12-10 15:25:42.747458329 +0000 UTC m=+139.348341961" Dec 10 15:25:42 crc kubenswrapper[4755]: I1210 15:25:42.827807 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mqq47\" (UID: \"1189a5c2-6e43-4e4b-8181-d2bd78031673\") " pod="openshift-image-registry/image-registry-697d97f7c8-mqq47" Dec 10 15:25:42 crc kubenswrapper[4755]: E1210 15:25:42.831593 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:25:43.33157214 +0000 UTC m=+139.932455832 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mqq47" (UID: "1189a5c2-6e43-4e4b-8181-d2bd78031673") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:42 crc kubenswrapper[4755]: I1210 15:25:42.942895 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 15:25:42 crc kubenswrapper[4755]: E1210 15:25:42.943337 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 15:25:43.443315389 +0000 UTC m=+140.044199021 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:42 crc kubenswrapper[4755]: I1210 15:25:42.959586 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-rbjcx"] Dec 10 15:25:43 crc kubenswrapper[4755]: I1210 15:25:43.024348 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29422995-knlb5"] Dec 10 15:25:43 crc kubenswrapper[4755]: I1210 15:25:43.046970 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mqq47\" (UID: \"1189a5c2-6e43-4e4b-8181-d2bd78031673\") " pod="openshift-image-registry/image-registry-697d97f7c8-mqq47" Dec 10 15:25:43 crc kubenswrapper[4755]: E1210 15:25:43.047284 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:25:43.547271446 +0000 UTC m=+140.148155078 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mqq47" (UID: "1189a5c2-6e43-4e4b-8181-d2bd78031673") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:43 crc kubenswrapper[4755]: I1210 15:25:43.134352 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-skm6x"] Dec 10 15:25:43 crc kubenswrapper[4755]: I1210 15:25:43.155144 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 15:25:43 crc kubenswrapper[4755]: E1210 15:25:43.155555 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 15:25:43.655523734 +0000 UTC m=+140.256407366 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:43 crc kubenswrapper[4755]: I1210 15:25:43.210509 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gvpnh" podStartSLOduration=120.210483879 podStartE2EDuration="2m0.210483879s" podCreationTimestamp="2025-12-10 15:23:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:25:43.209272608 +0000 UTC m=+139.810156240" watchObservedRunningTime="2025-12-10 15:25:43.210483879 +0000 UTC m=+139.811367511" Dec 10 15:25:43 crc kubenswrapper[4755]: I1210 15:25:43.256311 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mqq47\" (UID: \"1189a5c2-6e43-4e4b-8181-d2bd78031673\") " pod="openshift-image-registry/image-registry-697d97f7c8-mqq47" Dec 10 15:25:43 crc kubenswrapper[4755]: E1210 15:25:43.257093 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:25:43.757077568 +0000 UTC m=+140.357961200 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mqq47" (UID: "1189a5c2-6e43-4e4b-8181-d2bd78031673") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:43 crc kubenswrapper[4755]: I1210 15:25:43.344781 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-sxtxj" podStartSLOduration=120.344754622 podStartE2EDuration="2m0.344754622s" podCreationTimestamp="2025-12-10 15:23:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:25:43.343977462 +0000 UTC m=+139.944861124" watchObservedRunningTime="2025-12-10 15:25:43.344754622 +0000 UTC m=+139.945638254" Dec 10 15:25:43 crc kubenswrapper[4755]: I1210 15:25:43.358655 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 15:25:43 crc kubenswrapper[4755]: E1210 15:25:43.359141 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 15:25:43.859123035 +0000 UTC m=+140.460006667 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:43 crc kubenswrapper[4755]: I1210 15:25:43.462594 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mqq47\" (UID: \"1189a5c2-6e43-4e4b-8181-d2bd78031673\") " pod="openshift-image-registry/image-registry-697d97f7c8-mqq47" Dec 10 15:25:43 crc kubenswrapper[4755]: E1210 15:25:43.464065 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:25:43.964049407 +0000 UTC m=+140.564933039 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mqq47" (UID: "1189a5c2-6e43-4e4b-8181-d2bd78031673") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:43 crc kubenswrapper[4755]: I1210 15:25:43.471536 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-sw7vc"] Dec 10 15:25:43 crc kubenswrapper[4755]: W1210 15:25:43.559358 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d212083_766e_4b88_ba08_8570f05f6c94.slice/crio-eee42504cbad9db430318ba88a34f0efb4b388e5aab6832ae2663c9a51a1febd WatchSource:0}: Error finding container eee42504cbad9db430318ba88a34f0efb4b388e5aab6832ae2663c9a51a1febd: Status 404 returned error can't find the container with id eee42504cbad9db430318ba88a34f0efb4b388e5aab6832ae2663c9a51a1febd Dec 10 15:25:43 crc kubenswrapper[4755]: I1210 15:25:43.563337 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 15:25:43 crc kubenswrapper[4755]: E1210 15:25:43.563660 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 15:25:44.06363341 +0000 UTC m=+140.664517092 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:43 crc kubenswrapper[4755]: I1210 15:25:43.563709 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mqq47\" (UID: \"1189a5c2-6e43-4e4b-8181-d2bd78031673\") " pod="openshift-image-registry/image-registry-697d97f7c8-mqq47" Dec 10 15:25:43 crc kubenswrapper[4755]: E1210 15:25:43.564055 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:25:44.06402613 +0000 UTC m=+140.664909762 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mqq47" (UID: "1189a5c2-6e43-4e4b-8181-d2bd78031673") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:43 crc kubenswrapper[4755]: I1210 15:25:43.673455 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 15:25:43 crc kubenswrapper[4755]: E1210 15:25:43.674179 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 15:25:44.174151557 +0000 UTC m=+140.775035179 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:43 crc kubenswrapper[4755]: I1210 15:25:43.701492 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-f47gb" podStartSLOduration=120.701447605 podStartE2EDuration="2m0.701447605s" podCreationTimestamp="2025-12-10 15:23:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:25:43.676311393 +0000 UTC m=+140.277195045" watchObservedRunningTime="2025-12-10 15:25:43.701447605 +0000 UTC m=+140.302331237" Dec 10 15:25:43 crc kubenswrapper[4755]: I1210 15:25:43.704817 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-j46lk"] Dec 10 15:25:43 crc kubenswrapper[4755]: I1210 15:25:43.775327 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mqq47\" (UID: \"1189a5c2-6e43-4e4b-8181-d2bd78031673\") " pod="openshift-image-registry/image-registry-697d97f7c8-mqq47" Dec 10 15:25:43 crc kubenswrapper[4755]: E1210 15:25:43.778955 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:25:44.278936015 +0000 UTC m=+140.879819647 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mqq47" (UID: "1189a5c2-6e43-4e4b-8181-d2bd78031673") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:43 crc kubenswrapper[4755]: I1210 15:25:43.871873 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qgjc4" event={"ID":"c4a5cf0a-76df-4855-a52d-22dbc07e8f7a","Type":"ContainerStarted","Data":"7ca6dbf38ee81d1314d9c780f6d2e7a1b0e34c410a5758e714e9e5441c841569"} Dec 10 15:25:43 crc kubenswrapper[4755]: I1210 15:25:43.879289 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 15:25:43 crc kubenswrapper[4755]: E1210 15:25:43.879882 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 15:25:44.379857803 +0000 UTC m=+140.980741425 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:43 crc kubenswrapper[4755]: I1210 15:25:43.909619 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-gbvgh" podStartSLOduration=120.909587735 podStartE2EDuration="2m0.909587735s" podCreationTimestamp="2025-12-10 15:23:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:25:43.825239957 +0000 UTC m=+140.426123619" watchObservedRunningTime="2025-12-10 15:25:43.909587735 +0000 UTC m=+140.510471367" Dec 10 15:25:43 crc kubenswrapper[4755]: I1210 15:25:43.950988 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-k4nkz" event={"ID":"ed5ac3ea-71b9-41f5-b629-ec016b6ef3c7","Type":"ContainerStarted","Data":"6701f0850f0535d2cf920274a56ae9a17bb186c5fb64391d6873150c2e87a31a"} Dec 10 15:25:43 crc kubenswrapper[4755]: I1210 15:25:43.981407 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7r8q8"] Dec 10 15:25:43 crc kubenswrapper[4755]: E1210 15:25:43.990413 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:25:44.490389841 +0000 UTC m=+141.091273643 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mqq47" (UID: "1189a5c2-6e43-4e4b-8181-d2bd78031673") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:44 crc kubenswrapper[4755]: I1210 15:25:43.984350 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mqq47\" (UID: \"1189a5c2-6e43-4e4b-8181-d2bd78031673\") " pod="openshift-image-registry/image-registry-697d97f7c8-mqq47" Dec 10 15:25:44 crc kubenswrapper[4755]: I1210 15:25:44.007779 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-n66x6" event={"ID":"228e9f52-aead-4cf5-af32-8b0b3aec8cf4","Type":"ContainerStarted","Data":"b42a024e51f07597f5a07dc193f8323cca3ed45b409685c4864342bae0f7ec4f"} Dec 10 15:25:44 crc kubenswrapper[4755]: I1210 15:25:44.009913 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-vhv9h"] Dec 10 15:25:44 crc kubenswrapper[4755]: I1210 15:25:44.011815 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5bvrp"] Dec 10 15:25:44 crc kubenswrapper[4755]: I1210 15:25:44.040541 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-6hpgg"] Dec 10 15:25:44 crc kubenswrapper[4755]: I1210 15:25:44.054415 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-m9ntd" podStartSLOduration=121.05437688 podStartE2EDuration="2m1.05437688s" podCreationTimestamp="2025-12-10 15:23:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:25:43.976709696 +0000 UTC m=+140.577593328" watchObservedRunningTime="2025-12-10 15:25:44.05437688 +0000 UTC m=+140.655260522" Dec 10 15:25:44 crc kubenswrapper[4755]: I1210 15:25:44.055005 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4fdzm" event={"ID":"e45db753-a360-4857-9bd4-11f898ede4dc","Type":"ContainerStarted","Data":"e0bf20a84cc50c82f98f7337cd578a9e9a3ff846a10a1f84b6c60becb30781fe"} Dec 10 15:25:44 crc kubenswrapper[4755]: W1210 15:25:44.062259 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod27f3b602_6196_4cbc_bf90_e695403d20c7.slice/crio-aa79ae7bd4ff27196bcdf73c5cec12480b7a3080b4c53cb44943cbc76ac0a866 WatchSource:0}: Error finding container aa79ae7bd4ff27196bcdf73c5cec12480b7a3080b4c53cb44943cbc76ac0a866: Status 404 returned error can't find the container with id aa79ae7bd4ff27196bcdf73c5cec12480b7a3080b4c53cb44943cbc76ac0a866 Dec 10 15:25:44 crc kubenswrapper[4755]: I1210 15:25:44.074313 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-wl9hp"] Dec 10 15:25:44 crc kubenswrapper[4755]: I1210 15:25:44.100005 4755 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 15:25:44 crc kubenswrapper[4755]: E1210 15:25:44.100901 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 15:25:44.600887137 +0000 UTC m=+141.201770769 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:44 crc kubenswrapper[4755]: I1210 15:25:44.107410 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-pw2tq"] Dec 10 15:25:44 crc kubenswrapper[4755]: I1210 15:25:44.123841 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cvrkm"] Dec 10 15:25:44 crc kubenswrapper[4755]: I1210 15:25:44.126933 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6khhb" podStartSLOduration=120.126896211 podStartE2EDuration="2m0.126896211s" podCreationTimestamp="2025-12-10 15:23:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:25:44.062389638 +0000 UTC m=+140.663273270" watchObservedRunningTime="2025-12-10 15:25:44.126896211 +0000 UTC m=+140.727779843" Dec 10 15:25:44 crc kubenswrapper[4755]: I1210 15:25:44.128071 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-68s5b" podStartSLOduration=121.128059472 podStartE2EDuration="2m1.128059472s" podCreationTimestamp="2025-12-10 15:23:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:25:44.090094457 +0000 UTC m=+140.690978089" watchObservedRunningTime="2025-12-10 15:25:44.128059472 +0000 UTC m=+140.728943104" Dec 10 15:25:44 crc kubenswrapper[4755]: I1210 15:25:44.169320 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jkj94" event={"ID":"d40dd21f-096a-4dca-a313-566508e33dd3","Type":"ContainerStarted","Data":"a95d97b5d0b09523df92187c773b075de853dcaea2d7274a18c2bc6cceb82a8d"} Dec 10 15:25:44 crc kubenswrapper[4755]: W1210 15:25:44.171151 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee6ce5dd_6c0d_4acd_b795_6cb930770bec.slice/crio-a64bf214846430f38572c030e379b3878579cffaa85332fc26e0cc7636d08a78 WatchSource:0}: Error finding container a64bf214846430f38572c030e379b3878579cffaa85332fc26e0cc7636d08a78: Status 404 returned error can't find the container 
with id a64bf214846430f38572c030e379b3878579cffaa85332fc26e0cc7636d08a78 Dec 10 15:25:44 crc kubenswrapper[4755]: I1210 15:25:44.206252 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mqq47\" (UID: \"1189a5c2-6e43-4e4b-8181-d2bd78031673\") " pod="openshift-image-registry/image-registry-697d97f7c8-mqq47" Dec 10 15:25:44 crc kubenswrapper[4755]: I1210 15:25:44.210990 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-plc7w"] Dec 10 15:25:44 crc kubenswrapper[4755]: I1210 15:25:44.243913 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29422995-knlb5" event={"ID":"8d212083-766e-4b88-ba08-8570f05f6c94","Type":"ContainerStarted","Data":"eee42504cbad9db430318ba88a34f0efb4b388e5aab6832ae2663c9a51a1febd"} Dec 10 15:25:44 crc kubenswrapper[4755]: I1210 15:25:44.265520 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dhh5x" event={"ID":"3fd6e72e-c555-446b-ad32-bf71e8c1be54","Type":"ContainerStarted","Data":"9d0ea5aa00b9d438107ae0c96513d2e08506a1eef909021245a5885012e50106"} Dec 10 15:25:44 crc kubenswrapper[4755]: E1210 15:25:44.270734 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:25:44.770660111 +0000 UTC m=+141.371543743 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mqq47" (UID: "1189a5c2-6e43-4e4b-8181-d2bd78031673") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:44 crc kubenswrapper[4755]: I1210 15:25:44.276654 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-b4kk7"] Dec 10 15:25:44 crc kubenswrapper[4755]: I1210 15:25:44.283095 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-skm6x" event={"ID":"011a7c6a-048f-4db1-ac1f-259b44dd28bc","Type":"ContainerStarted","Data":"0c91cb3cce270939edf4b0b85d5a0c053126b21633d10fe662d95eb9ba7aba16"} Dec 10 15:25:44 crc kubenswrapper[4755]: I1210 15:25:44.297733 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qgjc4" podStartSLOduration=120.297703192 podStartE2EDuration="2m0.297703192s" podCreationTimestamp="2025-12-10 15:23:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:25:44.252933491 +0000 UTC m=+140.853817123" watchObservedRunningTime="2025-12-10 15:25:44.297703192 +0000 UTC m=+140.898586824" Dec 10 15:25:44 crc kubenswrapper[4755]: I1210 15:25:44.328500 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 15:25:44 crc kubenswrapper[4755]: E1210 15:25:44.329076 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 15:25:44.829054675 +0000 UTC m=+141.429938307 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:44 crc kubenswrapper[4755]: I1210 15:25:44.332166 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-sw7vc" event={"ID":"460a4f34-c415-4a61-8877-7ad9d851c0e5","Type":"ContainerStarted","Data":"190613b04f2c935781b5fc36a26d052a12a5fe02e674a3256b8e3e4d960bc3f7"} Dec 10 15:25:44 crc kubenswrapper[4755]: I1210 15:25:44.350930 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-rbjcx" event={"ID":"0470027c-6ca8-4404-a366-997bf288e1d0","Type":"ContainerStarted","Data":"4e0ec33e6ea2c43176620985d120df9057ba42d70b2467d5e1ddcd5aaf89f0b2"} Dec 10 15:25:44 crc kubenswrapper[4755]: I1210 15:25:44.357355 4755 patch_prober.go:28] interesting pod/downloads-7954f5f757-gbvgh container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Dec 10 15:25:44 crc kubenswrapper[4755]: I1210 15:25:44.357575 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-gbvgh" podUID="c6d1d322-1622-41d5-afb0-c441b346b8bf" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Dec 10 15:25:44 crc kubenswrapper[4755]: I1210 15:25:44.378389 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-m9ntd" Dec 10 15:25:44 crc kubenswrapper[4755]: I1210 15:25:44.430110 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mqq47\" (UID: \"1189a5c2-6e43-4e4b-8181-d2bd78031673\") " pod="openshift-image-registry/image-registry-697d97f7c8-mqq47" Dec 10 15:25:44 crc kubenswrapper[4755]: E1210 15:25:44.438236 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:25:44.938216207 +0000 UTC m=+141.539099839 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mqq47" (UID: "1189a5c2-6e43-4e4b-8181-d2bd78031673") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:44 crc kubenswrapper[4755]: I1210 15:25:44.478699 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4fdzm" podStartSLOduration=121.478671657 podStartE2EDuration="2m1.478671657s" podCreationTimestamp="2025-12-10 15:23:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:25:44.467856176 +0000 UTC m=+141.068739798" watchObservedRunningTime="2025-12-10 15:25:44.478671657 +0000 UTC m=+141.079555299" Dec 10 15:25:44 crc kubenswrapper[4755]: I1210 15:25:44.532945 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 15:25:44 crc kubenswrapper[4755]: E1210 15:25:44.533440 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 15:25:45.033424287 +0000 UTC m=+141.634307929 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:44 crc kubenswrapper[4755]: I1210 15:25:44.584605 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-k4nkz" podStartSLOduration=121.584587975 podStartE2EDuration="2m1.584587975s" podCreationTimestamp="2025-12-10 15:23:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:25:44.584058841 +0000 UTC m=+141.184942493" watchObservedRunningTime="2025-12-10 15:25:44.584587975 +0000 UTC m=+141.185471607" Dec 10 15:25:44 crc kubenswrapper[4755]: I1210 15:25:44.636174 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mqq47\" (UID: \"1189a5c2-6e43-4e4b-8181-d2bd78031673\") " pod="openshift-image-registry/image-registry-697d97f7c8-mqq47" Dec 10 15:25:44 crc kubenswrapper[4755]: E1210 15:25:44.636518 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:25:45.136505191 +0000 UTC m=+141.737388823 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mqq47" (UID: "1189a5c2-6e43-4e4b-8181-d2bd78031673") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:44 crc kubenswrapper[4755]: I1210 15:25:44.650380 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g5hs8" podStartSLOduration=121.65036258 podStartE2EDuration="2m1.65036258s" podCreationTimestamp="2025-12-10 15:23:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:25:44.647858575 +0000 UTC m=+141.248742207" watchObservedRunningTime="2025-12-10 15:25:44.65036258 +0000 UTC m=+141.251246212" Dec 10 15:25:44 crc kubenswrapper[4755]: I1210 15:25:44.708130 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2ldzj" podStartSLOduration=120.708108489 podStartE2EDuration="2m0.708108489s" podCreationTimestamp="2025-12-10 15:23:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:25:44.705942722 +0000 UTC m=+141.306826364" watchObservedRunningTime="2025-12-10 15:25:44.708108489 +0000 UTC m=+141.308992121" Dec 10 15:25:44 crc kubenswrapper[4755]: I1210 15:25:44.732280 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jkj94" podStartSLOduration=120.732258544 podStartE2EDuration="2m0.732258544s" podCreationTimestamp="2025-12-10 15:23:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:25:44.730024987 +0000 UTC m=+141.330908629" watchObservedRunningTime="2025-12-10 15:25:44.732258544 +0000 UTC m=+141.333142186" Dec 10 15:25:44 crc kubenswrapper[4755]: I1210 15:25:44.736996 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 15:25:44 crc kubenswrapper[4755]: E1210 15:25:44.737176 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 15:25:45.237160592 +0000 UTC m=+141.838044224 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:44 crc kubenswrapper[4755]: I1210 15:25:44.737250 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mqq47\" (UID: \"1189a5c2-6e43-4e4b-8181-d2bd78031673\") " pod="openshift-image-registry/image-registry-697d97f7c8-mqq47" Dec 10 15:25:44 crc kubenswrapper[4755]: E1210 15:25:44.737635 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:25:45.237616883 +0000 UTC m=+141.838500515 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mqq47" (UID: "1189a5c2-6e43-4e4b-8181-d2bd78031673") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:44 crc kubenswrapper[4755]: I1210 15:25:44.838084 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 15:25:44 crc kubenswrapper[4755]: E1210 15:25:44.838523 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 15:25:45.338506021 +0000 UTC m=+141.939389653 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:44 crc kubenswrapper[4755]: I1210 15:25:44.939334 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mqq47\" (UID: \"1189a5c2-6e43-4e4b-8181-d2bd78031673\") " pod="openshift-image-registry/image-registry-697d97f7c8-mqq47" Dec 10 15:25:44 crc kubenswrapper[4755]: E1210 15:25:44.939795 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:25:45.439768627 +0000 UTC m=+142.040652259 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mqq47" (UID: "1189a5c2-6e43-4e4b-8181-d2bd78031673") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:45 crc kubenswrapper[4755]: I1210 15:25:45.042272 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 15:25:45 crc kubenswrapper[4755]: E1210 15:25:45.042564 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 15:25:45.542533153 +0000 UTC m=+142.143416795 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:45 crc kubenswrapper[4755]: I1210 15:25:45.042702 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mqq47\" (UID: \"1189a5c2-6e43-4e4b-8181-d2bd78031673\") " pod="openshift-image-registry/image-registry-697d97f7c8-mqq47" Dec 10 15:25:45 crc kubenswrapper[4755]: E1210 15:25:45.058336 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:25:45.558309563 +0000 UTC m=+142.159193195 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mqq47" (UID: "1189a5c2-6e43-4e4b-8181-d2bd78031673") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:45 crc kubenswrapper[4755]: I1210 15:25:45.065905 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6khhb" Dec 10 15:25:45 crc kubenswrapper[4755]: I1210 15:25:45.066579 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6khhb" Dec 10 15:25:45 crc kubenswrapper[4755]: I1210 15:25:45.085352 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6khhb" Dec 10 15:25:45 crc kubenswrapper[4755]: I1210 15:25:45.149206 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 15:25:45 crc kubenswrapper[4755]: E1210 15:25:45.149770 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 15:25:45.649748094 +0000 UTC m=+142.250631726 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:45 crc kubenswrapper[4755]: I1210 15:25:45.149895 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mqq47\" (UID: \"1189a5c2-6e43-4e4b-8181-d2bd78031673\") " pod="openshift-image-registry/image-registry-697d97f7c8-mqq47" Dec 10 15:25:45 crc kubenswrapper[4755]: E1210 15:25:45.150229 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:25:45.650223177 +0000 UTC m=+142.251106809 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mqq47" (UID: "1189a5c2-6e43-4e4b-8181-d2bd78031673") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:45 crc kubenswrapper[4755]: I1210 15:25:45.253535 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 15:25:45 crc kubenswrapper[4755]: E1210 15:25:45.253782 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 15:25:45.753745083 +0000 UTC m=+142.354628725 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:45 crc kubenswrapper[4755]: I1210 15:25:45.253957 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mqq47\" (UID: \"1189a5c2-6e43-4e4b-8181-d2bd78031673\") " pod="openshift-image-registry/image-registry-697d97f7c8-mqq47" Dec 10 15:25:45 crc kubenswrapper[4755]: E1210 15:25:45.254591 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:25:45.754579694 +0000 UTC m=+142.355463476 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mqq47" (UID: "1189a5c2-6e43-4e4b-8181-d2bd78031673") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:45 crc kubenswrapper[4755]: I1210 15:25:45.354844 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 15:25:45 crc kubenswrapper[4755]: E1210 15:25:45.355033 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 15:25:45.855004259 +0000 UTC m=+142.455887891 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:45 crc kubenswrapper[4755]: I1210 15:25:45.355172 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mqq47\" (UID: \"1189a5c2-6e43-4e4b-8181-d2bd78031673\") " pod="openshift-image-registry/image-registry-697d97f7c8-mqq47" Dec 10 15:25:45 crc kubenswrapper[4755]: E1210 15:25:45.355539 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:25:45.855531753 +0000 UTC m=+142.456415385 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mqq47" (UID: "1189a5c2-6e43-4e4b-8181-d2bd78031673") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:45 crc kubenswrapper[4755]: I1210 15:25:45.390108 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-26mg6" event={"ID":"dd11fb82-1556-4769-a1cc-11589b905b3f","Type":"ContainerStarted","Data":"88c952398cabf2752aecc4c9ffbe47bb56467f100a97a9f0d823f532ebedc717"} Dec 10 15:25:45 crc kubenswrapper[4755]: I1210 15:25:45.395762 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-ts8wv" event={"ID":"dd9b14b8-6d7c-4eeb-9748-a2e99daa4293","Type":"ContainerStarted","Data":"beb16a3dbd0427858a139489021fa8c67ad3f6e22dde0312b9a13fb3c29a55ee"} Dec 10 15:25:45 crc kubenswrapper[4755]: I1210 15:25:45.398613 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sdrvv" event={"ID":"d91d7b48-9096-4f3e-8260-2d762173eb80","Type":"ContainerStarted","Data":"28aa42a6e19ab203de3cf670d5cd1dedcfe167a26ff2186e89752effcf01de24"} Dec 10 15:25:45 crc kubenswrapper[4755]: I1210 15:25:45.401103 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-455tj" event={"ID":"ca0adcad-c9ae-4aaa-ada2-f9e0baa478a7","Type":"ContainerStarted","Data":"bf3454f6fa6bce49f2f0fd430064378a0aa1859052d08996f46a3cd2a6f638fe"} Dec 10 15:25:45 crc kubenswrapper[4755]: I1210 15:25:45.402088 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g5hs8" event={"ID":"94689167-954d-4350-a1b8-e6125e32bd1f","Type":"ContainerStarted","Data":"579b858405616280fce12f838541fce3b49a566fbe34e2152a6413ef669c00d6"} Dec 10 15:25:45 crc kubenswrapper[4755]: I1210 15:25:45.404655 4755 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-plc7w" event={"ID":"5ac506be-249f-4065-84b6-26fb51e47790","Type":"ContainerStarted","Data":"d88d1b2eb8435b10472342a9de9537d43b799931b834f61db0a6b946d0f8d66f"} Dec 10 15:25:45 crc kubenswrapper[4755]: I1210 15:25:45.407254 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-n6qb5" event={"ID":"24e3bc3c-7e93-4c91-b0a2-85877004fafc","Type":"ContainerStarted","Data":"779c415b86b45e88f4ad32b99d19ddf0ebcab03f99eeaf29c20ff0b22c36e94a"} Dec 10 15:25:45 crc kubenswrapper[4755]: I1210 15:25:45.414956 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-b4kk7" event={"ID":"96d9c114-e67a-43a5-b081-4a1de76cc870","Type":"ContainerStarted","Data":"a0c81448bd1110feef8be8859a0fbf183bf4b7130edd7efa334465e5e6f772da"} Dec 10 15:25:45 crc kubenswrapper[4755]: I1210 15:25:45.416036 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5bvrp" event={"ID":"64e1b92e-9035-4439-abdc-86205e68c591","Type":"ContainerStarted","Data":"d716721530d9e5cf0f4402f8434d6dd7b2cf3580fd677de9f4555a2ae693d323"} Dec 10 15:25:45 crc kubenswrapper[4755]: I1210 15:25:45.417250 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dhh5x" event={"ID":"3fd6e72e-c555-446b-ad32-bf71e8c1be54","Type":"ContainerStarted","Data":"266b8c6e8e31bc923078558bf98ac7dad2d6fb94117ddffd8267c6aba5ea51d7"} Dec 10 15:25:45 crc kubenswrapper[4755]: I1210 15:25:45.418174 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dhh5x" Dec 10 15:25:45 crc kubenswrapper[4755]: I1210 15:25:45.419099 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-vhv9h" event={"ID":"e0d76e44-8053-4afa-b974-bb3f945c9c23","Type":"ContainerStarted","Data":"d494a4e019111e8703b5adc9fdee2fb8906c0ba84bd152e84fc2b465048c627a"} Dec 10 15:25:45 crc kubenswrapper[4755]: I1210 15:25:45.421603 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-n66x6" event={"ID":"228e9f52-aead-4cf5-af32-8b0b3aec8cf4","Type":"ContainerStarted","Data":"4bc0abfc48bd32911930e00a41f0d7935b1fb1ae0c16344f2f93b831759abe76"} Dec 10 15:25:45 crc kubenswrapper[4755]: I1210 15:25:45.424949 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-pw2tq" event={"ID":"441cfc68-29a3-4b18-9b34-cf3bc58107cc","Type":"ContainerStarted","Data":"c237ab8a3217327ac8cd4a8768d40ed493a04fbb473e4acc5ac7f768ac8b6e3b"} Dec 10 15:25:45 crc kubenswrapper[4755]: I1210 15:25:45.426290 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-j46lk" event={"ID":"27f3b602-6196-4cbc-bf90-e695403d20c7","Type":"ContainerStarted","Data":"aa79ae7bd4ff27196bcdf73c5cec12480b7a3080b4c53cb44943cbc76ac0a866"} Dec 10 15:25:45 crc kubenswrapper[4755]: I1210 15:25:45.427779 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7r8q8" event={"ID":"6f034c10-25fc-403c-9a49-58cb5a182222","Type":"ContainerStarted","Data":"20cb50c2f14e66ebe33f88360f6008843d8f77ecbfa9bbeefa118e8b55b22afc"} Dec 10 15:25:45 crc kubenswrapper[4755]: I1210 15:25:45.441547 4755 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cvrkm" event={"ID":"f516b039-30bd-47f7-b6fa-48966012efe2","Type":"ContainerStarted","Data":"5ca4cfab6c7e2034bcb1ae51fb372f4888f1476f63353ddacae7d70a448644ba"} Dec 10 15:25:45 crc kubenswrapper[4755]: I1210 15:25:45.443138 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8vttb" event={"ID":"ee5f60d7-e2f5-4900-b238-e4ef9acf1de4","Type":"ContainerStarted","Data":"e69a2edb327fa2e81fc8cd915f2ea6ff1c157b9860f36f3a76044083fb1c643d"} Dec 10 15:25:45 crc kubenswrapper[4755]: I1210 15:25:45.444885 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-wl9hp" event={"ID":"30281ea2-4ba0-44c5-981d-8961073236a8","Type":"ContainerStarted","Data":"5234e94b117f2a13954dc0e4a42b4b47e593cc67ce21cd587bd1d6205262e6c3"} Dec 10 15:25:45 crc kubenswrapper[4755]: I1210 15:25:45.446625 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2ldzj" event={"ID":"51e9f48f-0b01-4f2b-8d00-77fd91f6c3b0","Type":"ContainerStarted","Data":"c527efd42baa0be0d1685e08e09339e670f1724786f8ed10f1a1208e0f25b50e"} Dec 10 15:25:45 crc kubenswrapper[4755]: I1210 15:25:45.448147 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-6hpgg" event={"ID":"ee6ce5dd-6c0d-4acd-b795-6cb930770bec","Type":"ContainerStarted","Data":"a64bf214846430f38572c030e379b3878579cffaa85332fc26e0cc7636d08a78"} Dec 10 15:25:45 crc kubenswrapper[4755]: I1210 15:25:45.449311 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4bgvx" event={"ID":"8c1052c9-fa99-4f24-8fff-923dc489c08d","Type":"ContainerStarted","Data":"66de0d05dd090c473da341567845c0e3f2768051c7b48d2558f5c9727dba8053"} Dec 10 15:25:45 crc kubenswrapper[4755]: I1210 15:25:45.450259 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4bgvx" Dec 10 15:25:45 crc kubenswrapper[4755]: I1210 15:25:45.454799 4755 generic.go:334] "Generic (PLEG): container finished" podID="e846eda6-431b-4fdf-98dc-80e3fc6b122f" containerID="1145aff509a10c95f35fdcba93808eb2cb58c2d1bd9d5661faadd63972dac885" exitCode=0 Dec 10 15:25:45 crc kubenswrapper[4755]: I1210 15:25:45.454949 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xx85g" event={"ID":"e846eda6-431b-4fdf-98dc-80e3fc6b122f","Type":"ContainerDied","Data":"1145aff509a10c95f35fdcba93808eb2cb58c2d1bd9d5661faadd63972dac885"} Dec 10 15:25:45 crc kubenswrapper[4755]: I1210 15:25:45.455812 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 15:25:45 crc kubenswrapper[4755]: E1210 15:25:45.456076 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-10 15:25:45.956061431 +0000 UTC m=+142.556945063 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:45 crc kubenswrapper[4755]: I1210 15:25:45.456305 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mqq47\" (UID: \"1189a5c2-6e43-4e4b-8181-d2bd78031673\") " pod="openshift-image-registry/image-registry-697d97f7c8-mqq47" Dec 10 15:25:45 crc kubenswrapper[4755]: I1210 15:25:45.456617 4755 patch_prober.go:28] interesting pod/downloads-7954f5f757-gbvgh container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Dec 10 15:25:45 crc kubenswrapper[4755]: I1210 15:25:45.456672 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-gbvgh" podUID="c6d1d322-1622-41d5-afb0-c441b346b8bf" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Dec 10 15:25:45 crc kubenswrapper[4755]: E1210 15:25:45.458408 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:25:45.958394081 +0000 UTC m=+142.559277793 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mqq47" (UID: "1189a5c2-6e43-4e4b-8181-d2bd78031673") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:45 crc kubenswrapper[4755]: I1210 15:25:45.467366 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6khhb" Dec 10 15:25:45 crc kubenswrapper[4755]: I1210 15:25:45.557362 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 15:25:45 crc kubenswrapper[4755]: E1210 15:25:45.561535 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 15:25:46.061511595 +0000 UTC m=+142.662395277 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:45 crc kubenswrapper[4755]: I1210 15:25:45.614620 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-ts8wv" podStartSLOduration=122.614587792 podStartE2EDuration="2m2.614587792s" podCreationTimestamp="2025-12-10 15:23:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:25:45.598840894 +0000 UTC m=+142.199724526" watchObservedRunningTime="2025-12-10 15:25:45.614587792 +0000 UTC m=+142.215471424" Dec 10 15:25:45 crc kubenswrapper[4755]: I1210 15:25:45.659812 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mqq47\" (UID: \"1189a5c2-6e43-4e4b-8181-d2bd78031673\") " pod="openshift-image-registry/image-registry-697d97f7c8-mqq47" Dec 10 15:25:45 crc kubenswrapper[4755]: E1210 15:25:45.660221 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:25:46.160205446 +0000 UTC m=+142.761089088 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mqq47" (UID: "1189a5c2-6e43-4e4b-8181-d2bd78031673") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:45 crc kubenswrapper[4755]: I1210 15:25:45.704742 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sdrvv" podStartSLOduration=121.704727181 podStartE2EDuration="2m1.704727181s" podCreationTimestamp="2025-12-10 15:23:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:25:45.704396692 +0000 UTC m=+142.305280324" watchObservedRunningTime="2025-12-10 15:25:45.704727181 +0000 UTC m=+142.305610813" Dec 10 15:25:45 crc kubenswrapper[4755]: I1210 15:25:45.743844 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-ts8wv" Dec 10 15:25:45 crc kubenswrapper[4755]: I1210 15:25:45.759347 4755 patch_prober.go:28] interesting pod/router-default-5444994796-ts8wv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 10 15:25:45 crc kubenswrapper[4755]: [-]has-synced failed: reason withheld Dec 10 15:25:45 crc kubenswrapper[4755]: [+]process-running ok Dec 10 15:25:45 crc kubenswrapper[4755]: healthz check failed Dec 10 15:25:45 crc kubenswrapper[4755]: I1210 15:25:45.759412 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ts8wv" podUID="dd9b14b8-6d7c-4eeb-9748-a2e99daa4293" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 10 15:25:45 crc kubenswrapper[4755]: I1210 15:25:45.761154 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 15:25:45 crc kubenswrapper[4755]: E1210 15:25:45.761685 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 15:25:46.261664708 +0000 UTC m=+142.862548340 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:45 crc kubenswrapper[4755]: I1210 15:25:45.804237 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8vttb" podStartSLOduration=122.804204581 podStartE2EDuration="2m2.804204581s" podCreationTimestamp="2025-12-10 15:23:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:25:45.793873973 +0000 UTC m=+142.394757615" watchObservedRunningTime="2025-12-10 15:25:45.804204581 +0000 UTC m=+142.405088263" Dec 10 15:25:45 crc kubenswrapper[4755]: I1210 15:25:45.806976 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4bgvx" Dec 10 15:25:45 crc kubenswrapper[4755]: I1210 15:25:45.865547 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mqq47\" (UID: \"1189a5c2-6e43-4e4b-8181-d2bd78031673\") " pod="openshift-image-registry/image-registry-697d97f7c8-mqq47" Dec 10 15:25:45 crc kubenswrapper[4755]: E1210 15:25:45.866310 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:25:46.366293742 +0000 UTC m=+142.967177374 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mqq47" (UID: "1189a5c2-6e43-4e4b-8181-d2bd78031673") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:45 crc kubenswrapper[4755]: I1210 15:25:45.927340 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" Dec 10 15:25:45 crc kubenswrapper[4755]: I1210 15:25:45.967749 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 15:25:45 crc kubenswrapper[4755]: E1210 15:25:45.968552 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 15:25:46.468526314 +0000 UTC m=+143.069409976 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:46 crc kubenswrapper[4755]: I1210 15:25:46.020968 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4bgvx" podStartSLOduration=122.020951313 podStartE2EDuration="2m2.020951313s" podCreationTimestamp="2025-12-10 15:23:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:25:45.944386718 +0000 UTC m=+142.545270350" watchObservedRunningTime="2025-12-10 15:25:46.020951313 +0000 UTC m=+142.621834945" Dec 10 15:25:46 crc kubenswrapper[4755]: I1210 15:25:46.072125 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mqq47\" (UID: \"1189a5c2-6e43-4e4b-8181-d2bd78031673\") " pod="openshift-image-registry/image-registry-697d97f7c8-mqq47" Dec 10 15:25:46 crc kubenswrapper[4755]: E1210 15:25:46.074147 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:25:46.574134353 +0000 UTC m=+143.175017985 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mqq47" (UID: "1189a5c2-6e43-4e4b-8181-d2bd78031673") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:46 crc kubenswrapper[4755]: I1210 15:25:46.151571 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-n6qb5" podStartSLOduration=123.151539931 podStartE2EDuration="2m3.151539931s" podCreationTimestamp="2025-12-10 15:23:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:25:46.087970962 +0000 UTC m=+142.688854594" watchObservedRunningTime="2025-12-10 15:25:46.151539931 +0000 UTC m=+142.752423553" Dec 10 15:25:46 crc kubenswrapper[4755]: I1210 15:25:46.175984 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 15:25:46 crc kubenswrapper[4755]: E1210 15:25:46.176270 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 15:25:46.676253723 +0000 UTC m=+143.277137355 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:46 crc kubenswrapper[4755]: I1210 15:25:46.196580 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-n66x6" podStartSLOduration=122.196561729 podStartE2EDuration="2m2.196561729s" podCreationTimestamp="2025-12-10 15:23:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:25:46.152713341 +0000 UTC m=+142.753596983" watchObservedRunningTime="2025-12-10 15:25:46.196561729 +0000 UTC m=+142.797445361" Dec 10 15:25:46 crc kubenswrapper[4755]: I1210 15:25:46.198395 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dhh5x" podStartSLOduration=122.198387117 podStartE2EDuration="2m2.198387117s" podCreationTimestamp="2025-12-10 15:23:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:25:46.191759914 +0000 UTC m=+142.792643556" watchObservedRunningTime="2025-12-10 15:25:46.198387117 +0000 UTC m=+142.799270749" Dec 10 15:25:46 crc kubenswrapper[4755]: I1210 15:25:46.279854 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mqq47\" (UID: \"1189a5c2-6e43-4e4b-8181-d2bd78031673\") " pod="openshift-image-registry/image-registry-697d97f7c8-mqq47" Dec 10 15:25:46 crc kubenswrapper[4755]: E1210 15:25:46.280225 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:25:46.780207669 +0000 UTC m=+143.381091301 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mqq47" (UID: "1189a5c2-6e43-4e4b-8181-d2bd78031673") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:46 crc kubenswrapper[4755]: I1210 15:25:46.383852 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 15:25:46 crc kubenswrapper[4755]: E1210 15:25:46.384264 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 15:25:46.884244038 +0000 UTC m=+143.485127670 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:46 crc kubenswrapper[4755]: I1210 15:25:46.467193 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-rbjcx" event={"ID":"0470027c-6ca8-4404-a366-997bf288e1d0","Type":"ContainerStarted","Data":"7e800a0f14b59e382ff4669c3d4cc86fe748c84ca8d8bff907231a1227c39df5"} Dec 10 15:25:46 crc kubenswrapper[4755]: I1210 15:25:46.468379 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-6hpgg" event={"ID":"ee6ce5dd-6c0d-4acd-b795-6cb930770bec","Type":"ContainerStarted","Data":"05297691e7e872dc6fb8a92a4918f4be1277fa8711515cc4660372c15e48c7b6"} Dec 10 15:25:46 crc kubenswrapper[4755]: I1210 15:25:46.486453 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mqq47\" (UID: \"1189a5c2-6e43-4e4b-8181-d2bd78031673\") " pod="openshift-image-registry/image-registry-697d97f7c8-mqq47" Dec 10 15:25:46 crc kubenswrapper[4755]: E1210 15:25:46.486852 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:25:46.986824899 +0000 UTC m=+143.587708531 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mqq47" (UID: "1189a5c2-6e43-4e4b-8181-d2bd78031673") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:46 crc kubenswrapper[4755]: I1210 15:25:46.487035 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xx85g" event={"ID":"e846eda6-431b-4fdf-98dc-80e3fc6b122f","Type":"ContainerStarted","Data":"dc303b0aa52248431f36c27f5378b2033598f9ec03b28dfc04385ff88198f07f"} Dec 10 15:25:46 crc kubenswrapper[4755]: I1210 15:25:46.489849 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7r8q8" event={"ID":"6f034c10-25fc-403c-9a49-58cb5a182222","Type":"ContainerStarted","Data":"94ae992487c39954726a658ee3f3d93efbac7c8ce6fc662db41a8bb24c397b3c"} Dec 10 15:25:46 crc kubenswrapper[4755]: I1210 15:25:46.490563 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7r8q8" Dec 10 15:25:46 crc kubenswrapper[4755]: I1210 15:25:46.492736 4755 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-7r8q8 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" start-of-body= Dec 10 15:25:46 crc kubenswrapper[4755]: I1210 15:25:46.492791 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7r8q8" podUID="6f034c10-25fc-403c-9a49-58cb5a182222" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" Dec 10 15:25:46 crc kubenswrapper[4755]: I1210 15:25:46.494203 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cvrkm" event={"ID":"f516b039-30bd-47f7-b6fa-48966012efe2","Type":"ContainerStarted","Data":"be9b2f6a4b6460628864e108682e1f243eb5d62568fb27a1f72d2403687f7926"} Dec 10 15:25:46 crc kubenswrapper[4755]: I1210 15:25:46.494766 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cvrkm" Dec 10 15:25:46 crc kubenswrapper[4755]: I1210 15:25:46.502305 4755 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-cvrkm container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body= Dec 10 15:25:46 crc kubenswrapper[4755]: I1210 15:25:46.502361 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cvrkm" podUID="f516b039-30bd-47f7-b6fa-48966012efe2" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" Dec 10 15:25:46 crc kubenswrapper[4755]: I1210 15:25:46.502645 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-6hpgg" podStartSLOduration=122.502631378 podStartE2EDuration="2m2.502631378s" podCreationTimestamp="2025-12-10 15:23:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:25:46.500589236 +0000 UTC m=+143.101472868" watchObservedRunningTime="2025-12-10 15:25:46.502631378 +0000 UTC m=+143.103515010" Dec 10 15:25:46 crc kubenswrapper[4755]: I1210 15:25:46.512116 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-b4kk7" event={"ID":"96d9c114-e67a-43a5-b081-4a1de76cc870","Type":"ContainerStarted","Data":"23398b253d91dd5f23e69363670f95abcb2a1e49923d3cf439ec00cf6ff65137"} Dec 10 15:25:46 crc kubenswrapper[4755]: I1210 15:25:46.524933 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-pw2tq" event={"ID":"441cfc68-29a3-4b18-9b34-cf3bc58107cc","Type":"ContainerStarted","Data":"9749b61c6337debce87ac9799b019a3ab8fad795ef3f8d5a0325c763f1462a17"} Dec 10 15:25:46 crc kubenswrapper[4755]: I1210 15:25:46.528501 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5bvrp" event={"ID":"64e1b92e-9035-4439-abdc-86205e68c591","Type":"ContainerStarted","Data":"b52e2062ef030c0a58749ece4f375a26c3658e59d6ef93b0ffed076a9c53aad1"} Dec 10 15:25:46 crc kubenswrapper[4755]: I1210 15:25:46.529210 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-5bvrp" Dec 10 15:25:46 crc kubenswrapper[4755]: I1210 15:25:46.530272 4755 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-5bvrp container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/healthz\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Dec 10 15:25:46 crc kubenswrapper[4755]: I1210 15:25:46.530316 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-5bvrp" podUID="64e1b92e-9035-4439-abdc-86205e68c591" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.16:8080/healthz\": dial tcp 10.217.0.16:8080: connect: connection refused" Dec 10 15:25:46 crc kubenswrapper[4755]: I1210 15:25:46.555246 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cvrkm" podStartSLOduration=122.555232463 podStartE2EDuration="2m2.555232463s" podCreationTimestamp="2025-12-10 15:23:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:25:46.553157209 +0000 UTC m=+143.154040851" watchObservedRunningTime="2025-12-10 15:25:46.555232463 +0000 UTC m=+143.156116085" Dec 10 15:25:46 crc kubenswrapper[4755]: I1210 15:25:46.585430 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7r8q8" podStartSLOduration=122.585411956 podStartE2EDuration="2m2.585411956s" podCreationTimestamp="2025-12-10 15:23:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:25:46.580056917 +0000 UTC m=+143.180940559" 
watchObservedRunningTime="2025-12-10 15:25:46.585411956 +0000 UTC m=+143.186295588" Dec 10 15:25:46 crc kubenswrapper[4755]: I1210 15:25:46.589127 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-skm6x" event={"ID":"011a7c6a-048f-4db1-ac1f-259b44dd28bc","Type":"ContainerStarted","Data":"7df502eb8374d7dd0bdb6f6757f4eb28ca3c52f63353ca45231404aec4d04489"} Dec 10 15:25:46 crc kubenswrapper[4755]: I1210 15:25:46.589633 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 15:25:46 crc kubenswrapper[4755]: E1210 15:25:46.589837 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 15:25:47.089819311 +0000 UTC m=+143.690702943 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:46 crc kubenswrapper[4755]: I1210 15:25:46.590131 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mqq47\" (UID: \"1189a5c2-6e43-4e4b-8181-d2bd78031673\") " pod="openshift-image-registry/image-registry-697d97f7c8-mqq47" Dec 10 15:25:46 crc kubenswrapper[4755]: E1210 15:25:46.591670 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:25:47.091656848 +0000 UTC m=+143.692540480 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mqq47" (UID: "1189a5c2-6e43-4e4b-8181-d2bd78031673") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:46 crc kubenswrapper[4755]: I1210 15:25:46.623681 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-sw7vc" event={"ID":"460a4f34-c415-4a61-8877-7ad9d851c0e5","Type":"ContainerStarted","Data":"f140d0c2b4225e6c4d1bec8d203f09d6b18e39d995aee30ca0b94ed24836ac50"} Dec 10 15:25:46 crc kubenswrapper[4755]: I1210 15:25:46.644788 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-kc8qq" event={"ID":"0f76f7be-c8e4-4943-81f2-8e416e747aec","Type":"ContainerStarted","Data":"45b94056269a805bfac11153a9812e7dba1bfde4e92653c6ed10c723ba0a1f7f"} Dec 10 15:25:46 crc kubenswrapper[4755]: I1210 15:25:46.659778 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-pw2tq" podStartSLOduration=7.659764184 podStartE2EDuration="7.659764184s" podCreationTimestamp="2025-12-10 15:25:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:25:46.659716193 +0000 UTC m=+143.260599825" watchObservedRunningTime="2025-12-10 15:25:46.659764184 +0000 UTC m=+143.260647816" Dec 10 15:25:46 crc kubenswrapper[4755]: I1210 15:25:46.660027 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-5bvrp" podStartSLOduration=122.660022152 podStartE2EDuration="2m2.660022152s" podCreationTimestamp="2025-12-10 15:23:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:25:46.621459291 +0000 UTC m=+143.222342913" watchObservedRunningTime="2025-12-10 15:25:46.660022152 +0000 UTC m=+143.260905784" Dec 10 15:25:46 crc kubenswrapper[4755]: I1210 15:25:46.691009 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 15:25:46 crc kubenswrapper[4755]: E1210 15:25:46.693441 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 15:25:47.193410767 +0000 UTC m=+143.794294399 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:46 crc kubenswrapper[4755]: I1210 15:25:46.700684 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-j46lk" event={"ID":"27f3b602-6196-4cbc-bf90-e695403d20c7","Type":"ContainerStarted","Data":"24b0065183bad45b5d3cd918d0e59953682475091d2bbfdd89f79fcad91e8ec3"} Dec 10 15:25:46 crc kubenswrapper[4755]: I1210 15:25:46.733637 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29422995-knlb5" event={"ID":"8d212083-766e-4b88-ba08-8570f05f6c94","Type":"ContainerStarted","Data":"5a8f3ef990acd86ea0a2ed264f1039a0a313e1e53fa3e09e93fb45cde45f47fd"} Dec 10 15:25:46 crc kubenswrapper[4755]: I1210 15:25:46.744705 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-kc8qq" podStartSLOduration=123.744688078 podStartE2EDuration="2m3.744688078s" podCreationTimestamp="2025-12-10 15:23:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:25:46.704876585 +0000 UTC m=+143.305760217" watchObservedRunningTime="2025-12-10 15:25:46.744688078 +0000 UTC m=+143.345571710" Dec 10 15:25:46 crc kubenswrapper[4755]: I1210 15:25:46.776016 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-wl9hp" event={"ID":"30281ea2-4ba0-44c5-981d-8961073236a8","Type":"ContainerStarted","Data":"de2ac3137751ff71adcac76efa36774f8303d82b5f1f689f5d1d4b3ddd4e9845"} Dec 10 15:25:46 crc kubenswrapper[4755]: I1210 15:25:46.781722 4755 patch_prober.go:28] interesting pod/router-default-5444994796-ts8wv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 10 15:25:46 crc kubenswrapper[4755]: [-]has-synced failed: reason withheld Dec 10 15:25:46 crc kubenswrapper[4755]: [+]process-running ok Dec 10 15:25:46 crc kubenswrapper[4755]: healthz check failed Dec 10 15:25:46 crc kubenswrapper[4755]: I1210 15:25:46.781790 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ts8wv" podUID="dd9b14b8-6d7c-4eeb-9748-a2e99daa4293" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 10 15:25:46 crc kubenswrapper[4755]: I1210 15:25:46.788655 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7svz5" event={"ID":"6d5f332b-9d6b-40c2-8e63-47aa309ea740","Type":"ContainerStarted","Data":"d2b77ce413d4326bf946c662179c1dcef2bbba0aa397f6452ab7c9c4d279b6aa"} Dec 10 15:25:46 crc kubenswrapper[4755]: I1210 15:25:46.797803 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mqq47\" (UID: 
\"1189a5c2-6e43-4e4b-8181-d2bd78031673\") " pod="openshift-image-registry/image-registry-697d97f7c8-mqq47" Dec 10 15:25:46 crc kubenswrapper[4755]: E1210 15:25:46.798901 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:25:47.298883943 +0000 UTC m=+143.899767575 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mqq47" (UID: "1189a5c2-6e43-4e4b-8181-d2bd78031673") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:46 crc kubenswrapper[4755]: I1210 15:25:46.824857 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29422995-knlb5" podStartSLOduration=123.824839647 podStartE2EDuration="2m3.824839647s" podCreationTimestamp="2025-12-10 15:23:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:25:46.819995012 +0000 UTC m=+143.420878644" watchObservedRunningTime="2025-12-10 15:25:46.824839647 +0000 UTC m=+143.425723279" Dec 10 15:25:46 crc kubenswrapper[4755]: I1210 15:25:46.826248 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-skm6x" podStartSLOduration=122.826240403 podStartE2EDuration="2m2.826240403s" podCreationTimestamp="2025-12-10 15:23:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:25:46.74823642 +0000 UTC m=+143.349120052" watchObservedRunningTime="2025-12-10 15:25:46.826240403 +0000 UTC m=+143.427124035" Dec 10 15:25:46 crc kubenswrapper[4755]: I1210 15:25:46.868321 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-455tj" event={"ID":"ca0adcad-c9ae-4aaa-ada2-f9e0baa478a7","Type":"ContainerStarted","Data":"a539381407fd70d3d599c1f60336c73c33db6d473c0d077f67e486914789e0a2"} Dec 10 15:25:46 crc kubenswrapper[4755]: I1210 15:25:46.896951 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-vhv9h" event={"ID":"e0d76e44-8053-4afa-b974-bb3f945c9c23","Type":"ContainerStarted","Data":"72c684ae5fe26bea3c4cfe7cb779d686275b4b67cd970f8d4d88f83a9460155a"} Dec 10 15:25:46 crc kubenswrapper[4755]: I1210 15:25:46.899288 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 15:25:46 crc kubenswrapper[4755]: E1210 15:25:46.900859 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-10 15:25:47.400837399 +0000 UTC m=+144.001721031 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:46 crc kubenswrapper[4755]: I1210 15:25:46.910828 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7svz5" podStartSLOduration=122.910814467 podStartE2EDuration="2m2.910814467s" podCreationTimestamp="2025-12-10 15:23:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:25:46.909092482 +0000 UTC m=+143.509976114" watchObservedRunningTime="2025-12-10 15:25:46.910814467 +0000 UTC m=+143.511698099" Dec 10 15:25:46 crc kubenswrapper[4755]: I1210 15:25:46.940214 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vnzg7" event={"ID":"22368f6c-56f1-4ef1-bada-fcad04f7b8a4","Type":"ContainerStarted","Data":"0830888bd7e3c40b9564e870fb288d320a507610167f517f725fc2f37af1e27c"} Dec 10 15:25:46 crc kubenswrapper[4755]: I1210 15:25:46.941026 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vnzg7" Dec 10 15:25:46 crc kubenswrapper[4755]: I1210 15:25:46.955406 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-455tj" podStartSLOduration=8.955386643 podStartE2EDuration="8.955386643s" podCreationTimestamp="2025-12-10 15:25:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:25:46.954780418 +0000 UTC m=+143.555664060" watchObservedRunningTime="2025-12-10 15:25:46.955386643 +0000 UTC m=+143.556270275" Dec 10 15:25:46 crc kubenswrapper[4755]: I1210 15:25:46.970047 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-plc7w" event={"ID":"5ac506be-249f-4065-84b6-26fb51e47790","Type":"ContainerStarted","Data":"fe7fdeb205581a6efd6f0a973b9d0f5d7355ed27e90c4f46012c3956bfb9b537"} Dec 10 15:25:46 crc kubenswrapper[4755]: I1210 15:25:46.970101 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-26mg6" Dec 10 15:25:47 crc kubenswrapper[4755]: I1210 15:25:47.000693 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vnzg7" podStartSLOduration=123.000671698 podStartE2EDuration="2m3.000671698s" podCreationTimestamp="2025-12-10 15:23:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:25:46.999154219 +0000 UTC m=+143.600037851" watchObservedRunningTime="2025-12-10 15:25:47.000671698 +0000 UTC m=+143.601555330" Dec 10 15:25:47 crc kubenswrapper[4755]: I1210 15:25:47.002125 4755 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mqq47\" (UID: \"1189a5c2-6e43-4e4b-8181-d2bd78031673\") " pod="openshift-image-registry/image-registry-697d97f7c8-mqq47" Dec 10 15:25:47 crc kubenswrapper[4755]: E1210 15:25:47.003079 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:25:47.50306628 +0000 UTC m=+144.103949902 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mqq47" (UID: "1189a5c2-6e43-4e4b-8181-d2bd78031673") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:47 crc kubenswrapper[4755]: I1210 15:25:47.067785 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-26mg6" podStartSLOduration=124.067765318 podStartE2EDuration="2m4.067765318s" podCreationTimestamp="2025-12-10 15:23:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:25:47.066630869 +0000 UTC m=+143.667514501" watchObservedRunningTime="2025-12-10 15:25:47.067765318 +0000 UTC m=+143.668648950" Dec 10 15:25:47 crc kubenswrapper[4755]: I1210 15:25:47.106125 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 15:25:47 crc kubenswrapper[4755]: E1210 15:25:47.106554 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 15:25:47.606535044 +0000 UTC m=+144.207418686 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:47 crc kubenswrapper[4755]: I1210 15:25:47.208673 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mqq47\" (UID: \"1189a5c2-6e43-4e4b-8181-d2bd78031673\") " pod="openshift-image-registry/image-registry-697d97f7c8-mqq47" Dec 10 15:25:47 crc kubenswrapper[4755]: E1210 15:25:47.209152 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:25:47.709136606 +0000 UTC m=+144.310020238 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mqq47" (UID: "1189a5c2-6e43-4e4b-8181-d2bd78031673") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:47 crc kubenswrapper[4755]: I1210 15:25:47.309585 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 15:25:47 crc kubenswrapper[4755]: E1210 15:25:47.309762 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 15:25:47.809720094 +0000 UTC m=+144.410603726 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:47 crc kubenswrapper[4755]: I1210 15:25:47.310136 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mqq47\" (UID: \"1189a5c2-6e43-4e4b-8181-d2bd78031673\") " pod="openshift-image-registry/image-registry-697d97f7c8-mqq47" Dec 10 15:25:47 crc kubenswrapper[4755]: E1210 15:25:47.310494 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:25:47.810480185 +0000 UTC m=+144.411363817 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mqq47" (UID: "1189a5c2-6e43-4e4b-8181-d2bd78031673") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:47 crc kubenswrapper[4755]: I1210 15:25:47.411077 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 15:25:47 crc kubenswrapper[4755]: E1210 15:25:47.411309 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 15:25:47.911258829 +0000 UTC m=+144.512142461 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:47 crc kubenswrapper[4755]: I1210 15:25:47.411409 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mqq47\" (UID: \"1189a5c2-6e43-4e4b-8181-d2bd78031673\") " pod="openshift-image-registry/image-registry-697d97f7c8-mqq47" Dec 10 15:25:47 crc kubenswrapper[4755]: E1210 15:25:47.411972 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:25:47.911946487 +0000 UTC m=+144.512830119 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mqq47" (UID: "1189a5c2-6e43-4e4b-8181-d2bd78031673") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:47 crc kubenswrapper[4755]: I1210 15:25:47.512011 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 15:25:47 crc kubenswrapper[4755]: E1210 15:25:47.512107 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 15:25:48.012085764 +0000 UTC m=+144.612969396 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:47 crc kubenswrapper[4755]: I1210 15:25:47.512306 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mqq47\" (UID: \"1189a5c2-6e43-4e4b-8181-d2bd78031673\") " pod="openshift-image-registry/image-registry-697d97f7c8-mqq47" Dec 10 15:25:47 crc kubenswrapper[4755]: E1210 15:25:47.512681 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:25:48.012671559 +0000 UTC m=+144.613555191 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mqq47" (UID: "1189a5c2-6e43-4e4b-8181-d2bd78031673") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:47 crc kubenswrapper[4755]: I1210 15:25:47.613407 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 15:25:47 crc kubenswrapper[4755]: E1210 15:25:47.613780 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 15:25:48.113762872 +0000 UTC m=+144.714646504 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:47 crc kubenswrapper[4755]: I1210 15:25:47.715574 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mqq47\" (UID: \"1189a5c2-6e43-4e4b-8181-d2bd78031673\") " pod="openshift-image-registry/image-registry-697d97f7c8-mqq47" Dec 10 15:25:47 crc kubenswrapper[4755]: E1210 15:25:47.715917 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:25:48.215903161 +0000 UTC m=+144.816786793 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mqq47" (UID: "1189a5c2-6e43-4e4b-8181-d2bd78031673") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:47 crc kubenswrapper[4755]: I1210 15:25:47.746355 4755 patch_prober.go:28] interesting pod/router-default-5444994796-ts8wv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 10 15:25:47 crc kubenswrapper[4755]: [-]has-synced failed: reason withheld Dec 10 15:25:47 crc kubenswrapper[4755]: [+]process-running ok Dec 10 15:25:47 crc kubenswrapper[4755]: healthz check failed Dec 10 15:25:47 crc kubenswrapper[4755]: I1210 15:25:47.746408 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ts8wv" podUID="dd9b14b8-6d7c-4eeb-9748-a2e99daa4293" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 10 15:25:47 crc kubenswrapper[4755]: I1210 15:25:47.816912 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 15:25:47 crc kubenswrapper[4755]: E1210 15:25:47.817042 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 15:25:48.317025015 +0000 UTC m=+144.917908637 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:47 crc kubenswrapper[4755]: I1210 15:25:47.817274 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mqq47\" (UID: \"1189a5c2-6e43-4e4b-8181-d2bd78031673\") " pod="openshift-image-registry/image-registry-697d97f7c8-mqq47" Dec 10 15:25:47 crc kubenswrapper[4755]: E1210 15:25:47.817615 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:25:48.3176041 +0000 UTC m=+144.918487732 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mqq47" (UID: "1189a5c2-6e43-4e4b-8181-d2bd78031673") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:47 crc kubenswrapper[4755]: I1210 15:25:47.896917 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vnzg7" Dec 10 15:25:47 crc kubenswrapper[4755]: I1210 15:25:47.914737 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-plc7w" podStartSLOduration=123.914719298 podStartE2EDuration="2m3.914719298s" podCreationTimestamp="2025-12-10 15:23:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:25:47.101664338 +0000 UTC m=+143.702547970" watchObservedRunningTime="2025-12-10 15:25:47.914719298 +0000 UTC m=+144.515602930" Dec 10 15:25:47 crc kubenswrapper[4755]: I1210 15:25:47.918514 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 15:25:47 crc kubenswrapper[4755]: E1210 15:25:47.918706 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 15:25:48.418679001 +0000 UTC m=+145.019562633 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:47 crc kubenswrapper[4755]: I1210 15:25:47.918877 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mqq47\" (UID: \"1189a5c2-6e43-4e4b-8181-d2bd78031673\") " pod="openshift-image-registry/image-registry-697d97f7c8-mqq47" Dec 10 15:25:47 crc kubenswrapper[4755]: E1210 15:25:47.919166 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:25:48.419157734 +0000 UTC m=+145.020041366 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mqq47" (UID: "1189a5c2-6e43-4e4b-8181-d2bd78031673") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:47 crc kubenswrapper[4755]: I1210 15:25:47.973999 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-b4kk7" event={"ID":"96d9c114-e67a-43a5-b081-4a1de76cc870","Type":"ContainerStarted","Data":"b23548bb035fe77ae13b595aed4b019c776fc2ab3b626618952b8c6272ae04f0"} Dec 10 15:25:47 crc kubenswrapper[4755]: I1210 15:25:47.976072 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-j46lk" event={"ID":"27f3b602-6196-4cbc-bf90-e695403d20c7","Type":"ContainerStarted","Data":"0295bc793dd2e207891949394092f54cc8404731f940a963a72018c71240932c"} Dec 10 15:25:47 crc kubenswrapper[4755]: I1210 15:25:47.978211 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-plc7w" event={"ID":"5ac506be-249f-4065-84b6-26fb51e47790","Type":"ContainerStarted","Data":"171b693899e5316172054fe47be363f4cca3a96189d17517f7a349a0ac18421e"} Dec 10 15:25:47 crc kubenswrapper[4755]: I1210 15:25:47.980648 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-rbjcx" event={"ID":"0470027c-6ca8-4404-a366-997bf288e1d0","Type":"ContainerStarted","Data":"b2bafa4172c0c9b26276313c4c5e6a5a29262ccfad3c3604679e0a04376664bc"} Dec 10 15:25:47 crc kubenswrapper[4755]: I1210 15:25:47.981889 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-wl9hp" event={"ID":"30281ea2-4ba0-44c5-981d-8961073236a8","Type":"ContainerStarted","Data":"5a7cc6c7117f14043df3efda31f45e25fdb05ca7f86cee9c2a78eb6b362dd126"} Dec 10 15:25:47 crc kubenswrapper[4755]: I1210 15:25:47.982331 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-wl9hp" Dec 10 
15:25:47 crc kubenswrapper[4755]: I1210 15:25:47.983773 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-vhv9h" event={"ID":"e0d76e44-8053-4afa-b974-bb3f945c9c23","Type":"ContainerStarted","Data":"a39b44e46ccfbcfa13a6edf70d37c7dbe5ec54c5b0ee8494cef0e18c01e4862b"} Dec 10 15:25:47 crc kubenswrapper[4755]: I1210 15:25:47.986360 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xx85g" event={"ID":"e846eda6-431b-4fdf-98dc-80e3fc6b122f","Type":"ContainerStarted","Data":"55bec24f66d8d919d951db7ec52823060a565f49f27753006db1a7596e76c209"} Dec 10 15:25:47 crc kubenswrapper[4755]: I1210 15:25:47.988881 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-sw7vc" event={"ID":"460a4f34-c415-4a61-8877-7ad9d851c0e5","Type":"ContainerStarted","Data":"ebc59004a3b07d8f6bb8356d7d1da92a7b3823e75d8055c22e1acbd1089b43b0"} Dec 10 15:25:47 crc kubenswrapper[4755]: I1210 15:25:47.991371 4755 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-5bvrp container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/healthz\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Dec 10 15:25:47 crc kubenswrapper[4755]: I1210 15:25:47.991436 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-5bvrp" podUID="64e1b92e-9035-4439-abdc-86205e68c591" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.16:8080/healthz\": dial tcp 10.217.0.16:8080: connect: connection refused" Dec 10 15:25:48 crc kubenswrapper[4755]: I1210 15:25:48.000099 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cvrkm" Dec 10 15:25:48 crc kubenswrapper[4755]: I1210 15:25:48.000204 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-26mg6" Dec 10 15:25:48 crc kubenswrapper[4755]: I1210 15:25:48.001550 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7r8q8" Dec 10 15:25:48 crc kubenswrapper[4755]: I1210 15:25:48.020245 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 15:25:48 crc kubenswrapper[4755]: E1210 15:25:48.021566 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 15:25:48.52155005 +0000 UTC m=+145.122433682 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:48 crc kubenswrapper[4755]: I1210 15:25:48.071297 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-b4kk7" podStartSLOduration=124.071281329 podStartE2EDuration="2m4.071281329s" podCreationTimestamp="2025-12-10 15:23:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:25:48.023642234 +0000 UTC m=+144.624525886" watchObservedRunningTime="2025-12-10 15:25:48.071281329 +0000 UTC m=+144.672164961" Dec 10 15:25:48 crc kubenswrapper[4755]: I1210 15:25:48.118594 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-vhv9h" podStartSLOduration=125.118578666 podStartE2EDuration="2m5.118578666s" podCreationTimestamp="2025-12-10 15:23:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:25:48.07169167 +0000 UTC m=+144.672575302" watchObservedRunningTime="2025-12-10 15:25:48.118578666 +0000 UTC m=+144.719462298" Dec 10 15:25:48 crc kubenswrapper[4755]: I1210 15:25:48.122287 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mqq47\" (UID: \"1189a5c2-6e43-4e4b-8181-d2bd78031673\") " pod="openshift-image-registry/image-registry-697d97f7c8-mqq47" Dec 10 15:25:48 crc kubenswrapper[4755]: E1210 15:25:48.122653 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:25:48.622641462 +0000 UTC m=+145.223525094 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mqq47" (UID: "1189a5c2-6e43-4e4b-8181-d2bd78031673") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:48 crc kubenswrapper[4755]: I1210 15:25:48.134088 4755 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Dec 10 15:25:48 crc kubenswrapper[4755]: I1210 15:25:48.137150 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-xx85g" podStartSLOduration=125.137140268 podStartE2EDuration="2m5.137140268s" podCreationTimestamp="2025-12-10 15:23:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:25:48.116790411 +0000 UTC m=+144.717674043" watchObservedRunningTime="2025-12-10 15:25:48.137140268 +0000 UTC m=+144.738023900" Dec 10 15:25:48 crc kubenswrapper[4755]: I1210 15:25:48.201927 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-wl9hp" podStartSLOduration=10.201909388 podStartE2EDuration="10.201909388s" podCreationTimestamp="2025-12-10 15:25:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:25:48.198043948 +0000 UTC m=+144.798927580" watchObservedRunningTime="2025-12-10 15:25:48.201909388 +0000 UTC m=+144.802793020" Dec 10 15:25:48 crc kubenswrapper[4755]: I1210 15:25:48.223920 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 15:25:48 crc kubenswrapper[4755]: E1210 15:25:48.224053 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 15:25:48.724027492 +0000 UTC m=+145.324911124 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:48 crc kubenswrapper[4755]: I1210 15:25:48.224186 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mqq47\" (UID: \"1189a5c2-6e43-4e4b-8181-d2bd78031673\") " pod="openshift-image-registry/image-registry-697d97f7c8-mqq47" Dec 10 15:25:48 crc kubenswrapper[4755]: E1210 15:25:48.224497 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:25:48.724489724 +0000 UTC m=+145.325373356 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mqq47" (UID: "1189a5c2-6e43-4e4b-8181-d2bd78031673") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:48 crc kubenswrapper[4755]: I1210 15:25:48.242034 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-j46lk" podStartSLOduration=125.242014528 podStartE2EDuration="2m5.242014528s" podCreationTimestamp="2025-12-10 15:23:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:25:48.241557397 +0000 UTC m=+144.842441029" watchObservedRunningTime="2025-12-10 15:25:48.242014528 +0000 UTC m=+144.842898160" Dec 10 15:25:48 crc kubenswrapper[4755]: I1210 15:25:48.245115 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-sw7vc" podStartSLOduration=124.245097638 podStartE2EDuration="2m4.245097638s" podCreationTimestamp="2025-12-10 15:23:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:25:48.21893442 +0000 UTC m=+144.819818052" watchObservedRunningTime="2025-12-10 15:25:48.245097638 +0000 UTC m=+144.845981280" Dec 10 15:25:48 crc kubenswrapper[4755]: I1210 15:25:48.325040 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 15:25:48 crc kubenswrapper[4755]: E1210 15:25:48.325174 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 15:25:48.825155615 +0000 UTC m=+145.426039247 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:48 crc kubenswrapper[4755]: I1210 15:25:48.325322 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mqq47\" (UID: \"1189a5c2-6e43-4e4b-8181-d2bd78031673\") " pod="openshift-image-registry/image-registry-697d97f7c8-mqq47" Dec 10 15:25:48 crc kubenswrapper[4755]: E1210 15:25:48.325611 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:25:48.825604267 +0000 UTC m=+145.426487899 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mqq47" (UID: "1189a5c2-6e43-4e4b-8181-d2bd78031673") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:48 crc kubenswrapper[4755]: I1210 15:25:48.636315 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 15:25:48 crc kubenswrapper[4755]: E1210 15:25:48.636598 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 15:25:49.136576273 +0000 UTC m=+145.737459905 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:48 crc kubenswrapper[4755]: I1210 15:25:48.687850 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-v7hds"] Dec 10 15:25:48 crc kubenswrapper[4755]: I1210 15:25:48.689133 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-v7hds" Dec 10 15:25:48 crc kubenswrapper[4755]: I1210 15:25:48.692088 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 10 15:25:48 crc kubenswrapper[4755]: I1210 15:25:48.738096 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/202dcce7-fa8a-4991-bd7a-661eab1f3274-catalog-content\") pod \"community-operators-v7hds\" (UID: \"202dcce7-fa8a-4991-bd7a-661eab1f3274\") " pod="openshift-marketplace/community-operators-v7hds" Dec 10 15:25:48 crc kubenswrapper[4755]: I1210 15:25:48.738162 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/202dcce7-fa8a-4991-bd7a-661eab1f3274-utilities\") pod \"community-operators-v7hds\" (UID: \"202dcce7-fa8a-4991-bd7a-661eab1f3274\") " pod="openshift-marketplace/community-operators-v7hds" Dec 10 15:25:48 crc kubenswrapper[4755]: I1210 15:25:48.738223 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bbmb\" (UniqueName: \"kubernetes.io/projected/202dcce7-fa8a-4991-bd7a-661eab1f3274-kube-api-access-7bbmb\") pod \"community-operators-v7hds\" (UID: \"202dcce7-fa8a-4991-bd7a-661eab1f3274\") " pod="openshift-marketplace/community-operators-v7hds" Dec 10 15:25:48 crc kubenswrapper[4755]: I1210 15:25:48.738268 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mqq47\" (UID: \"1189a5c2-6e43-4e4b-8181-d2bd78031673\") " pod="openshift-image-registry/image-registry-697d97f7c8-mqq47" Dec 10 15:25:48 crc kubenswrapper[4755]: E1210 15:25:48.738831 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:25:49.238816676 +0000 UTC m=+145.839700308 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mqq47" (UID: "1189a5c2-6e43-4e4b-8181-d2bd78031673") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:48 crc kubenswrapper[4755]: I1210 15:25:48.746017 4755 patch_prober.go:28] interesting pod/router-default-5444994796-ts8wv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 10 15:25:48 crc kubenswrapper[4755]: [-]has-synced failed: reason withheld Dec 10 15:25:48 crc kubenswrapper[4755]: [+]process-running ok Dec 10 15:25:48 crc kubenswrapper[4755]: healthz check failed Dec 10 15:25:48 crc kubenswrapper[4755]: I1210 15:25:48.746098 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ts8wv" podUID="dd9b14b8-6d7c-4eeb-9748-a2e99daa4293" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 10 15:25:48 crc kubenswrapper[4755]: I1210 15:25:48.768451 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v7hds"] Dec 10 15:25:48 crc kubenswrapper[4755]: I1210 15:25:48.839128 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 15:25:48 crc kubenswrapper[4755]: E1210 15:25:48.839320 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 15:25:49.339291062 +0000 UTC m=+145.940174694 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:48 crc kubenswrapper[4755]: I1210 15:25:48.839440 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bbmb\" (UniqueName: \"kubernetes.io/projected/202dcce7-fa8a-4991-bd7a-661eab1f3274-kube-api-access-7bbmb\") pod \"community-operators-v7hds\" (UID: \"202dcce7-fa8a-4991-bd7a-661eab1f3274\") " pod="openshift-marketplace/community-operators-v7hds" Dec 10 15:25:48 crc kubenswrapper[4755]: I1210 15:25:48.839557 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mqq47\" (UID: \"1189a5c2-6e43-4e4b-8181-d2bd78031673\") " pod="openshift-image-registry/image-registry-697d97f7c8-mqq47" Dec 10 15:25:48 crc kubenswrapper[4755]: I1210 15:25:48.839666 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/202dcce7-fa8a-4991-bd7a-661eab1f3274-catalog-content\") pod \"community-operators-v7hds\" (UID: \"202dcce7-fa8a-4991-bd7a-661eab1f3274\") " pod="openshift-marketplace/community-operators-v7hds" Dec 10 15:25:48 crc kubenswrapper[4755]: I1210 15:25:48.839720 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/202dcce7-fa8a-4991-bd7a-661eab1f3274-utilities\") pod \"community-operators-v7hds\" (UID: \"202dcce7-fa8a-4991-bd7a-661eab1f3274\") " pod="openshift-marketplace/community-operators-v7hds" Dec 10 15:25:48 crc kubenswrapper[4755]: I1210 15:25:48.840205 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/202dcce7-fa8a-4991-bd7a-661eab1f3274-catalog-content\") pod \"community-operators-v7hds\" (UID: \"202dcce7-fa8a-4991-bd7a-661eab1f3274\") " pod="openshift-marketplace/community-operators-v7hds" Dec 10 15:25:48 crc kubenswrapper[4755]: I1210 15:25:48.840272 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/202dcce7-fa8a-4991-bd7a-661eab1f3274-utilities\") pod \"community-operators-v7hds\" (UID: \"202dcce7-fa8a-4991-bd7a-661eab1f3274\") " pod="openshift-marketplace/community-operators-v7hds" Dec 10 15:25:48 crc kubenswrapper[4755]: E1210 15:25:48.840480 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:25:49.340445082 +0000 UTC m=+145.941328784 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mqq47" (UID: "1189a5c2-6e43-4e4b-8181-d2bd78031673") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:48 crc kubenswrapper[4755]: I1210 15:25:48.889051 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-d89xt"] Dec 10 15:25:48 crc kubenswrapper[4755]: I1210 15:25:48.890213 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d89xt" Dec 10 15:25:48 crc kubenswrapper[4755]: I1210 15:25:48.892309 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 10 15:25:48 crc kubenswrapper[4755]: I1210 15:25:48.920309 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bbmb\" (UniqueName: \"kubernetes.io/projected/202dcce7-fa8a-4991-bd7a-661eab1f3274-kube-api-access-7bbmb\") pod \"community-operators-v7hds\" (UID: \"202dcce7-fa8a-4991-bd7a-661eab1f3274\") " pod="openshift-marketplace/community-operators-v7hds" Dec 10 15:25:48 crc kubenswrapper[4755]: I1210 15:25:48.923023 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d89xt"] Dec 10 15:25:48 crc kubenswrapper[4755]: I1210 15:25:48.940445 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 15:25:48 crc kubenswrapper[4755]: E1210 15:25:48.940597 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 15:25:49.44057197 +0000 UTC m=+146.041455602 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:48 crc kubenswrapper[4755]: I1210 15:25:48.940642 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kxhw\" (UniqueName: \"kubernetes.io/projected/8f9f6949-50c9-4d7e-b75f-b990a642d3a7-kube-api-access-8kxhw\") pod \"certified-operators-d89xt\" (UID: \"8f9f6949-50c9-4d7e-b75f-b990a642d3a7\") " pod="openshift-marketplace/certified-operators-d89xt" Dec 10 15:25:48 crc kubenswrapper[4755]: I1210 15:25:48.940710 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f9f6949-50c9-4d7e-b75f-b990a642d3a7-catalog-content\") pod \"certified-operators-d89xt\" (UID: \"8f9f6949-50c9-4d7e-b75f-b990a642d3a7\") " pod="openshift-marketplace/certified-operators-d89xt" Dec 10 15:25:48 crc kubenswrapper[4755]: I1210 15:25:48.940744 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f9f6949-50c9-4d7e-b75f-b990a642d3a7-utilities\") pod \"certified-operators-d89xt\" (UID: \"8f9f6949-50c9-4d7e-b75f-b990a642d3a7\") " pod="openshift-marketplace/certified-operators-d89xt" Dec 10 15:25:48 crc kubenswrapper[4755]: I1210 15:25:48.940955 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mqq47\" (UID: \"1189a5c2-6e43-4e4b-8181-d2bd78031673\") " pod="openshift-image-registry/image-registry-697d97f7c8-mqq47" Dec 10 15:25:48 crc kubenswrapper[4755]: E1210 15:25:48.941233 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:25:49.441226007 +0000 UTC m=+146.042109639 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mqq47" (UID: "1189a5c2-6e43-4e4b-8181-d2bd78031673") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:48 crc kubenswrapper[4755]: I1210 15:25:48.997084 4755 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-5bvrp container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/healthz\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Dec 10 15:25:48 crc kubenswrapper[4755]: I1210 15:25:48.997134 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-5bvrp" podUID="64e1b92e-9035-4439-abdc-86205e68c591" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.16:8080/healthz\": dial tcp 10.217.0.16:8080: connect: connection refused" Dec 10 15:25:49 crc kubenswrapper[4755]: I1210 15:25:49.002421 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v7hds" Dec 10 15:25:49 crc kubenswrapper[4755]: I1210 15:25:49.042618 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 15:25:49 crc kubenswrapper[4755]: E1210 15:25:49.042874 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 15:25:49.542851113 +0000 UTC m=+146.143734745 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:49 crc kubenswrapper[4755]: I1210 15:25:49.042959 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kxhw\" (UniqueName: \"kubernetes.io/projected/8f9f6949-50c9-4d7e-b75f-b990a642d3a7-kube-api-access-8kxhw\") pod \"certified-operators-d89xt\" (UID: \"8f9f6949-50c9-4d7e-b75f-b990a642d3a7\") " pod="openshift-marketplace/certified-operators-d89xt" Dec 10 15:25:49 crc kubenswrapper[4755]: I1210 15:25:49.043054 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f9f6949-50c9-4d7e-b75f-b990a642d3a7-catalog-content\") pod \"certified-operators-d89xt\" (UID: \"8f9f6949-50c9-4d7e-b75f-b990a642d3a7\") " pod="openshift-marketplace/certified-operators-d89xt" Dec 10 15:25:49 crc kubenswrapper[4755]: I1210 15:25:49.043104 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f9f6949-50c9-4d7e-b75f-b990a642d3a7-utilities\") pod \"certified-operators-d89xt\" (UID: \"8f9f6949-50c9-4d7e-b75f-b990a642d3a7\") " pod="openshift-marketplace/certified-operators-d89xt" Dec 10 15:25:49 crc kubenswrapper[4755]: I1210 15:25:49.043552 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mqq47\" (UID: \"1189a5c2-6e43-4e4b-8181-d2bd78031673\") " pod="openshift-image-registry/image-registry-697d97f7c8-mqq47" Dec 10 15:25:49 crc kubenswrapper[4755]: I1210 15:25:49.044340 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f9f6949-50c9-4d7e-b75f-b990a642d3a7-utilities\") pod \"certified-operators-d89xt\" (UID: \"8f9f6949-50c9-4d7e-b75f-b990a642d3a7\") " pod="openshift-marketplace/certified-operators-d89xt" Dec 10 15:25:49 crc kubenswrapper[4755]: I1210 15:25:49.044367 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f9f6949-50c9-4d7e-b75f-b990a642d3a7-catalog-content\") pod \"certified-operators-d89xt\" (UID: \"8f9f6949-50c9-4d7e-b75f-b990a642d3a7\") " pod="openshift-marketplace/certified-operators-d89xt" Dec 10 15:25:49 crc kubenswrapper[4755]: E1210 15:25:49.045418 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 15:25:49.545401559 +0000 UTC m=+146.146285191 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mqq47" (UID: "1189a5c2-6e43-4e4b-8181-d2bd78031673") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 15:25:49 crc kubenswrapper[4755]: I1210 15:25:49.046592 4755 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-12-10T15:25:48.1341179Z","Handler":null,"Name":""} Dec 10 15:25:49 crc kubenswrapper[4755]: I1210 15:25:49.048818 4755 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Dec 10 15:25:49 crc kubenswrapper[4755]: I1210 15:25:49.048842 4755 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Dec 10 15:25:49 crc kubenswrapper[4755]: I1210 15:25:49.062051 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kxhw\" (UniqueName: \"kubernetes.io/projected/8f9f6949-50c9-4d7e-b75f-b990a642d3a7-kube-api-access-8kxhw\") pod \"certified-operators-d89xt\" (UID: \"8f9f6949-50c9-4d7e-b75f-b990a642d3a7\") " pod="openshift-marketplace/certified-operators-d89xt" Dec 10 15:25:49 crc kubenswrapper[4755]: I1210 15:25:49.079183 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9slgn"] Dec 10 15:25:49 crc kubenswrapper[4755]: I1210 15:25:49.080099 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9slgn" Dec 10 15:25:49 crc kubenswrapper[4755]: I1210 15:25:49.089211 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9slgn"] Dec 10 15:25:49 crc kubenswrapper[4755]: I1210 15:25:49.155285 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 15:25:49 crc kubenswrapper[4755]: I1210 15:25:49.196229 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 10 15:25:49 crc kubenswrapper[4755]: I1210 15:25:49.203728 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d89xt" Dec 10 15:25:49 crc kubenswrapper[4755]: I1210 15:25:49.257522 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d74a8b0-992e-46b5-9364-cc82c84ac2d8-catalog-content\") pod \"community-operators-9slgn\" (UID: \"1d74a8b0-992e-46b5-9364-cc82c84ac2d8\") " pod="openshift-marketplace/community-operators-9slgn" Dec 10 15:25:49 crc kubenswrapper[4755]: I1210 15:25:49.257583 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d74a8b0-992e-46b5-9364-cc82c84ac2d8-utilities\") pod \"community-operators-9slgn\" (UID: \"1d74a8b0-992e-46b5-9364-cc82c84ac2d8\") " pod="openshift-marketplace/community-operators-9slgn" Dec 10 15:25:49 crc kubenswrapper[4755]: I1210 15:25:49.257621 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wczkt\" (UniqueName: \"kubernetes.io/projected/1d74a8b0-992e-46b5-9364-cc82c84ac2d8-kube-api-access-wczkt\") pod \"community-operators-9slgn\" (UID: \"1d74a8b0-992e-46b5-9364-cc82c84ac2d8\") " pod="openshift-marketplace/community-operators-9slgn" Dec 10 15:25:49 crc kubenswrapper[4755]: I1210 15:25:49.257753 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mqq47\" (UID: \"1189a5c2-6e43-4e4b-8181-d2bd78031673\") " pod="openshift-image-registry/image-registry-697d97f7c8-mqq47" Dec 10 15:25:49 crc kubenswrapper[4755]: I1210 15:25:49.275594 4755 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 10 15:25:49 crc kubenswrapper[4755]: I1210 15:25:49.275644 4755 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mqq47\" (UID: \"1189a5c2-6e43-4e4b-8181-d2bd78031673\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-mqq47" Dec 10 15:25:49 crc kubenswrapper[4755]: I1210 15:25:49.284091 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ck7mx"] Dec 10 15:25:49 crc kubenswrapper[4755]: I1210 15:25:49.285288 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ck7mx" Dec 10 15:25:49 crc kubenswrapper[4755]: I1210 15:25:49.305528 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ck7mx"] Dec 10 15:25:49 crc kubenswrapper[4755]: I1210 15:25:49.365612 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/609745aa-bdb9-440a-b029-fcd706ed320e-utilities\") pod \"certified-operators-ck7mx\" (UID: \"609745aa-bdb9-440a-b029-fcd706ed320e\") " pod="openshift-marketplace/certified-operators-ck7mx" Dec 10 15:25:49 crc kubenswrapper[4755]: I1210 15:25:49.365688 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d74a8b0-992e-46b5-9364-cc82c84ac2d8-catalog-content\") pod \"community-operators-9slgn\" (UID: \"1d74a8b0-992e-46b5-9364-cc82c84ac2d8\") " pod="openshift-marketplace/community-operators-9slgn" Dec 10 15:25:49 crc kubenswrapper[4755]: I1210 15:25:49.365786 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d74a8b0-992e-46b5-9364-cc82c84ac2d8-utilities\") pod \"community-operators-9slgn\" (UID: \"1d74a8b0-992e-46b5-9364-cc82c84ac2d8\") " pod="openshift-marketplace/community-operators-9slgn" Dec 10 15:25:49 crc kubenswrapper[4755]: I1210 15:25:49.365869 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wczkt\" (UniqueName: \"kubernetes.io/projected/1d74a8b0-992e-46b5-9364-cc82c84ac2d8-kube-api-access-wczkt\") pod \"community-operators-9slgn\" (UID: \"1d74a8b0-992e-46b5-9364-cc82c84ac2d8\") " pod="openshift-marketplace/community-operators-9slgn" Dec 10 15:25:49 crc kubenswrapper[4755]: I1210 15:25:49.365918 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/609745aa-bdb9-440a-b029-fcd706ed320e-catalog-content\") pod \"certified-operators-ck7mx\" (UID: \"609745aa-bdb9-440a-b029-fcd706ed320e\") " pod="openshift-marketplace/certified-operators-ck7mx" Dec 10 15:25:49 crc kubenswrapper[4755]: I1210 15:25:49.365972 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzhqn\" (UniqueName: \"kubernetes.io/projected/609745aa-bdb9-440a-b029-fcd706ed320e-kube-api-access-nzhqn\") pod \"certified-operators-ck7mx\" (UID: \"609745aa-bdb9-440a-b029-fcd706ed320e\") " pod="openshift-marketplace/certified-operators-ck7mx" Dec 10 15:25:49 crc kubenswrapper[4755]: I1210 15:25:49.366709 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d74a8b0-992e-46b5-9364-cc82c84ac2d8-catalog-content\") pod \"community-operators-9slgn\" (UID: \"1d74a8b0-992e-46b5-9364-cc82c84ac2d8\") " pod="openshift-marketplace/community-operators-9slgn" Dec 10 15:25:49 crc kubenswrapper[4755]: I1210 15:25:49.366990 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d74a8b0-992e-46b5-9364-cc82c84ac2d8-utilities\") pod \"community-operators-9slgn\" (UID: \"1d74a8b0-992e-46b5-9364-cc82c84ac2d8\") " pod="openshift-marketplace/community-operators-9slgn" Dec 10 15:25:49 crc kubenswrapper[4755]: I1210 15:25:49.396435 4755 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mqq47\" (UID: \"1189a5c2-6e43-4e4b-8181-d2bd78031673\") " pod="openshift-image-registry/image-registry-697d97f7c8-mqq47" Dec 10 15:25:49 crc kubenswrapper[4755]: I1210 15:25:49.408653 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wczkt\" (UniqueName: \"kubernetes.io/projected/1d74a8b0-992e-46b5-9364-cc82c84ac2d8-kube-api-access-wczkt\") pod \"community-operators-9slgn\" (UID: \"1d74a8b0-992e-46b5-9364-cc82c84ac2d8\") " pod="openshift-marketplace/community-operators-9slgn" Dec 10 15:25:49 crc kubenswrapper[4755]: I1210 15:25:49.469165 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/609745aa-bdb9-440a-b029-fcd706ed320e-catalog-content\") pod \"certified-operators-ck7mx\" (UID: \"609745aa-bdb9-440a-b029-fcd706ed320e\") " pod="openshift-marketplace/certified-operators-ck7mx" Dec 10 15:25:49 crc kubenswrapper[4755]: I1210 15:25:49.469240 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzhqn\" (UniqueName: \"kubernetes.io/projected/609745aa-bdb9-440a-b029-fcd706ed320e-kube-api-access-nzhqn\") pod \"certified-operators-ck7mx\" (UID: \"609745aa-bdb9-440a-b029-fcd706ed320e\") " pod="openshift-marketplace/certified-operators-ck7mx" Dec 10 15:25:49 crc kubenswrapper[4755]: I1210 15:25:49.469325 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/609745aa-bdb9-440a-b029-fcd706ed320e-utilities\") pod \"certified-operators-ck7mx\" (UID: \"609745aa-bdb9-440a-b029-fcd706ed320e\") " pod="openshift-marketplace/certified-operators-ck7mx" Dec 10 15:25:49 crc kubenswrapper[4755]: I1210 15:25:49.469831 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/609745aa-bdb9-440a-b029-fcd706ed320e-utilities\") pod \"certified-operators-ck7mx\" (UID: \"609745aa-bdb9-440a-b029-fcd706ed320e\") " pod="openshift-marketplace/certified-operators-ck7mx" Dec 10 15:25:49 crc kubenswrapper[4755]: I1210 15:25:49.470084 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/609745aa-bdb9-440a-b029-fcd706ed320e-catalog-content\") pod \"certified-operators-ck7mx\" (UID: \"609745aa-bdb9-440a-b029-fcd706ed320e\") " pod="openshift-marketplace/certified-operators-ck7mx" Dec 10 15:25:49 crc kubenswrapper[4755]: I1210 15:25:49.488599 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzhqn\" (UniqueName: \"kubernetes.io/projected/609745aa-bdb9-440a-b029-fcd706ed320e-kube-api-access-nzhqn\") pod \"certified-operators-ck7mx\" (UID: \"609745aa-bdb9-440a-b029-fcd706ed320e\") " pod="openshift-marketplace/certified-operators-ck7mx" Dec 10 15:25:49 crc kubenswrapper[4755]: I1210 15:25:49.563927 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v7hds"] Dec 10 15:25:49 crc kubenswrapper[4755]: I1210 15:25:49.578231 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-mqq47" Dec 10 15:25:49 crc kubenswrapper[4755]: I1210 15:25:49.578983 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d89xt"] Dec 10 15:25:49 crc kubenswrapper[4755]: I1210 15:25:49.674355 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ck7mx" Dec 10 15:25:49 crc kubenswrapper[4755]: I1210 15:25:49.700821 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9slgn" Dec 10 15:25:49 crc kubenswrapper[4755]: I1210 15:25:49.749440 4755 patch_prober.go:28] interesting pod/router-default-5444994796-ts8wv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 10 15:25:49 crc kubenswrapper[4755]: [-]has-synced failed: reason withheld Dec 10 15:25:49 crc kubenswrapper[4755]: [+]process-running ok Dec 10 15:25:49 crc kubenswrapper[4755]: healthz check failed Dec 10 15:25:49 crc kubenswrapper[4755]: I1210 15:25:49.749513 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ts8wv" podUID="dd9b14b8-6d7c-4eeb-9748-a2e99daa4293" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 10 15:25:49 crc kubenswrapper[4755]: I1210 15:25:49.777777 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Dec 10 15:25:49 crc kubenswrapper[4755]: I1210 15:25:49.941022 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-mqq47"] Dec 10 15:25:50 crc kubenswrapper[4755]: W1210 15:25:50.006864 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1189a5c2_6e43_4e4b_8181_d2bd78031673.slice/crio-fb834714129853b62b5ac115923507eecd20ae33bd8c07b59e3ee2c77832d659 WatchSource:0}: Error finding container fb834714129853b62b5ac115923507eecd20ae33bd8c07b59e3ee2c77832d659: Status 404 returned error can't find the container with id fb834714129853b62b5ac115923507eecd20ae33bd8c07b59e3ee2c77832d659 Dec 10 15:25:50 crc kubenswrapper[4755]: I1210 15:25:50.017225 4755 generic.go:334] "Generic (PLEG): container finished" podID="202dcce7-fa8a-4991-bd7a-661eab1f3274" containerID="6f7aca0b830b7034d624d5009377acc24ddcffbd335813350d4e9d3973d01abf" exitCode=0 Dec 10 15:25:50 crc kubenswrapper[4755]: I1210 15:25:50.017270 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v7hds" event={"ID":"202dcce7-fa8a-4991-bd7a-661eab1f3274","Type":"ContainerDied","Data":"6f7aca0b830b7034d624d5009377acc24ddcffbd335813350d4e9d3973d01abf"} Dec 10 15:25:50 crc kubenswrapper[4755]: I1210 15:25:50.017304 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v7hds" event={"ID":"202dcce7-fa8a-4991-bd7a-661eab1f3274","Type":"ContainerStarted","Data":"3749eff209d81e6de079722b0d580b9cfde208c997351f444cbb737be36e085a"} Dec 10 15:25:50 crc kubenswrapper[4755]: I1210 15:25:50.019725 4755 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 10 15:25:50 crc kubenswrapper[4755]: 
I1210 15:25:50.025003 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-rbjcx" event={"ID":"0470027c-6ca8-4404-a366-997bf288e1d0","Type":"ContainerStarted","Data":"053da7c6ec3a1a5daaab74e6174dc054be12a1c65c8d1a59dce6deef3e368168"} Dec 10 15:25:50 crc kubenswrapper[4755]: I1210 15:25:50.025069 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-rbjcx" event={"ID":"0470027c-6ca8-4404-a366-997bf288e1d0","Type":"ContainerStarted","Data":"fc7b9817357631a1a5585bbed511caeb2e26ea0e5aca420a6e974c4e54a2d8f1"} Dec 10 15:25:50 crc kubenswrapper[4755]: I1210 15:25:50.034676 4755 generic.go:334] "Generic (PLEG): container finished" podID="8f9f6949-50c9-4d7e-b75f-b990a642d3a7" containerID="cdc8d2847f3f497f28d58b30d269c0afb047671eb0c00c784c4213f6d286abac" exitCode=0 Dec 10 15:25:50 crc kubenswrapper[4755]: I1210 15:25:50.034728 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d89xt" event={"ID":"8f9f6949-50c9-4d7e-b75f-b990a642d3a7","Type":"ContainerDied","Data":"cdc8d2847f3f497f28d58b30d269c0afb047671eb0c00c784c4213f6d286abac"} Dec 10 15:25:50 crc kubenswrapper[4755]: I1210 15:25:50.035303 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d89xt" event={"ID":"8f9f6949-50c9-4d7e-b75f-b990a642d3a7","Type":"ContainerStarted","Data":"c6a19ebdba4a01eeac7f3b58cbb3deb0b3a4ac0e79127df8de696d012044f177"} Dec 10 15:25:50 crc kubenswrapper[4755]: I1210 15:25:50.053815 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-rbjcx" podStartSLOduration=12.053794707 podStartE2EDuration="12.053794707s" podCreationTimestamp="2025-12-10 15:25:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:25:50.051883977 +0000 UTC m=+146.652767619" watchObservedRunningTime="2025-12-10 15:25:50.053794707 +0000 UTC m=+146.654678339" Dec 10 15:25:50 crc kubenswrapper[4755]: I1210 15:25:50.138864 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ck7mx"] Dec 10 15:25:50 crc kubenswrapper[4755]: W1210 15:25:50.145779 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod609745aa_bdb9_440a_b029_fcd706ed320e.slice/crio-afb60fc6d7c531dbd0d1160dd770f5108c23f7c16fa6f1f5c2240b901f36fda9 WatchSource:0}: Error finding container afb60fc6d7c531dbd0d1160dd770f5108c23f7c16fa6f1f5c2240b901f36fda9: Status 404 returned error can't find the container with id afb60fc6d7c531dbd0d1160dd770f5108c23f7c16fa6f1f5c2240b901f36fda9 Dec 10 15:25:50 crc kubenswrapper[4755]: I1210 15:25:50.214437 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 10 15:25:50 crc kubenswrapper[4755]: I1210 15:25:50.215146 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 10 15:25:50 crc kubenswrapper[4755]: I1210 15:25:50.217583 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Dec 10 15:25:50 crc kubenswrapper[4755]: I1210 15:25:50.219979 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Dec 10 15:25:50 crc kubenswrapper[4755]: I1210 15:25:50.221146 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 10 15:25:50 crc kubenswrapper[4755]: I1210 15:25:50.295036 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9slgn"] Dec 10 15:25:50 crc kubenswrapper[4755]: W1210 15:25:50.295134 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d74a8b0_992e_46b5_9364_cc82c84ac2d8.slice/crio-e6ec8fff08a463085a224c901fbae701bea3684412b82b1e1e094ccac93fe13d WatchSource:0}: Error finding container e6ec8fff08a463085a224c901fbae701bea3684412b82b1e1e094ccac93fe13d: Status 404 returned error can't find the container with id e6ec8fff08a463085a224c901fbae701bea3684412b82b1e1e094ccac93fe13d Dec 10 15:25:50 crc kubenswrapper[4755]: I1210 15:25:50.395863 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c51e1051-4c68-4a93-a2df-6eb4deb6f3ca-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c51e1051-4c68-4a93-a2df-6eb4deb6f3ca\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 10 15:25:50 crc kubenswrapper[4755]: I1210 15:25:50.396149 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c51e1051-4c68-4a93-a2df-6eb4deb6f3ca-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c51e1051-4c68-4a93-a2df-6eb4deb6f3ca\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 10 15:25:50 crc kubenswrapper[4755]: I1210 15:25:50.496961 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c51e1051-4c68-4a93-a2df-6eb4deb6f3ca-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c51e1051-4c68-4a93-a2df-6eb4deb6f3ca\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 10 15:25:50 crc kubenswrapper[4755]: I1210 15:25:50.497083 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c51e1051-4c68-4a93-a2df-6eb4deb6f3ca-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c51e1051-4c68-4a93-a2df-6eb4deb6f3ca\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 10 15:25:50 crc kubenswrapper[4755]: I1210 15:25:50.497610 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c51e1051-4c68-4a93-a2df-6eb4deb6f3ca-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c51e1051-4c68-4a93-a2df-6eb4deb6f3ca\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 10 15:25:50 crc kubenswrapper[4755]: I1210 15:25:50.516569 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/c51e1051-4c68-4a93-a2df-6eb4deb6f3ca-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c51e1051-4c68-4a93-a2df-6eb4deb6f3ca\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 10 15:25:50 crc kubenswrapper[4755]: I1210 15:25:50.552198 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 10 15:25:50 crc kubenswrapper[4755]: I1210 15:25:50.743710 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 10 15:25:50 crc kubenswrapper[4755]: I1210 15:25:50.746823 4755 patch_prober.go:28] interesting pod/router-default-5444994796-ts8wv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 10 15:25:50 crc kubenswrapper[4755]: [-]has-synced failed: reason withheld Dec 10 15:25:50 crc kubenswrapper[4755]: [+]process-running ok Dec 10 15:25:50 crc kubenswrapper[4755]: healthz check failed Dec 10 15:25:50 crc kubenswrapper[4755]: I1210 15:25:50.746877 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ts8wv" podUID="dd9b14b8-6d7c-4eeb-9748-a2e99daa4293" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 10 15:25:50 crc kubenswrapper[4755]: W1210 15:25:50.752222 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podc51e1051_4c68_4a93_a2df_6eb4deb6f3ca.slice/crio-842bb42ce33808477b4e6a78916adfd330914ab8b9612946eeba7b73b56dbc0f WatchSource:0}: Error finding container 842bb42ce33808477b4e6a78916adfd330914ab8b9612946eeba7b73b56dbc0f: Status 404 returned error can't find the container with id 842bb42ce33808477b4e6a78916adfd330914ab8b9612946eeba7b73b56dbc0f Dec 10 15:25:50 crc kubenswrapper[4755]: I1210 15:25:50.880278 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qccqv"] Dec 10 15:25:50 crc kubenswrapper[4755]: I1210 15:25:50.881489 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qccqv" Dec 10 15:25:50 crc kubenswrapper[4755]: I1210 15:25:50.883522 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 10 15:25:50 crc kubenswrapper[4755]: I1210 15:25:50.889640 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qccqv"] Dec 10 15:25:51 crc kubenswrapper[4755]: I1210 15:25:51.003196 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8rg8\" (UniqueName: \"kubernetes.io/projected/4ca3fb6e-045b-4025-9d18-eb0a13d9128a-kube-api-access-g8rg8\") pod \"redhat-marketplace-qccqv\" (UID: \"4ca3fb6e-045b-4025-9d18-eb0a13d9128a\") " pod="openshift-marketplace/redhat-marketplace-qccqv" Dec 10 15:25:51 crc kubenswrapper[4755]: I1210 15:25:51.003582 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ca3fb6e-045b-4025-9d18-eb0a13d9128a-catalog-content\") pod \"redhat-marketplace-qccqv\" (UID: \"4ca3fb6e-045b-4025-9d18-eb0a13d9128a\") " pod="openshift-marketplace/redhat-marketplace-qccqv" Dec 10 15:25:51 crc kubenswrapper[4755]: I1210 15:25:51.003616 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ca3fb6e-045b-4025-9d18-eb0a13d9128a-utilities\") pod \"redhat-marketplace-qccqv\" (UID: \"4ca3fb6e-045b-4025-9d18-eb0a13d9128a\") " pod="openshift-marketplace/redhat-marketplace-qccqv" Dec 10 15:25:51 crc kubenswrapper[4755]: I1210 15:25:51.006536 4755 patch_prober.go:28] interesting pod/downloads-7954f5f757-gbvgh container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Dec 10 15:25:51 crc kubenswrapper[4755]: I1210 15:25:51.006585 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-gbvgh" podUID="c6d1d322-1622-41d5-afb0-c441b346b8bf" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Dec 10 15:25:51 crc kubenswrapper[4755]: I1210 15:25:51.007666 4755 patch_prober.go:28] interesting pod/downloads-7954f5f757-gbvgh container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Dec 10 15:25:51 crc kubenswrapper[4755]: I1210 15:25:51.007697 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-gbvgh" podUID="c6d1d322-1622-41d5-afb0-c441b346b8bf" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Dec 10 15:25:51 crc kubenswrapper[4755]: I1210 15:25:51.044112 4755 generic.go:334] "Generic (PLEG): container finished" podID="8d212083-766e-4b88-ba08-8570f05f6c94" containerID="5a8f3ef990acd86ea0a2ed264f1039a0a313e1e53fa3e09e93fb45cde45f47fd" exitCode=0 Dec 10 15:25:51 crc kubenswrapper[4755]: I1210 15:25:51.044201 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29422995-knlb5" 
event={"ID":"8d212083-766e-4b88-ba08-8570f05f6c94","Type":"ContainerDied","Data":"5a8f3ef990acd86ea0a2ed264f1039a0a313e1e53fa3e09e93fb45cde45f47fd"} Dec 10 15:25:51 crc kubenswrapper[4755]: I1210 15:25:51.047146 4755 generic.go:334] "Generic (PLEG): container finished" podID="1d74a8b0-992e-46b5-9364-cc82c84ac2d8" containerID="87ea92a9701ff0033bf5ae783ef59b7e3d51d71b60540c5806f4ff9a72a8c72e" exitCode=0 Dec 10 15:25:51 crc kubenswrapper[4755]: I1210 15:25:51.047208 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9slgn" event={"ID":"1d74a8b0-992e-46b5-9364-cc82c84ac2d8","Type":"ContainerDied","Data":"87ea92a9701ff0033bf5ae783ef59b7e3d51d71b60540c5806f4ff9a72a8c72e"} Dec 10 15:25:51 crc kubenswrapper[4755]: I1210 15:25:51.047233 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9slgn" event={"ID":"1d74a8b0-992e-46b5-9364-cc82c84ac2d8","Type":"ContainerStarted","Data":"e6ec8fff08a463085a224c901fbae701bea3684412b82b1e1e094ccac93fe13d"} Dec 10 15:25:51 crc kubenswrapper[4755]: I1210 15:25:51.049264 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"c51e1051-4c68-4a93-a2df-6eb4deb6f3ca","Type":"ContainerStarted","Data":"842bb42ce33808477b4e6a78916adfd330914ab8b9612946eeba7b73b56dbc0f"} Dec 10 15:25:51 crc kubenswrapper[4755]: I1210 15:25:51.051350 4755 generic.go:334] "Generic (PLEG): container finished" podID="609745aa-bdb9-440a-b029-fcd706ed320e" containerID="48f1cc7970e12f317265de2b7a2407ee911f71b657799143170033c7ac0fb292" exitCode=0 Dec 10 15:25:51 crc kubenswrapper[4755]: I1210 15:25:51.051477 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ck7mx" event={"ID":"609745aa-bdb9-440a-b029-fcd706ed320e","Type":"ContainerDied","Data":"48f1cc7970e12f317265de2b7a2407ee911f71b657799143170033c7ac0fb292"} Dec 10 15:25:51 crc kubenswrapper[4755]: I1210 15:25:51.051526 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ck7mx" event={"ID":"609745aa-bdb9-440a-b029-fcd706ed320e","Type":"ContainerStarted","Data":"afb60fc6d7c531dbd0d1160dd770f5108c23f7c16fa6f1f5c2240b901f36fda9"} Dec 10 15:25:51 crc kubenswrapper[4755]: I1210 15:25:51.054170 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-mqq47" event={"ID":"1189a5c2-6e43-4e4b-8181-d2bd78031673","Type":"ContainerStarted","Data":"0f4e5839a9d0857d1dbeb93bf6c7272db758ce854ae2d68e86fe9feb32381b28"} Dec 10 15:25:51 crc kubenswrapper[4755]: I1210 15:25:51.054234 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-mqq47" event={"ID":"1189a5c2-6e43-4e4b-8181-d2bd78031673","Type":"ContainerStarted","Data":"fb834714129853b62b5ac115923507eecd20ae33bd8c07b59e3ee2c77832d659"} Dec 10 15:25:51 crc kubenswrapper[4755]: I1210 15:25:51.054564 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-mqq47" Dec 10 15:25:51 crc kubenswrapper[4755]: I1210 15:25:51.103362 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-mqq47" podStartSLOduration=128.103302662 podStartE2EDuration="2m8.103302662s" podCreationTimestamp="2025-12-10 15:23:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:25:51.103113127 +0000 UTC m=+147.703996759" watchObservedRunningTime="2025-12-10 15:25:51.103302662 +0000 UTC m=+147.704186304" Dec 10 15:25:51 crc kubenswrapper[4755]: I1210 15:25:51.110066 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ca3fb6e-045b-4025-9d18-eb0a13d9128a-utilities\") pod \"redhat-marketplace-qccqv\" (UID: \"4ca3fb6e-045b-4025-9d18-eb0a13d9128a\") " pod="openshift-marketplace/redhat-marketplace-qccqv" Dec 10 15:25:51 crc kubenswrapper[4755]: I1210 15:25:51.110276 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8rg8\" (UniqueName: \"kubernetes.io/projected/4ca3fb6e-045b-4025-9d18-eb0a13d9128a-kube-api-access-g8rg8\") pod \"redhat-marketplace-qccqv\" (UID: \"4ca3fb6e-045b-4025-9d18-eb0a13d9128a\") " pod="openshift-marketplace/redhat-marketplace-qccqv" Dec 10 15:25:51 crc kubenswrapper[4755]: I1210 15:25:51.110348 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ca3fb6e-045b-4025-9d18-eb0a13d9128a-catalog-content\") pod \"redhat-marketplace-qccqv\" (UID: \"4ca3fb6e-045b-4025-9d18-eb0a13d9128a\") " pod="openshift-marketplace/redhat-marketplace-qccqv" Dec 10 15:25:51 crc kubenswrapper[4755]: I1210 15:25:51.111218 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ca3fb6e-045b-4025-9d18-eb0a13d9128a-utilities\") pod \"redhat-marketplace-qccqv\" (UID: \"4ca3fb6e-045b-4025-9d18-eb0a13d9128a\") " pod="openshift-marketplace/redhat-marketplace-qccqv" Dec 10 15:25:51 crc kubenswrapper[4755]: I1210 15:25:51.111253 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ca3fb6e-045b-4025-9d18-eb0a13d9128a-catalog-content\") pod \"redhat-marketplace-qccqv\" (UID: \"4ca3fb6e-045b-4025-9d18-eb0a13d9128a\") " pod="openshift-marketplace/redhat-marketplace-qccqv" Dec 10 15:25:51 crc kubenswrapper[4755]: I1210 15:25:51.151899 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8rg8\" (UniqueName: \"kubernetes.io/projected/4ca3fb6e-045b-4025-9d18-eb0a13d9128a-kube-api-access-g8rg8\") pod \"redhat-marketplace-qccqv\" (UID: \"4ca3fb6e-045b-4025-9d18-eb0a13d9128a\") " pod="openshift-marketplace/redhat-marketplace-qccqv" Dec 10 15:25:51 crc kubenswrapper[4755]: I1210 15:25:51.224839 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qccqv" Dec 10 15:25:51 crc kubenswrapper[4755]: I1210 15:25:51.235826 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-n6qb5" Dec 10 15:25:51 crc kubenswrapper[4755]: I1210 15:25:51.236019 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-n6qb5" Dec 10 15:25:51 crc kubenswrapper[4755]: I1210 15:25:51.237309 4755 patch_prober.go:28] interesting pod/console-f9d7485db-n6qb5 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Dec 10 15:25:51 crc kubenswrapper[4755]: I1210 15:25:51.237368 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-n6qb5" podUID="24e3bc3c-7e93-4c91-b0a2-85877004fafc" containerName="console" probeResult="failure" output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" Dec 10 15:25:51 crc kubenswrapper[4755]: I1210 15:25:51.252919 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-xx85g" Dec 10 15:25:51 crc kubenswrapper[4755]: I1210 15:25:51.252961 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-xx85g" Dec 10 15:25:51 crc kubenswrapper[4755]: I1210 15:25:51.259591 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-xx85g" Dec 10 15:25:51 crc kubenswrapper[4755]: I1210 15:25:51.304352 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wvrkv"] Dec 10 15:25:51 crc kubenswrapper[4755]: I1210 15:25:51.318042 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wvrkv"] Dec 10 15:25:51 crc kubenswrapper[4755]: I1210 15:25:51.318141 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wvrkv" Dec 10 15:25:51 crc kubenswrapper[4755]: I1210 15:25:51.416623 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84e93abf-4130-4ca2-b263-5a4b29729988-catalog-content\") pod \"redhat-marketplace-wvrkv\" (UID: \"84e93abf-4130-4ca2-b263-5a4b29729988\") " pod="openshift-marketplace/redhat-marketplace-wvrkv" Dec 10 15:25:51 crc kubenswrapper[4755]: I1210 15:25:51.416716 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84e93abf-4130-4ca2-b263-5a4b29729988-utilities\") pod \"redhat-marketplace-wvrkv\" (UID: \"84e93abf-4130-4ca2-b263-5a4b29729988\") " pod="openshift-marketplace/redhat-marketplace-wvrkv" Dec 10 15:25:51 crc kubenswrapper[4755]: I1210 15:25:51.416881 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbfl9\" (UniqueName: \"kubernetes.io/projected/84e93abf-4130-4ca2-b263-5a4b29729988-kube-api-access-kbfl9\") pod \"redhat-marketplace-wvrkv\" (UID: \"84e93abf-4130-4ca2-b263-5a4b29729988\") " pod="openshift-marketplace/redhat-marketplace-wvrkv" Dec 10 15:25:51 crc kubenswrapper[4755]: I1210 15:25:51.518622 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84e93abf-4130-4ca2-b263-5a4b29729988-utilities\") pod \"redhat-marketplace-wvrkv\" (UID: \"84e93abf-4130-4ca2-b263-5a4b29729988\") " pod="openshift-marketplace/redhat-marketplace-wvrkv" Dec 10 15:25:51 crc kubenswrapper[4755]: I1210 15:25:51.519064 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbfl9\" (UniqueName: \"kubernetes.io/projected/84e93abf-4130-4ca2-b263-5a4b29729988-kube-api-access-kbfl9\") pod \"redhat-marketplace-wvrkv\" (UID: \"84e93abf-4130-4ca2-b263-5a4b29729988\") " pod="openshift-marketplace/redhat-marketplace-wvrkv" Dec 10 15:25:51 crc kubenswrapper[4755]: I1210 15:25:51.519128 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84e93abf-4130-4ca2-b263-5a4b29729988-catalog-content\") pod \"redhat-marketplace-wvrkv\" (UID: \"84e93abf-4130-4ca2-b263-5a4b29729988\") " pod="openshift-marketplace/redhat-marketplace-wvrkv" Dec 10 15:25:51 crc kubenswrapper[4755]: I1210 15:25:51.519242 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84e93abf-4130-4ca2-b263-5a4b29729988-utilities\") pod \"redhat-marketplace-wvrkv\" (UID: \"84e93abf-4130-4ca2-b263-5a4b29729988\") " pod="openshift-marketplace/redhat-marketplace-wvrkv" Dec 10 15:25:51 crc kubenswrapper[4755]: I1210 15:25:51.519950 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84e93abf-4130-4ca2-b263-5a4b29729988-catalog-content\") pod \"redhat-marketplace-wvrkv\" (UID: \"84e93abf-4130-4ca2-b263-5a4b29729988\") " pod="openshift-marketplace/redhat-marketplace-wvrkv" Dec 10 15:25:51 crc kubenswrapper[4755]: I1210 15:25:51.552123 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbfl9\" (UniqueName: \"kubernetes.io/projected/84e93abf-4130-4ca2-b263-5a4b29729988-kube-api-access-kbfl9\") pod 
\"redhat-marketplace-wvrkv\" (UID: \"84e93abf-4130-4ca2-b263-5a4b29729988\") " pod="openshift-marketplace/redhat-marketplace-wvrkv" Dec 10 15:25:51 crc kubenswrapper[4755]: I1210 15:25:51.582710 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qccqv"] Dec 10 15:25:51 crc kubenswrapper[4755]: W1210 15:25:51.591950 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ca3fb6e_045b_4025_9d18_eb0a13d9128a.slice/crio-3f36b90b7d6d2baeb74ad15e39a6d299dbfe2a4c400ec5f54aed4ffb32524aba WatchSource:0}: Error finding container 3f36b90b7d6d2baeb74ad15e39a6d299dbfe2a4c400ec5f54aed4ffb32524aba: Status 404 returned error can't find the container with id 3f36b90b7d6d2baeb74ad15e39a6d299dbfe2a4c400ec5f54aed4ffb32524aba Dec 10 15:25:51 crc kubenswrapper[4755]: I1210 15:25:51.637088 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wvrkv" Dec 10 15:25:51 crc kubenswrapper[4755]: I1210 15:25:51.699767 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-5bvrp" Dec 10 15:25:51 crc kubenswrapper[4755]: I1210 15:25:51.721706 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 15:25:51 crc kubenswrapper[4755]: I1210 15:25:51.721758 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 15:25:51 crc kubenswrapper[4755]: I1210 15:25:51.721797 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 15:25:51 crc kubenswrapper[4755]: I1210 15:25:51.721862 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 15:25:51 crc kubenswrapper[4755]: I1210 15:25:51.727365 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 15:25:51 crc kubenswrapper[4755]: I1210 15:25:51.727771 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 15:25:51 crc kubenswrapper[4755]: I1210 15:25:51.728146 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 15:25:51 crc kubenswrapper[4755]: I1210 15:25:51.743529 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-ts8wv" Dec 10 15:25:51 crc kubenswrapper[4755]: I1210 15:25:51.748923 4755 patch_prober.go:28] interesting pod/router-default-5444994796-ts8wv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 10 15:25:51 crc kubenswrapper[4755]: [-]has-synced failed: reason withheld Dec 10 15:25:51 crc kubenswrapper[4755]: [+]process-running ok Dec 10 15:25:51 crc kubenswrapper[4755]: healthz check failed Dec 10 15:25:51 crc kubenswrapper[4755]: I1210 15:25:51.748972 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ts8wv" podUID="dd9b14b8-6d7c-4eeb-9748-a2e99daa4293" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 10 15:25:51 crc kubenswrapper[4755]: I1210 15:25:51.848141 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 15:25:51 crc kubenswrapper[4755]: I1210 15:25:51.886956 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-m45gd"] Dec 10 15:25:51 crc kubenswrapper[4755]: I1210 15:25:51.900566 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m45gd" Dec 10 15:25:51 crc kubenswrapper[4755]: I1210 15:25:51.903208 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 10 15:25:51 crc kubenswrapper[4755]: I1210 15:25:51.907317 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m45gd"] Dec 10 15:25:51 crc kubenswrapper[4755]: I1210 15:25:51.981930 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 15:25:51 crc kubenswrapper[4755]: I1210 15:25:51.984813 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wvrkv"] Dec 10 15:25:51 crc kubenswrapper[4755]: I1210 15:25:51.987623 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 15:25:52 crc kubenswrapper[4755]: I1210 15:25:52.005778 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 15:25:52 crc kubenswrapper[4755]: I1210 15:25:52.025368 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a08c52a2-16c8-48eb-af20-472b5202eb85-utilities\") pod \"redhat-operators-m45gd\" (UID: \"a08c52a2-16c8-48eb-af20-472b5202eb85\") " pod="openshift-marketplace/redhat-operators-m45gd" Dec 10 15:25:52 crc kubenswrapper[4755]: I1210 15:25:52.025581 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhm9k\" (UniqueName: \"kubernetes.io/projected/a08c52a2-16c8-48eb-af20-472b5202eb85-kube-api-access-fhm9k\") pod \"redhat-operators-m45gd\" (UID: \"a08c52a2-16c8-48eb-af20-472b5202eb85\") " pod="openshift-marketplace/redhat-operators-m45gd" Dec 10 15:25:52 crc kubenswrapper[4755]: I1210 15:25:52.025615 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a08c52a2-16c8-48eb-af20-472b5202eb85-catalog-content\") pod \"redhat-operators-m45gd\" (UID: \"a08c52a2-16c8-48eb-af20-472b5202eb85\") " pod="openshift-marketplace/redhat-operators-m45gd" Dec 10 15:25:52 crc kubenswrapper[4755]: I1210 15:25:52.090967 4755 generic.go:334] "Generic (PLEG): container finished" podID="c51e1051-4c68-4a93-a2df-6eb4deb6f3ca" containerID="7e7a39cde5b257fe28c43fcea5977045c62d68503c8abb28478209089ba18af8" exitCode=0 Dec 10 15:25:52 crc kubenswrapper[4755]: I1210 15:25:52.091285 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"c51e1051-4c68-4a93-a2df-6eb4deb6f3ca","Type":"ContainerDied","Data":"7e7a39cde5b257fe28c43fcea5977045c62d68503c8abb28478209089ba18af8"} Dec 10 15:25:52 crc kubenswrapper[4755]: I1210 15:25:52.092866 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wvrkv" event={"ID":"84e93abf-4130-4ca2-b263-5a4b29729988","Type":"ContainerStarted","Data":"f6cbc5551c6159a3087b26320685cbd001965246591af8c764208d6353dbc3a5"} Dec 10 15:25:52 crc kubenswrapper[4755]: I1210 15:25:52.095571 4755 generic.go:334] "Generic (PLEG): container finished" podID="4ca3fb6e-045b-4025-9d18-eb0a13d9128a" containerID="be9a851a2193a391a03c67d8ef8a753e6fb1180aa6156a1e001fa41e56d663b9" exitCode=0 Dec 10 15:25:52 crc kubenswrapper[4755]: I1210 15:25:52.096244 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qccqv" event={"ID":"4ca3fb6e-045b-4025-9d18-eb0a13d9128a","Type":"ContainerDied","Data":"be9a851a2193a391a03c67d8ef8a753e6fb1180aa6156a1e001fa41e56d663b9"} Dec 10 15:25:52 crc kubenswrapper[4755]: I1210 15:25:52.096267 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qccqv" event={"ID":"4ca3fb6e-045b-4025-9d18-eb0a13d9128a","Type":"ContainerStarted","Data":"3f36b90b7d6d2baeb74ad15e39a6d299dbfe2a4c400ec5f54aed4ffb32524aba"} Dec 10 15:25:52 crc kubenswrapper[4755]: I1210 15:25:52.102982 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-xx85g" Dec 10 
15:25:52 crc kubenswrapper[4755]: I1210 15:25:52.128613 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhm9k\" (UniqueName: \"kubernetes.io/projected/a08c52a2-16c8-48eb-af20-472b5202eb85-kube-api-access-fhm9k\") pod \"redhat-operators-m45gd\" (UID: \"a08c52a2-16c8-48eb-af20-472b5202eb85\") " pod="openshift-marketplace/redhat-operators-m45gd" Dec 10 15:25:52 crc kubenswrapper[4755]: I1210 15:25:52.128661 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a08c52a2-16c8-48eb-af20-472b5202eb85-catalog-content\") pod \"redhat-operators-m45gd\" (UID: \"a08c52a2-16c8-48eb-af20-472b5202eb85\") " pod="openshift-marketplace/redhat-operators-m45gd" Dec 10 15:25:52 crc kubenswrapper[4755]: I1210 15:25:52.128698 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a08c52a2-16c8-48eb-af20-472b5202eb85-utilities\") pod \"redhat-operators-m45gd\" (UID: \"a08c52a2-16c8-48eb-af20-472b5202eb85\") " pod="openshift-marketplace/redhat-operators-m45gd" Dec 10 15:25:52 crc kubenswrapper[4755]: I1210 15:25:52.129199 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a08c52a2-16c8-48eb-af20-472b5202eb85-utilities\") pod \"redhat-operators-m45gd\" (UID: \"a08c52a2-16c8-48eb-af20-472b5202eb85\") " pod="openshift-marketplace/redhat-operators-m45gd" Dec 10 15:25:52 crc kubenswrapper[4755]: I1210 15:25:52.129960 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a08c52a2-16c8-48eb-af20-472b5202eb85-catalog-content\") pod \"redhat-operators-m45gd\" (UID: \"a08c52a2-16c8-48eb-af20-472b5202eb85\") " pod="openshift-marketplace/redhat-operators-m45gd" Dec 10 15:25:52 crc kubenswrapper[4755]: I1210 15:25:52.166277 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhm9k\" (UniqueName: \"kubernetes.io/projected/a08c52a2-16c8-48eb-af20-472b5202eb85-kube-api-access-fhm9k\") pod \"redhat-operators-m45gd\" (UID: \"a08c52a2-16c8-48eb-af20-472b5202eb85\") " pod="openshift-marketplace/redhat-operators-m45gd" Dec 10 15:25:52 crc kubenswrapper[4755]: I1210 15:25:52.250198 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m45gd" Dec 10 15:25:52 crc kubenswrapper[4755]: I1210 15:25:52.296577 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-j2tvl"] Dec 10 15:25:52 crc kubenswrapper[4755]: I1210 15:25:52.297764 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-j2tvl" Dec 10 15:25:52 crc kubenswrapper[4755]: I1210 15:25:52.317641 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j2tvl"] Dec 10 15:25:52 crc kubenswrapper[4755]: I1210 15:25:52.437202 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f0e6886-1af8-466d-929d-0e0f6ade8a91-catalog-content\") pod \"redhat-operators-j2tvl\" (UID: \"3f0e6886-1af8-466d-929d-0e0f6ade8a91\") " pod="openshift-marketplace/redhat-operators-j2tvl" Dec 10 15:25:52 crc kubenswrapper[4755]: I1210 15:25:52.437612 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f0e6886-1af8-466d-929d-0e0f6ade8a91-utilities\") pod \"redhat-operators-j2tvl\" (UID: \"3f0e6886-1af8-466d-929d-0e0f6ade8a91\") " pod="openshift-marketplace/redhat-operators-j2tvl" Dec 10 15:25:52 crc kubenswrapper[4755]: I1210 15:25:52.437682 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pggdw\" (UniqueName: \"kubernetes.io/projected/3f0e6886-1af8-466d-929d-0e0f6ade8a91-kube-api-access-pggdw\") pod \"redhat-operators-j2tvl\" (UID: \"3f0e6886-1af8-466d-929d-0e0f6ade8a91\") " pod="openshift-marketplace/redhat-operators-j2tvl" Dec 10 15:25:52 crc kubenswrapper[4755]: I1210 15:25:52.540507 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f0e6886-1af8-466d-929d-0e0f6ade8a91-catalog-content\") pod \"redhat-operators-j2tvl\" (UID: \"3f0e6886-1af8-466d-929d-0e0f6ade8a91\") " pod="openshift-marketplace/redhat-operators-j2tvl" Dec 10 15:25:52 crc kubenswrapper[4755]: I1210 15:25:52.541735 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f0e6886-1af8-466d-929d-0e0f6ade8a91-utilities\") pod \"redhat-operators-j2tvl\" (UID: \"3f0e6886-1af8-466d-929d-0e0f6ade8a91\") " pod="openshift-marketplace/redhat-operators-j2tvl" Dec 10 15:25:52 crc kubenswrapper[4755]: I1210 15:25:52.541793 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pggdw\" (UniqueName: \"kubernetes.io/projected/3f0e6886-1af8-466d-929d-0e0f6ade8a91-kube-api-access-pggdw\") pod \"redhat-operators-j2tvl\" (UID: \"3f0e6886-1af8-466d-929d-0e0f6ade8a91\") " pod="openshift-marketplace/redhat-operators-j2tvl" Dec 10 15:25:52 crc kubenswrapper[4755]: I1210 15:25:52.542658 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f0e6886-1af8-466d-929d-0e0f6ade8a91-catalog-content\") pod \"redhat-operators-j2tvl\" (UID: \"3f0e6886-1af8-466d-929d-0e0f6ade8a91\") " pod="openshift-marketplace/redhat-operators-j2tvl" Dec 10 15:25:52 crc kubenswrapper[4755]: I1210 15:25:52.542992 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f0e6886-1af8-466d-929d-0e0f6ade8a91-utilities\") pod \"redhat-operators-j2tvl\" (UID: \"3f0e6886-1af8-466d-929d-0e0f6ade8a91\") " pod="openshift-marketplace/redhat-operators-j2tvl" Dec 10 15:25:52 crc kubenswrapper[4755]: I1210 15:25:52.652338 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29422995-knlb5" Dec 10 15:25:52 crc kubenswrapper[4755]: I1210 15:25:52.681366 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pggdw\" (UniqueName: \"kubernetes.io/projected/3f0e6886-1af8-466d-929d-0e0f6ade8a91-kube-api-access-pggdw\") pod \"redhat-operators-j2tvl\" (UID: \"3f0e6886-1af8-466d-929d-0e0f6ade8a91\") " pod="openshift-marketplace/redhat-operators-j2tvl" Dec 10 15:25:52 crc kubenswrapper[4755]: I1210 15:25:52.745894 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d212083-766e-4b88-ba08-8570f05f6c94-config-volume\") pod \"8d212083-766e-4b88-ba08-8570f05f6c94\" (UID: \"8d212083-766e-4b88-ba08-8570f05f6c94\") " Dec 10 15:25:52 crc kubenswrapper[4755]: I1210 15:25:52.746977 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d212083-766e-4b88-ba08-8570f05f6c94-config-volume" (OuterVolumeSpecName: "config-volume") pod "8d212083-766e-4b88-ba08-8570f05f6c94" (UID: "8d212083-766e-4b88-ba08-8570f05f6c94"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:25:52 crc kubenswrapper[4755]: I1210 15:25:52.751031 4755 patch_prober.go:28] interesting pod/router-default-5444994796-ts8wv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 10 15:25:52 crc kubenswrapper[4755]: [-]has-synced failed: reason withheld Dec 10 15:25:52 crc kubenswrapper[4755]: [+]process-running ok Dec 10 15:25:52 crc kubenswrapper[4755]: healthz check failed Dec 10 15:25:52 crc kubenswrapper[4755]: I1210 15:25:52.751115 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ts8wv" podUID="dd9b14b8-6d7c-4eeb-9748-a2e99daa4293" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 10 15:25:52 crc kubenswrapper[4755]: I1210 15:25:52.751222 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xb9ll\" (UniqueName: \"kubernetes.io/projected/8d212083-766e-4b88-ba08-8570f05f6c94-kube-api-access-xb9ll\") pod \"8d212083-766e-4b88-ba08-8570f05f6c94\" (UID: \"8d212083-766e-4b88-ba08-8570f05f6c94\") " Dec 10 15:25:52 crc kubenswrapper[4755]: I1210 15:25:52.751560 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8d212083-766e-4b88-ba08-8570f05f6c94-secret-volume\") pod \"8d212083-766e-4b88-ba08-8570f05f6c94\" (UID: \"8d212083-766e-4b88-ba08-8570f05f6c94\") " Dec 10 15:25:52 crc kubenswrapper[4755]: I1210 15:25:52.752259 4755 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d212083-766e-4b88-ba08-8570f05f6c94-config-volume\") on node \"crc\" DevicePath \"\"" Dec 10 15:25:52 crc kubenswrapper[4755]: I1210 15:25:52.756185 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d212083-766e-4b88-ba08-8570f05f6c94-kube-api-access-xb9ll" (OuterVolumeSpecName: "kube-api-access-xb9ll") pod "8d212083-766e-4b88-ba08-8570f05f6c94" (UID: "8d212083-766e-4b88-ba08-8570f05f6c94"). InnerVolumeSpecName "kube-api-access-xb9ll". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:25:52 crc kubenswrapper[4755]: I1210 15:25:52.757872 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d212083-766e-4b88-ba08-8570f05f6c94-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8d212083-766e-4b88-ba08-8570f05f6c94" (UID: "8d212083-766e-4b88-ba08-8570f05f6c94"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:25:52 crc kubenswrapper[4755]: I1210 15:25:52.855515 4755 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8d212083-766e-4b88-ba08-8570f05f6c94-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 10 15:25:52 crc kubenswrapper[4755]: I1210 15:25:52.855852 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xb9ll\" (UniqueName: \"kubernetes.io/projected/8d212083-766e-4b88-ba08-8570f05f6c94-kube-api-access-xb9ll\") on node \"crc\" DevicePath \"\"" Dec 10 15:25:52 crc kubenswrapper[4755]: I1210 15:25:52.953772 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j2tvl" Dec 10 15:25:52 crc kubenswrapper[4755]: I1210 15:25:52.963433 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m45gd"] Dec 10 15:25:52 crc kubenswrapper[4755]: W1210 15:25:52.974501 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda08c52a2_16c8_48eb_af20_472b5202eb85.slice/crio-5d00154c3fc33f12babc8605010ce6270b72167ba5d064a0d87d5aa7cc709195 WatchSource:0}: Error finding container 5d00154c3fc33f12babc8605010ce6270b72167ba5d064a0d87d5aa7cc709195: Status 404 returned error can't find the container with id 5d00154c3fc33f12babc8605010ce6270b72167ba5d064a0d87d5aa7cc709195 Dec 10 15:25:53 crc kubenswrapper[4755]: I1210 15:25:53.154939 4755 generic.go:334] "Generic (PLEG): container finished" podID="84e93abf-4130-4ca2-b263-5a4b29729988" containerID="82f1c5f7620fcd0558a34015b6415531594ca50858e2a57733b627e51f7b6315" exitCode=0 Dec 10 15:25:53 crc kubenswrapper[4755]: I1210 15:25:53.155895 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wvrkv" event={"ID":"84e93abf-4130-4ca2-b263-5a4b29729988","Type":"ContainerDied","Data":"82f1c5f7620fcd0558a34015b6415531594ca50858e2a57733b627e51f7b6315"} Dec 10 15:25:53 crc kubenswrapper[4755]: I1210 15:25:53.159804 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"094639c8dfec4c72fd990bd4a874973902c8ed58c7383801d258e27b091ac394"} Dec 10 15:25:53 crc kubenswrapper[4755]: I1210 15:25:53.166596 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29422995-knlb5" Dec 10 15:25:53 crc kubenswrapper[4755]: I1210 15:25:53.166922 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29422995-knlb5" event={"ID":"8d212083-766e-4b88-ba08-8570f05f6c94","Type":"ContainerDied","Data":"eee42504cbad9db430318ba88a34f0efb4b388e5aab6832ae2663c9a51a1febd"} Dec 10 15:25:53 crc kubenswrapper[4755]: I1210 15:25:53.166955 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eee42504cbad9db430318ba88a34f0efb4b388e5aab6832ae2663c9a51a1febd" Dec 10 15:25:53 crc kubenswrapper[4755]: I1210 15:25:53.181723 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"f1d1febd632d02a267e9cbb7778b77d333fab364ce24771af11ede09d95026ed"} Dec 10 15:25:53 crc kubenswrapper[4755]: I1210 15:25:53.181767 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"07f6de378b7c6fe53b623ee8e926be5d49ead148fb3baa5f4d30110c4e088b49"} Dec 10 15:25:53 crc kubenswrapper[4755]: I1210 15:25:53.216631 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"a62701bce49c57534f56220bf84548db73400d737a653d5d9df01203638b556e"} Dec 10 15:25:53 crc kubenswrapper[4755]: I1210 15:25:53.221366 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m45gd" event={"ID":"a08c52a2-16c8-48eb-af20-472b5202eb85","Type":"ContainerStarted","Data":"5d00154c3fc33f12babc8605010ce6270b72167ba5d064a0d87d5aa7cc709195"} Dec 10 15:25:53 crc kubenswrapper[4755]: I1210 15:25:53.483718 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 10 15:25:53 crc kubenswrapper[4755]: I1210 15:25:53.488300 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j2tvl"] Dec 10 15:25:53 crc kubenswrapper[4755]: I1210 15:25:53.665746 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c51e1051-4c68-4a93-a2df-6eb4deb6f3ca-kubelet-dir\") pod \"c51e1051-4c68-4a93-a2df-6eb4deb6f3ca\" (UID: \"c51e1051-4c68-4a93-a2df-6eb4deb6f3ca\") " Dec 10 15:25:53 crc kubenswrapper[4755]: I1210 15:25:53.665888 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c51e1051-4c68-4a93-a2df-6eb4deb6f3ca-kube-api-access\") pod \"c51e1051-4c68-4a93-a2df-6eb4deb6f3ca\" (UID: \"c51e1051-4c68-4a93-a2df-6eb4deb6f3ca\") " Dec 10 15:25:53 crc kubenswrapper[4755]: I1210 15:25:53.666906 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c51e1051-4c68-4a93-a2df-6eb4deb6f3ca-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c51e1051-4c68-4a93-a2df-6eb4deb6f3ca" (UID: "c51e1051-4c68-4a93-a2df-6eb4deb6f3ca"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 15:25:53 crc kubenswrapper[4755]: I1210 15:25:53.671454 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c51e1051-4c68-4a93-a2df-6eb4deb6f3ca-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c51e1051-4c68-4a93-a2df-6eb4deb6f3ca" (UID: "c51e1051-4c68-4a93-a2df-6eb4deb6f3ca"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:25:53 crc kubenswrapper[4755]: I1210 15:25:53.746792 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-ts8wv" Dec 10 15:25:53 crc kubenswrapper[4755]: I1210 15:25:53.749274 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-ts8wv" Dec 10 15:25:53 crc kubenswrapper[4755]: I1210 15:25:53.767650 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c51e1051-4c68-4a93-a2df-6eb4deb6f3ca-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 10 15:25:53 crc kubenswrapper[4755]: I1210 15:25:53.767745 4755 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c51e1051-4c68-4a93-a2df-6eb4deb6f3ca-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 10 15:25:54 crc kubenswrapper[4755]: I1210 15:25:54.230180 4755 generic.go:334] "Generic (PLEG): container finished" podID="a08c52a2-16c8-48eb-af20-472b5202eb85" containerID="796e2793f85591aa9094297b128a5917ea0af9dac96cb0f1f205db8e167e7423" exitCode=0 Dec 10 15:25:54 crc kubenswrapper[4755]: I1210 15:25:54.230251 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m45gd" event={"ID":"a08c52a2-16c8-48eb-af20-472b5202eb85","Type":"ContainerDied","Data":"796e2793f85591aa9094297b128a5917ea0af9dac96cb0f1f205db8e167e7423"} Dec 10 15:25:54 crc kubenswrapper[4755]: I1210 15:25:54.235993 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"c51e1051-4c68-4a93-a2df-6eb4deb6f3ca","Type":"ContainerDied","Data":"842bb42ce33808477b4e6a78916adfd330914ab8b9612946eeba7b73b56dbc0f"} Dec 10 15:25:54 crc kubenswrapper[4755]: I1210 15:25:54.236026 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="842bb42ce33808477b4e6a78916adfd330914ab8b9612946eeba7b73b56dbc0f" Dec 10 15:25:54 crc kubenswrapper[4755]: I1210 15:25:54.236041 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 10 15:25:54 crc kubenswrapper[4755]: I1210 15:25:54.239747 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"6f65c64c1509bd0027414c51f7ac4890660f7e43bc89bf33ab639bc9250ff3b6"} Dec 10 15:25:54 crc kubenswrapper[4755]: I1210 15:25:54.241860 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"268c3b71a017cbaf3d81091ea82418d41341cf01b4ff00527089c2f19e2cc821"} Dec 10 15:25:54 crc kubenswrapper[4755]: I1210 15:25:54.241931 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 15:25:54 crc kubenswrapper[4755]: I1210 15:25:54.243811 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j2tvl" event={"ID":"3f0e6886-1af8-466d-929d-0e0f6ade8a91","Type":"ContainerStarted","Data":"71863269d0d073a7bc26b15e7a3192a67e09b9cb8893d69320b22912a0b6b102"} Dec 10 15:25:56 crc kubenswrapper[4755]: I1210 15:25:56.519323 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 10 15:25:56 crc kubenswrapper[4755]: E1210 15:25:56.519653 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c51e1051-4c68-4a93-a2df-6eb4deb6f3ca" containerName="pruner" Dec 10 15:25:56 crc kubenswrapper[4755]: I1210 15:25:56.519669 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="c51e1051-4c68-4a93-a2df-6eb4deb6f3ca" containerName="pruner" Dec 10 15:25:56 crc kubenswrapper[4755]: E1210 15:25:56.519679 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d212083-766e-4b88-ba08-8570f05f6c94" containerName="collect-profiles" Dec 10 15:25:56 crc kubenswrapper[4755]: I1210 15:25:56.519689 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d212083-766e-4b88-ba08-8570f05f6c94" containerName="collect-profiles" Dec 10 15:25:56 crc kubenswrapper[4755]: I1210 15:25:56.519814 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d212083-766e-4b88-ba08-8570f05f6c94" containerName="collect-profiles" Dec 10 15:25:56 crc kubenswrapper[4755]: I1210 15:25:56.519841 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="c51e1051-4c68-4a93-a2df-6eb4deb6f3ca" containerName="pruner" Dec 10 15:25:56 crc kubenswrapper[4755]: I1210 15:25:56.520287 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 10 15:25:56 crc kubenswrapper[4755]: I1210 15:25:56.522498 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 10 15:25:56 crc kubenswrapper[4755]: I1210 15:25:56.523283 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 10 15:25:56 crc kubenswrapper[4755]: I1210 15:25:56.568506 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 10 15:25:56 crc kubenswrapper[4755]: I1210 15:25:56.610054 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/414022a7-6963-4246-8e6a-5b738a14e774-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"414022a7-6963-4246-8e6a-5b738a14e774\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 10 15:25:56 crc kubenswrapper[4755]: I1210 15:25:56.610195 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/414022a7-6963-4246-8e6a-5b738a14e774-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"414022a7-6963-4246-8e6a-5b738a14e774\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 10 15:25:56 crc kubenswrapper[4755]: I1210 15:25:56.711954 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/414022a7-6963-4246-8e6a-5b738a14e774-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"414022a7-6963-4246-8e6a-5b738a14e774\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 10 15:25:56 crc kubenswrapper[4755]: I1210 15:25:56.712052 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/414022a7-6963-4246-8e6a-5b738a14e774-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"414022a7-6963-4246-8e6a-5b738a14e774\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 10 15:25:56 crc kubenswrapper[4755]: I1210 15:25:56.712190 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/414022a7-6963-4246-8e6a-5b738a14e774-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"414022a7-6963-4246-8e6a-5b738a14e774\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 10 15:25:56 crc kubenswrapper[4755]: I1210 15:25:56.735456 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/414022a7-6963-4246-8e6a-5b738a14e774-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"414022a7-6963-4246-8e6a-5b738a14e774\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 10 15:25:56 crc kubenswrapper[4755]: I1210 15:25:56.805768 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-wl9hp" Dec 10 15:25:56 crc kubenswrapper[4755]: I1210 15:25:56.842548 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 10 15:25:57 crc kubenswrapper[4755]: I1210 15:25:57.559953 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 10 15:25:57 crc kubenswrapper[4755]: W1210 15:25:57.572266 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod414022a7_6963_4246_8e6a_5b738a14e774.slice/crio-cd586869107fb8ff715b51960c1f0bdb4fd261f75f90b7c427d8aef61bce4fb4 WatchSource:0}: Error finding container cd586869107fb8ff715b51960c1f0bdb4fd261f75f90b7c427d8aef61bce4fb4: Status 404 returned error can't find the container with id cd586869107fb8ff715b51960c1f0bdb4fd261f75f90b7c427d8aef61bce4fb4 Dec 10 15:25:58 crc kubenswrapper[4755]: I1210 15:25:58.269639 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j2tvl" event={"ID":"3f0e6886-1af8-466d-929d-0e0f6ade8a91","Type":"ContainerStarted","Data":"0e4d2369e7aea64cdb6ff9baa5f5320ebecd5fa7c3f442f345dd031aafd38353"} Dec 10 15:25:58 crc kubenswrapper[4755]: I1210 15:25:58.270845 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"414022a7-6963-4246-8e6a-5b738a14e774","Type":"ContainerStarted","Data":"cd586869107fb8ff715b51960c1f0bdb4fd261f75f90b7c427d8aef61bce4fb4"} Dec 10 15:25:59 crc kubenswrapper[4755]: I1210 15:25:59.277343 4755 generic.go:334] "Generic (PLEG): container finished" podID="3f0e6886-1af8-466d-929d-0e0f6ade8a91" containerID="0e4d2369e7aea64cdb6ff9baa5f5320ebecd5fa7c3f442f345dd031aafd38353" exitCode=0 Dec 10 15:25:59 crc kubenswrapper[4755]: I1210 15:25:59.277382 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j2tvl" event={"ID":"3f0e6886-1af8-466d-929d-0e0f6ade8a91","Type":"ContainerDied","Data":"0e4d2369e7aea64cdb6ff9baa5f5320ebecd5fa7c3f442f345dd031aafd38353"} Dec 10 15:26:00 crc kubenswrapper[4755]: I1210 15:26:00.285171 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"414022a7-6963-4246-8e6a-5b738a14e774","Type":"ContainerStarted","Data":"8a962a94e71c6ea594ec68b91a7c88c3b9d7d27fab2140dfcda5c4052d34e80c"} Dec 10 15:26:00 crc kubenswrapper[4755]: I1210 15:26:00.307637 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=4.30762081 podStartE2EDuration="4.30762081s" podCreationTimestamp="2025-12-10 15:25:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:26:00.303793951 +0000 UTC m=+156.904677583" watchObservedRunningTime="2025-12-10 15:26:00.30762081 +0000 UTC m=+156.908504442" Dec 10 15:26:01 crc kubenswrapper[4755]: I1210 15:26:01.009997 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-gbvgh" Dec 10 15:26:01 crc kubenswrapper[4755]: I1210 15:26:01.269122 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-n6qb5" Dec 10 15:26:01 crc kubenswrapper[4755]: I1210 15:26:01.273986 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-n6qb5" Dec 10 15:26:01 crc kubenswrapper[4755]: I1210 15:26:01.293984 4755 generic.go:334] "Generic (PLEG): container finished" 
podID="414022a7-6963-4246-8e6a-5b738a14e774" containerID="8a962a94e71c6ea594ec68b91a7c88c3b9d7d27fab2140dfcda5c4052d34e80c" exitCode=0 Dec 10 15:26:01 crc kubenswrapper[4755]: I1210 15:26:01.294362 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"414022a7-6963-4246-8e6a-5b738a14e774","Type":"ContainerDied","Data":"8a962a94e71c6ea594ec68b91a7c88c3b9d7d27fab2140dfcda5c4052d34e80c"} Dec 10 15:26:05 crc kubenswrapper[4755]: I1210 15:26:05.384821 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 10 15:26:05 crc kubenswrapper[4755]: I1210 15:26:05.576780 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/414022a7-6963-4246-8e6a-5b738a14e774-kubelet-dir\") pod \"414022a7-6963-4246-8e6a-5b738a14e774\" (UID: \"414022a7-6963-4246-8e6a-5b738a14e774\") " Dec 10 15:26:05 crc kubenswrapper[4755]: I1210 15:26:05.576896 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/414022a7-6963-4246-8e6a-5b738a14e774-kube-api-access\") pod \"414022a7-6963-4246-8e6a-5b738a14e774\" (UID: \"414022a7-6963-4246-8e6a-5b738a14e774\") " Dec 10 15:26:05 crc kubenswrapper[4755]: I1210 15:26:05.576934 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/414022a7-6963-4246-8e6a-5b738a14e774-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "414022a7-6963-4246-8e6a-5b738a14e774" (UID: "414022a7-6963-4246-8e6a-5b738a14e774"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 15:26:05 crc kubenswrapper[4755]: I1210 15:26:05.577156 4755 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/414022a7-6963-4246-8e6a-5b738a14e774-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 10 15:26:05 crc kubenswrapper[4755]: I1210 15:26:05.583738 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/414022a7-6963-4246-8e6a-5b738a14e774-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "414022a7-6963-4246-8e6a-5b738a14e774" (UID: "414022a7-6963-4246-8e6a-5b738a14e774"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:26:05 crc kubenswrapper[4755]: I1210 15:26:05.677942 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/414022a7-6963-4246-8e6a-5b738a14e774-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 10 15:26:06 crc kubenswrapper[4755]: I1210 15:26:06.082176 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/17673130-8212-4f8f-8859-92774f0ee202-metrics-certs\") pod \"network-metrics-daemon-q5ctz\" (UID: \"17673130-8212-4f8f-8859-92774f0ee202\") " pod="openshift-multus/network-metrics-daemon-q5ctz" Dec 10 15:26:06 crc kubenswrapper[4755]: I1210 15:26:06.090407 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/17673130-8212-4f8f-8859-92774f0ee202-metrics-certs\") pod \"network-metrics-daemon-q5ctz\" (UID: \"17673130-8212-4f8f-8859-92774f0ee202\") " pod="openshift-multus/network-metrics-daemon-q5ctz" Dec 10 15:26:06 crc kubenswrapper[4755]: I1210 15:26:06.098406 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5ctz" Dec 10 15:26:06 crc kubenswrapper[4755]: I1210 15:26:06.322664 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"414022a7-6963-4246-8e6a-5b738a14e774","Type":"ContainerDied","Data":"cd586869107fb8ff715b51960c1f0bdb4fd261f75f90b7c427d8aef61bce4fb4"} Dec 10 15:26:06 crc kubenswrapper[4755]: I1210 15:26:06.322703 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd586869107fb8ff715b51960c1f0bdb4fd261f75f90b7c427d8aef61bce4fb4" Dec 10 15:26:06 crc kubenswrapper[4755]: I1210 15:26:06.322733 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 10 15:26:09 crc kubenswrapper[4755]: I1210 15:26:09.588847 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-mqq47" Dec 10 15:26:10 crc kubenswrapper[4755]: I1210 15:26:10.359267 4755 patch_prober.go:28] interesting pod/machine-config-daemon-ggt8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:26:10 crc kubenswrapper[4755]: I1210 15:26:10.359329 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:26:12 crc kubenswrapper[4755]: I1210 15:26:12.784635 4755 patch_prober.go:28] interesting pod/router-default-5444994796-ts8wv container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 10 15:26:12 crc kubenswrapper[4755]: I1210 15:26:12.784930 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-ts8wv" podUID="dd9b14b8-6d7c-4eeb-9748-a2e99daa4293" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 10 15:26:21 crc kubenswrapper[4755]: I1210 15:26:21.653783 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dhh5x" Dec 10 15:26:28 crc kubenswrapper[4755]: I1210 15:26:28.317906 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 10 15:26:28 crc kubenswrapper[4755]: E1210 15:26:28.319631 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="414022a7-6963-4246-8e6a-5b738a14e774" containerName="pruner" Dec 10 15:26:28 crc kubenswrapper[4755]: I1210 15:26:28.319676 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="414022a7-6963-4246-8e6a-5b738a14e774" containerName="pruner" Dec 10 15:26:28 crc kubenswrapper[4755]: I1210 15:26:28.319877 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="414022a7-6963-4246-8e6a-5b738a14e774" containerName="pruner" Dec 10 15:26:28 crc kubenswrapper[4755]: I1210 15:26:28.320616 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 10 15:26:28 crc kubenswrapper[4755]: I1210 15:26:28.325390 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 10 15:26:28 crc kubenswrapper[4755]: I1210 15:26:28.339212 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 10 15:26:28 crc kubenswrapper[4755]: I1210 15:26:28.339872 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 10 15:26:28 crc kubenswrapper[4755]: I1210 15:26:28.480063 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aca50107-d329-40c9-895f-a539e3f6afe3-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"aca50107-d329-40c9-895f-a539e3f6afe3\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 10 15:26:28 crc kubenswrapper[4755]: I1210 15:26:28.480114 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aca50107-d329-40c9-895f-a539e3f6afe3-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"aca50107-d329-40c9-895f-a539e3f6afe3\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 10 15:26:28 crc kubenswrapper[4755]: I1210 15:26:28.581279 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aca50107-d329-40c9-895f-a539e3f6afe3-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"aca50107-d329-40c9-895f-a539e3f6afe3\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 10 15:26:28 crc kubenswrapper[4755]: I1210 15:26:28.581328 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aca50107-d329-40c9-895f-a539e3f6afe3-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"aca50107-d329-40c9-895f-a539e3f6afe3\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 10 15:26:28 crc kubenswrapper[4755]: I1210 15:26:28.581381 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aca50107-d329-40c9-895f-a539e3f6afe3-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"aca50107-d329-40c9-895f-a539e3f6afe3\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 10 15:26:28 crc kubenswrapper[4755]: I1210 15:26:28.600494 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aca50107-d329-40c9-895f-a539e3f6afe3-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"aca50107-d329-40c9-895f-a539e3f6afe3\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 10 15:26:28 crc kubenswrapper[4755]: I1210 15:26:28.663926 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 10 15:26:30 crc kubenswrapper[4755]: E1210 15:26:30.282115 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 10 15:26:30 crc kubenswrapper[4755]: E1210 15:26:30.282299 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7bbmb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-v7hds_openshift-marketplace(202dcce7-fa8a-4991-bd7a-661eab1f3274): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 10 15:26:30 crc kubenswrapper[4755]: E1210 15:26:30.283543 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-v7hds" podUID="202dcce7-fa8a-4991-bd7a-661eab1f3274" Dec 10 15:26:30 crc kubenswrapper[4755]: E1210 15:26:30.986846 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 10 15:26:30 crc kubenswrapper[4755]: E1210 15:26:30.987010 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wczkt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-9slgn_openshift-marketplace(1d74a8b0-992e-46b5-9364-cc82c84ac2d8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 10 15:26:30 crc kubenswrapper[4755]: E1210 15:26:30.988162 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-9slgn" podUID="1d74a8b0-992e-46b5-9364-cc82c84ac2d8" Dec 10 15:26:31 crc kubenswrapper[4755]: I1210 15:26:31.985982 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 15:26:32 crc kubenswrapper[4755]: E1210 15:26:32.704803 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-v7hds" podUID="202dcce7-fa8a-4991-bd7a-661eab1f3274" Dec 10 15:26:32 crc kubenswrapper[4755]: E1210 15:26:32.705112 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9slgn" podUID="1d74a8b0-992e-46b5-9364-cc82c84ac2d8" Dec 10 15:26:32 crc kubenswrapper[4755]: E1210 15:26:32.813322 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 10 15:26:32 crc kubenswrapper[4755]: E1210 15:26:32.813522 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs 
--catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8kxhw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-d89xt_openshift-marketplace(8f9f6949-50c9-4d7e-b75f-b990a642d3a7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 10 15:26:32 crc kubenswrapper[4755]: E1210 15:26:32.814707 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-d89xt" podUID="8f9f6949-50c9-4d7e-b75f-b990a642d3a7" Dec 10 15:26:32 crc kubenswrapper[4755]: I1210 15:26:32.926709 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 10 15:26:32 crc kubenswrapper[4755]: I1210 15:26:32.931035 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 10 15:26:32 crc kubenswrapper[4755]: I1210 15:26:32.948516 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6299042d-c9f7-4651-a239-5b75017b83cb-var-lock\") pod \"installer-9-crc\" (UID: \"6299042d-c9f7-4651-a239-5b75017b83cb\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 10 15:26:32 crc kubenswrapper[4755]: I1210 15:26:32.948563 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6299042d-c9f7-4651-a239-5b75017b83cb-kube-api-access\") pod \"installer-9-crc\" (UID: \"6299042d-c9f7-4651-a239-5b75017b83cb\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 10 15:26:32 crc kubenswrapper[4755]: I1210 15:26:32.948621 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6299042d-c9f7-4651-a239-5b75017b83cb-kubelet-dir\") pod \"installer-9-crc\" (UID: \"6299042d-c9f7-4651-a239-5b75017b83cb\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 10 15:26:32 crc kubenswrapper[4755]: I1210 15:26:32.951402 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 10 15:26:33 crc kubenswrapper[4755]: I1210 15:26:33.057526 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6299042d-c9f7-4651-a239-5b75017b83cb-kubelet-dir\") pod \"installer-9-crc\" (UID: \"6299042d-c9f7-4651-a239-5b75017b83cb\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 10 15:26:33 crc kubenswrapper[4755]: I1210 15:26:33.057619 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6299042d-c9f7-4651-a239-5b75017b83cb-var-lock\") pod \"installer-9-crc\" (UID: \"6299042d-c9f7-4651-a239-5b75017b83cb\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 10 15:26:33 crc kubenswrapper[4755]: I1210 15:26:33.057643 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6299042d-c9f7-4651-a239-5b75017b83cb-kube-api-access\") pod \"installer-9-crc\" (UID: \"6299042d-c9f7-4651-a239-5b75017b83cb\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 10 15:26:33 crc kubenswrapper[4755]: I1210 15:26:33.057748 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6299042d-c9f7-4651-a239-5b75017b83cb-var-lock\") pod \"installer-9-crc\" (UID: \"6299042d-c9f7-4651-a239-5b75017b83cb\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 10 15:26:33 crc kubenswrapper[4755]: I1210 15:26:33.057908 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6299042d-c9f7-4651-a239-5b75017b83cb-kubelet-dir\") pod \"installer-9-crc\" (UID: \"6299042d-c9f7-4651-a239-5b75017b83cb\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 10 15:26:33 crc kubenswrapper[4755]: I1210 15:26:33.082134 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6299042d-c9f7-4651-a239-5b75017b83cb-kube-api-access\") pod \"installer-9-crc\" (UID: 
\"6299042d-c9f7-4651-a239-5b75017b83cb\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 10 15:26:33 crc kubenswrapper[4755]: I1210 15:26:33.256927 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 10 15:26:34 crc kubenswrapper[4755]: E1210 15:26:34.308016 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-d89xt" podUID="8f9f6949-50c9-4d7e-b75f-b990a642d3a7" Dec 10 15:26:35 crc kubenswrapper[4755]: E1210 15:26:35.015968 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 10 15:26:35 crc kubenswrapper[4755]: E1210 15:26:35.016105 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g8rg8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-qccqv_openshift-marketplace(4ca3fb6e-045b-4025-9d18-eb0a13d9128a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 10 15:26:35 crc kubenswrapper[4755]: E1210 15:26:35.017273 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-qccqv" podUID="4ca3fb6e-045b-4025-9d18-eb0a13d9128a" Dec 10 15:26:37 crc kubenswrapper[4755]: E1210 15:26:37.796667 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" 
image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 10 15:26:37 crc kubenswrapper[4755]: E1210 15:26:37.797214 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nzhqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-ck7mx_openshift-marketplace(609745aa-bdb9-440a-b029-fcd706ed320e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 10 15:26:37 crc kubenswrapper[4755]: E1210 15:26:37.799942 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-ck7mx" podUID="609745aa-bdb9-440a-b029-fcd706ed320e" Dec 10 15:26:38 crc kubenswrapper[4755]: E1210 15:26:38.069122 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 10 15:26:38 crc kubenswrapper[4755]: E1210 15:26:38.069314 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kbfl9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-wvrkv_openshift-marketplace(84e93abf-4130-4ca2-b263-5a4b29729988): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 10 15:26:38 crc kubenswrapper[4755]: E1210 15:26:38.070627 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-wvrkv" podUID="84e93abf-4130-4ca2-b263-5a4b29729988" Dec 10 15:26:38 crc kubenswrapper[4755]: E1210 15:26:38.131620 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-qccqv" podUID="4ca3fb6e-045b-4025-9d18-eb0a13d9128a" Dec 10 15:26:38 crc kubenswrapper[4755]: E1210 15:26:38.818543 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-wvrkv" podUID="84e93abf-4130-4ca2-b263-5a4b29729988" Dec 10 15:26:38 crc kubenswrapper[4755]: E1210 15:26:38.824278 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-ck7mx" podUID="609745aa-bdb9-440a-b029-fcd706ed320e" Dec 10 15:26:39 crc kubenswrapper[4755]: I1210 15:26:39.240433 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 10 15:26:39 crc kubenswrapper[4755]: I1210 15:26:39.267581 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-q5ctz"] Dec 10 15:26:39 crc kubenswrapper[4755]: I1210 15:26:39.318754 
4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 10 15:26:39 crc kubenswrapper[4755]: I1210 15:26:39.494536 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j2tvl" event={"ID":"3f0e6886-1af8-466d-929d-0e0f6ade8a91","Type":"ContainerStarted","Data":"c64cc4cb495c95ad2a8f508f6d883a43cdb0fd52426a5931c3aa0ab660cdbbfe"} Dec 10 15:26:39 crc kubenswrapper[4755]: I1210 15:26:39.496277 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-q5ctz" event={"ID":"17673130-8212-4f8f-8859-92774f0ee202","Type":"ContainerStarted","Data":"7c220b5697a9f82f80d1be72312c2a2cfd333cb79b16330f37538fdefc6bb6e0"} Dec 10 15:26:39 crc kubenswrapper[4755]: I1210 15:26:39.497681 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"6299042d-c9f7-4651-a239-5b75017b83cb","Type":"ContainerStarted","Data":"cf7148ab81d3fdb535c5eae766a48fdd27f676157e127f950db030ff5050ca88"} Dec 10 15:26:39 crc kubenswrapper[4755]: I1210 15:26:39.504736 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m45gd" event={"ID":"a08c52a2-16c8-48eb-af20-472b5202eb85","Type":"ContainerStarted","Data":"2f59242985087d103f024953a7ff981164ac271230042a3bf31727b85bc17944"} Dec 10 15:26:39 crc kubenswrapper[4755]: I1210 15:26:39.508582 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"aca50107-d329-40c9-895f-a539e3f6afe3","Type":"ContainerStarted","Data":"1ea41a2bb38ace38b0d8cbf273690e3b543318b6f9bca793df8edc304fe22e39"} Dec 10 15:26:40 crc kubenswrapper[4755]: I1210 15:26:40.358835 4755 patch_prober.go:28] interesting pod/machine-config-daemon-ggt8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:26:40 crc kubenswrapper[4755]: I1210 15:26:40.359141 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:26:40 crc kubenswrapper[4755]: I1210 15:26:40.533675 4755 generic.go:334] "Generic (PLEG): container finished" podID="a08c52a2-16c8-48eb-af20-472b5202eb85" containerID="2f59242985087d103f024953a7ff981164ac271230042a3bf31727b85bc17944" exitCode=0 Dec 10 15:26:40 crc kubenswrapper[4755]: I1210 15:26:40.533754 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m45gd" event={"ID":"a08c52a2-16c8-48eb-af20-472b5202eb85","Type":"ContainerDied","Data":"2f59242985087d103f024953a7ff981164ac271230042a3bf31727b85bc17944"} Dec 10 15:26:40 crc kubenswrapper[4755]: I1210 15:26:40.536393 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"aca50107-d329-40c9-895f-a539e3f6afe3","Type":"ContainerStarted","Data":"0fb72279544b988062358c5c833b37d23f366047e8d9bb6692d6c04d7fd3f6d9"} Dec 10 15:26:40 crc kubenswrapper[4755]: I1210 15:26:40.543156 4755 generic.go:334] "Generic (PLEG): container finished" podID="3f0e6886-1af8-466d-929d-0e0f6ade8a91" 
containerID="c64cc4cb495c95ad2a8f508f6d883a43cdb0fd52426a5931c3aa0ab660cdbbfe" exitCode=0 Dec 10 15:26:40 crc kubenswrapper[4755]: I1210 15:26:40.543202 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j2tvl" event={"ID":"3f0e6886-1af8-466d-929d-0e0f6ade8a91","Type":"ContainerDied","Data":"c64cc4cb495c95ad2a8f508f6d883a43cdb0fd52426a5931c3aa0ab660cdbbfe"} Dec 10 15:26:40 crc kubenswrapper[4755]: I1210 15:26:40.546816 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-q5ctz" event={"ID":"17673130-8212-4f8f-8859-92774f0ee202","Type":"ContainerStarted","Data":"cc8bcb73e5d29e75a958f2fde4c6b234a21c770b9f185f5ca307d33dfd97da6b"} Dec 10 15:26:40 crc kubenswrapper[4755]: I1210 15:26:40.568440 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"6299042d-c9f7-4651-a239-5b75017b83cb","Type":"ContainerStarted","Data":"af1e30fd16c8b4352b5df55f5369d576846ddd951f1f37bcab70ae26e41f8135"} Dec 10 15:26:40 crc kubenswrapper[4755]: I1210 15:26:40.600102 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=12.60007713 podStartE2EDuration="12.60007713s" podCreationTimestamp="2025-12-10 15:26:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:26:40.594378197 +0000 UTC m=+197.195261839" watchObservedRunningTime="2025-12-10 15:26:40.60007713 +0000 UTC m=+197.200960762" Dec 10 15:26:40 crc kubenswrapper[4755]: I1210 15:26:40.621145 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=8.621124941 podStartE2EDuration="8.621124941s" podCreationTimestamp="2025-12-10 15:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:26:40.6186245 +0000 UTC m=+197.219508132" watchObservedRunningTime="2025-12-10 15:26:40.621124941 +0000 UTC m=+197.222008583" Dec 10 15:26:41 crc kubenswrapper[4755]: I1210 15:26:41.577639 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-q5ctz" event={"ID":"17673130-8212-4f8f-8859-92774f0ee202","Type":"ContainerStarted","Data":"f7f08ad38ea01e9b76b8eb50620858824965da263057c3d79f6c12049498686d"} Dec 10 15:26:42 crc kubenswrapper[4755]: I1210 15:26:42.583700 4755 generic.go:334] "Generic (PLEG): container finished" podID="aca50107-d329-40c9-895f-a539e3f6afe3" containerID="0fb72279544b988062358c5c833b37d23f366047e8d9bb6692d6c04d7fd3f6d9" exitCode=0 Dec 10 15:26:42 crc kubenswrapper[4755]: I1210 15:26:42.583953 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"aca50107-d329-40c9-895f-a539e3f6afe3","Type":"ContainerDied","Data":"0fb72279544b988062358c5c833b37d23f366047e8d9bb6692d6c04d7fd3f6d9"} Dec 10 15:26:42 crc kubenswrapper[4755]: I1210 15:26:42.600927 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-q5ctz" podStartSLOduration=179.60088402 podStartE2EDuration="2m59.60088402s" podCreationTimestamp="2025-12-10 15:23:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:26:42.597596057 
+0000 UTC m=+199.198479689" watchObservedRunningTime="2025-12-10 15:26:42.60088402 +0000 UTC m=+199.201767662" Dec 10 15:26:43 crc kubenswrapper[4755]: I1210 15:26:43.809091 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 10 15:26:43 crc kubenswrapper[4755]: I1210 15:26:43.900118 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aca50107-d329-40c9-895f-a539e3f6afe3-kube-api-access\") pod \"aca50107-d329-40c9-895f-a539e3f6afe3\" (UID: \"aca50107-d329-40c9-895f-a539e3f6afe3\") " Dec 10 15:26:43 crc kubenswrapper[4755]: I1210 15:26:43.900201 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aca50107-d329-40c9-895f-a539e3f6afe3-kubelet-dir\") pod \"aca50107-d329-40c9-895f-a539e3f6afe3\" (UID: \"aca50107-d329-40c9-895f-a539e3f6afe3\") " Dec 10 15:26:43 crc kubenswrapper[4755]: I1210 15:26:43.900360 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aca50107-d329-40c9-895f-a539e3f6afe3-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "aca50107-d329-40c9-895f-a539e3f6afe3" (UID: "aca50107-d329-40c9-895f-a539e3f6afe3"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 15:26:43 crc kubenswrapper[4755]: I1210 15:26:43.900696 4755 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aca50107-d329-40c9-895f-a539e3f6afe3-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 10 15:26:43 crc kubenswrapper[4755]: I1210 15:26:43.921971 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aca50107-d329-40c9-895f-a539e3f6afe3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "aca50107-d329-40c9-895f-a539e3f6afe3" (UID: "aca50107-d329-40c9-895f-a539e3f6afe3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:26:44 crc kubenswrapper[4755]: I1210 15:26:44.001155 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aca50107-d329-40c9-895f-a539e3f6afe3-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 10 15:26:44 crc kubenswrapper[4755]: I1210 15:26:44.598221 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"aca50107-d329-40c9-895f-a539e3f6afe3","Type":"ContainerDied","Data":"1ea41a2bb38ace38b0d8cbf273690e3b543318b6f9bca793df8edc304fe22e39"} Dec 10 15:26:44 crc kubenswrapper[4755]: I1210 15:26:44.598263 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ea41a2bb38ace38b0d8cbf273690e3b543318b6f9bca793df8edc304fe22e39" Dec 10 15:26:44 crc kubenswrapper[4755]: I1210 15:26:44.598732 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 10 15:26:52 crc kubenswrapper[4755]: I1210 15:26:52.648612 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j2tvl" event={"ID":"3f0e6886-1af8-466d-929d-0e0f6ade8a91","Type":"ContainerStarted","Data":"e6744df224429fb037e220c86797f05c74124c03a407f63cae917c1161defd27"} Dec 10 15:26:52 crc kubenswrapper[4755]: I1210 15:26:52.672040 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-j2tvl" podStartSLOduration=14.773093284 podStartE2EDuration="1m0.67201304s" podCreationTimestamp="2025-12-10 15:25:52 +0000 UTC" firstStartedPulling="2025-12-10 15:26:05.325975227 +0000 UTC m=+161.926858869" lastFinishedPulling="2025-12-10 15:26:51.224894993 +0000 UTC m=+207.825778625" observedRunningTime="2025-12-10 15:26:52.665567166 +0000 UTC m=+209.266450818" watchObservedRunningTime="2025-12-10 15:26:52.67201304 +0000 UTC m=+209.272896672" Dec 10 15:26:52 crc kubenswrapper[4755]: I1210 15:26:52.954449 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-j2tvl" Dec 10 15:26:52 crc kubenswrapper[4755]: I1210 15:26:52.954555 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-j2tvl" Dec 10 15:26:54 crc kubenswrapper[4755]: I1210 15:26:54.026605 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-j2tvl" podUID="3f0e6886-1af8-466d-929d-0e0f6ade8a91" containerName="registry-server" probeResult="failure" output=< Dec 10 15:26:54 crc kubenswrapper[4755]: timeout: failed to connect service ":50051" within 1s Dec 10 15:26:54 crc kubenswrapper[4755]: > Dec 10 15:26:59 crc kubenswrapper[4755]: I1210 15:26:59.692633 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m45gd" event={"ID":"a08c52a2-16c8-48eb-af20-472b5202eb85","Type":"ContainerStarted","Data":"d755429e1151584ac1e91b47a807eb9e763166c3bfa62ff828404f0bfb613a58"} Dec 10 15:26:59 crc kubenswrapper[4755]: I1210 15:26:59.695450 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wvrkv" event={"ID":"84e93abf-4130-4ca2-b263-5a4b29729988","Type":"ContainerStarted","Data":"3679e8337fb5fc7c55e8040662b356f2841a98d51ddef403fbc0f045cc7ddead"} Dec 10 15:26:59 crc kubenswrapper[4755]: I1210 15:26:59.697363 4755 generic.go:334] "Generic (PLEG): container finished" podID="4ca3fb6e-045b-4025-9d18-eb0a13d9128a" containerID="e4a555d655ca374a04c3d9e5b7119292ffcb9a9c975e716028ab4bf46e04ee5d" exitCode=0 Dec 10 15:26:59 crc kubenswrapper[4755]: I1210 15:26:59.697427 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qccqv" event={"ID":"4ca3fb6e-045b-4025-9d18-eb0a13d9128a","Type":"ContainerDied","Data":"e4a555d655ca374a04c3d9e5b7119292ffcb9a9c975e716028ab4bf46e04ee5d"} Dec 10 15:26:59 crc kubenswrapper[4755]: I1210 15:26:59.700312 4755 generic.go:334] "Generic (PLEG): container finished" podID="609745aa-bdb9-440a-b029-fcd706ed320e" containerID="be0a1a5652b0266445b8ea6771ddb5b1a8f75a0d64880a6f083551cdfe041615" exitCode=0 Dec 10 15:26:59 crc kubenswrapper[4755]: I1210 15:26:59.700380 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ck7mx" 
event={"ID":"609745aa-bdb9-440a-b029-fcd706ed320e","Type":"ContainerDied","Data":"be0a1a5652b0266445b8ea6771ddb5b1a8f75a0d64880a6f083551cdfe041615"} Dec 10 15:26:59 crc kubenswrapper[4755]: I1210 15:26:59.703031 4755 generic.go:334] "Generic (PLEG): container finished" podID="202dcce7-fa8a-4991-bd7a-661eab1f3274" containerID="010eb66a79f0de81c0e4e3a0ca0d5e96137a423e2a4d76853616d0fbe9c653ff" exitCode=0 Dec 10 15:26:59 crc kubenswrapper[4755]: I1210 15:26:59.703085 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v7hds" event={"ID":"202dcce7-fa8a-4991-bd7a-661eab1f3274","Type":"ContainerDied","Data":"010eb66a79f0de81c0e4e3a0ca0d5e96137a423e2a4d76853616d0fbe9c653ff"} Dec 10 15:26:59 crc kubenswrapper[4755]: I1210 15:26:59.706659 4755 generic.go:334] "Generic (PLEG): container finished" podID="1d74a8b0-992e-46b5-9364-cc82c84ac2d8" containerID="d45e287edba741397b5c31bf51737e748d3ef6064328b228e267a903a8f52d02" exitCode=0 Dec 10 15:26:59 crc kubenswrapper[4755]: I1210 15:26:59.706740 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9slgn" event={"ID":"1d74a8b0-992e-46b5-9364-cc82c84ac2d8","Type":"ContainerDied","Data":"d45e287edba741397b5c31bf51737e748d3ef6064328b228e267a903a8f52d02"} Dec 10 15:26:59 crc kubenswrapper[4755]: I1210 15:26:59.712670 4755 generic.go:334] "Generic (PLEG): container finished" podID="8f9f6949-50c9-4d7e-b75f-b990a642d3a7" containerID="5c18e1a6ffeff96fb5e01ddbd35c47a5da5a8fcaf4719544394106328bce1cfd" exitCode=0 Dec 10 15:26:59 crc kubenswrapper[4755]: I1210 15:26:59.712740 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d89xt" event={"ID":"8f9f6949-50c9-4d7e-b75f-b990a642d3a7","Type":"ContainerDied","Data":"5c18e1a6ffeff96fb5e01ddbd35c47a5da5a8fcaf4719544394106328bce1cfd"} Dec 10 15:26:59 crc kubenswrapper[4755]: I1210 15:26:59.713071 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-m45gd" podStartSLOduration=4.211513026 podStartE2EDuration="1m8.713053664s" podCreationTimestamp="2025-12-10 15:25:51 +0000 UTC" firstStartedPulling="2025-12-10 15:25:54.235160731 +0000 UTC m=+150.836044363" lastFinishedPulling="2025-12-10 15:26:58.736701369 +0000 UTC m=+215.337585001" observedRunningTime="2025-12-10 15:26:59.711651513 +0000 UTC m=+216.312535175" watchObservedRunningTime="2025-12-10 15:26:59.713053664 +0000 UTC m=+216.313937296" Dec 10 15:27:00 crc kubenswrapper[4755]: I1210 15:27:00.727585 4755 generic.go:334] "Generic (PLEG): container finished" podID="84e93abf-4130-4ca2-b263-5a4b29729988" containerID="3679e8337fb5fc7c55e8040662b356f2841a98d51ddef403fbc0f045cc7ddead" exitCode=0 Dec 10 15:27:00 crc kubenswrapper[4755]: I1210 15:27:00.727942 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wvrkv" event={"ID":"84e93abf-4130-4ca2-b263-5a4b29729988","Type":"ContainerDied","Data":"3679e8337fb5fc7c55e8040662b356f2841a98d51ddef403fbc0f045cc7ddead"} Dec 10 15:27:00 crc kubenswrapper[4755]: I1210 15:27:00.732174 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qccqv" event={"ID":"4ca3fb6e-045b-4025-9d18-eb0a13d9128a","Type":"ContainerStarted","Data":"083e0327882a15b636a692c5e5f4acbdfbe6fd26d3c1231f1507c34e96b81ccd"} Dec 10 15:27:00 crc kubenswrapper[4755]: I1210 15:27:00.734843 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-v7hds" event={"ID":"202dcce7-fa8a-4991-bd7a-661eab1f3274","Type":"ContainerStarted","Data":"abd8b54041aab17d948f2a3f005f066036edfb27361f4b1131439f3b98cb6247"} Dec 10 15:27:00 crc kubenswrapper[4755]: I1210 15:27:00.737113 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9slgn" event={"ID":"1d74a8b0-992e-46b5-9364-cc82c84ac2d8","Type":"ContainerStarted","Data":"1aceb1896102ed5a5b588a38ec5e4b03f41528f63e4f3394facaaa1092800d70"} Dec 10 15:27:00 crc kubenswrapper[4755]: I1210 15:27:00.739619 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d89xt" event={"ID":"8f9f6949-50c9-4d7e-b75f-b990a642d3a7","Type":"ContainerStarted","Data":"746e14ccb915e96b51025fe2de451e92c08e8070b20e9848a49ea027ec8945f4"} Dec 10 15:27:00 crc kubenswrapper[4755]: I1210 15:27:00.786270 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9slgn" podStartSLOduration=2.660263519 podStartE2EDuration="1m11.786248791s" podCreationTimestamp="2025-12-10 15:25:49 +0000 UTC" firstStartedPulling="2025-12-10 15:25:51.050273206 +0000 UTC m=+147.651156838" lastFinishedPulling="2025-12-10 15:27:00.176258488 +0000 UTC m=+216.777142110" observedRunningTime="2025-12-10 15:27:00.780078165 +0000 UTC m=+217.380961797" watchObservedRunningTime="2025-12-10 15:27:00.786248791 +0000 UTC m=+217.387132423" Dec 10 15:27:00 crc kubenswrapper[4755]: I1210 15:27:00.824705 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qccqv" podStartSLOduration=2.827458805 podStartE2EDuration="1m10.824682738s" podCreationTimestamp="2025-12-10 15:25:50 +0000 UTC" firstStartedPulling="2025-12-10 15:25:52.109669666 +0000 UTC m=+148.710553308" lastFinishedPulling="2025-12-10 15:27:00.106893609 +0000 UTC m=+216.707777241" observedRunningTime="2025-12-10 15:27:00.821221198 +0000 UTC m=+217.422104840" watchObservedRunningTime="2025-12-10 15:27:00.824682738 +0000 UTC m=+217.425566370" Dec 10 15:27:00 crc kubenswrapper[4755]: I1210 15:27:00.847288 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-v7hds" podStartSLOduration=2.39358073 podStartE2EDuration="1m12.847265512s" podCreationTimestamp="2025-12-10 15:25:48 +0000 UTC" firstStartedPulling="2025-12-10 15:25:50.019379284 +0000 UTC m=+146.620262916" lastFinishedPulling="2025-12-10 15:27:00.473064066 +0000 UTC m=+217.073947698" observedRunningTime="2025-12-10 15:27:00.846702816 +0000 UTC m=+217.447586448" watchObservedRunningTime="2025-12-10 15:27:00.847265512 +0000 UTC m=+217.448149144" Dec 10 15:27:01 crc kubenswrapper[4755]: I1210 15:27:01.225543 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qccqv" Dec 10 15:27:01 crc kubenswrapper[4755]: I1210 15:27:01.225616 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qccqv" Dec 10 15:27:01 crc kubenswrapper[4755]: I1210 15:27:01.746542 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wvrkv" event={"ID":"84e93abf-4130-4ca2-b263-5a4b29729988","Type":"ContainerStarted","Data":"8d3687ddcd7838061d167fe8a5b887b5c7bdfdab94447a28cbf4151500df21ea"} Dec 10 15:27:01 crc kubenswrapper[4755]: I1210 15:27:01.749279 4755 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/certified-operators-ck7mx" event={"ID":"609745aa-bdb9-440a-b029-fcd706ed320e","Type":"ContainerStarted","Data":"28115a76a0e836ceaebb60c0c0cc98faefbb1ad872ffbffa327ce216f51930be"} Dec 10 15:27:01 crc kubenswrapper[4755]: I1210 15:27:01.762099 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-d89xt" podStartSLOduration=3.481015569 podStartE2EDuration="1m13.76208135s" podCreationTimestamp="2025-12-10 15:25:48 +0000 UTC" firstStartedPulling="2025-12-10 15:25:50.036141779 +0000 UTC m=+146.637025411" lastFinishedPulling="2025-12-10 15:27:00.31720756 +0000 UTC m=+216.918091192" observedRunningTime="2025-12-10 15:27:00.869797655 +0000 UTC m=+217.470681297" watchObservedRunningTime="2025-12-10 15:27:01.76208135 +0000 UTC m=+218.362964982" Dec 10 15:27:01 crc kubenswrapper[4755]: I1210 15:27:01.762743 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wvrkv" podStartSLOduration=2.595473238 podStartE2EDuration="1m10.762737759s" podCreationTimestamp="2025-12-10 15:25:51 +0000 UTC" firstStartedPulling="2025-12-10 15:25:53.162839765 +0000 UTC m=+149.763723397" lastFinishedPulling="2025-12-10 15:27:01.330104286 +0000 UTC m=+217.930987918" observedRunningTime="2025-12-10 15:27:01.760930747 +0000 UTC m=+218.361814379" watchObservedRunningTime="2025-12-10 15:27:01.762737759 +0000 UTC m=+218.363621391" Dec 10 15:27:01 crc kubenswrapper[4755]: I1210 15:27:01.778805 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ck7mx" podStartSLOduration=2.271138814 podStartE2EDuration="1m12.778786316s" podCreationTimestamp="2025-12-10 15:25:49 +0000 UTC" firstStartedPulling="2025-12-10 15:25:51.055238385 +0000 UTC m=+147.656122017" lastFinishedPulling="2025-12-10 15:27:01.562885887 +0000 UTC m=+218.163769519" observedRunningTime="2025-12-10 15:27:01.778482548 +0000 UTC m=+218.379366180" watchObservedRunningTime="2025-12-10 15:27:01.778786316 +0000 UTC m=+218.379669948" Dec 10 15:27:02 crc kubenswrapper[4755]: I1210 15:27:02.251379 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-m45gd" Dec 10 15:27:02 crc kubenswrapper[4755]: I1210 15:27:02.251801 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-m45gd" Dec 10 15:27:02 crc kubenswrapper[4755]: I1210 15:27:02.280636 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-qccqv" podUID="4ca3fb6e-045b-4025-9d18-eb0a13d9128a" containerName="registry-server" probeResult="failure" output=< Dec 10 15:27:02 crc kubenswrapper[4755]: timeout: failed to connect service ":50051" within 1s Dec 10 15:27:02 crc kubenswrapper[4755]: > Dec 10 15:27:02 crc kubenswrapper[4755]: I1210 15:27:02.998635 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-j2tvl" Dec 10 15:27:03 crc kubenswrapper[4755]: I1210 15:27:03.034648 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-j2tvl" Dec 10 15:27:03 crc kubenswrapper[4755]: I1210 15:27:03.297598 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-m45gd" podUID="a08c52a2-16c8-48eb-af20-472b5202eb85" containerName="registry-server" 
probeResult="failure" output=< Dec 10 15:27:03 crc kubenswrapper[4755]: timeout: failed to connect service ":50051" within 1s Dec 10 15:27:03 crc kubenswrapper[4755]: > Dec 10 15:27:06 crc kubenswrapper[4755]: I1210 15:27:06.326875 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j2tvl"] Dec 10 15:27:06 crc kubenswrapper[4755]: I1210 15:27:06.327730 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-j2tvl" podUID="3f0e6886-1af8-466d-929d-0e0f6ade8a91" containerName="registry-server" containerID="cri-o://e6744df224429fb037e220c86797f05c74124c03a407f63cae917c1161defd27" gracePeriod=2 Dec 10 15:27:07 crc kubenswrapper[4755]: I1210 15:27:07.276086 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j2tvl" Dec 10 15:27:07 crc kubenswrapper[4755]: I1210 15:27:07.325809 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pggdw\" (UniqueName: \"kubernetes.io/projected/3f0e6886-1af8-466d-929d-0e0f6ade8a91-kube-api-access-pggdw\") pod \"3f0e6886-1af8-466d-929d-0e0f6ade8a91\" (UID: \"3f0e6886-1af8-466d-929d-0e0f6ade8a91\") " Dec 10 15:27:07 crc kubenswrapper[4755]: I1210 15:27:07.325969 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f0e6886-1af8-466d-929d-0e0f6ade8a91-catalog-content\") pod \"3f0e6886-1af8-466d-929d-0e0f6ade8a91\" (UID: \"3f0e6886-1af8-466d-929d-0e0f6ade8a91\") " Dec 10 15:27:07 crc kubenswrapper[4755]: I1210 15:27:07.326020 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f0e6886-1af8-466d-929d-0e0f6ade8a91-utilities\") pod \"3f0e6886-1af8-466d-929d-0e0f6ade8a91\" (UID: \"3f0e6886-1af8-466d-929d-0e0f6ade8a91\") " Dec 10 15:27:07 crc kubenswrapper[4755]: I1210 15:27:07.327069 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f0e6886-1af8-466d-929d-0e0f6ade8a91-utilities" (OuterVolumeSpecName: "utilities") pod "3f0e6886-1af8-466d-929d-0e0f6ade8a91" (UID: "3f0e6886-1af8-466d-929d-0e0f6ade8a91"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:27:07 crc kubenswrapper[4755]: I1210 15:27:07.331285 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f0e6886-1af8-466d-929d-0e0f6ade8a91-kube-api-access-pggdw" (OuterVolumeSpecName: "kube-api-access-pggdw") pod "3f0e6886-1af8-466d-929d-0e0f6ade8a91" (UID: "3f0e6886-1af8-466d-929d-0e0f6ade8a91"). InnerVolumeSpecName "kube-api-access-pggdw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:27:07 crc kubenswrapper[4755]: I1210 15:27:07.427508 4755 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f0e6886-1af8-466d-929d-0e0f6ade8a91-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 15:27:07 crc kubenswrapper[4755]: I1210 15:27:07.427548 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pggdw\" (UniqueName: \"kubernetes.io/projected/3f0e6886-1af8-466d-929d-0e0f6ade8a91-kube-api-access-pggdw\") on node \"crc\" DevicePath \"\"" Dec 10 15:27:07 crc kubenswrapper[4755]: I1210 15:27:07.435089 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f0e6886-1af8-466d-929d-0e0f6ade8a91-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3f0e6886-1af8-466d-929d-0e0f6ade8a91" (UID: "3f0e6886-1af8-466d-929d-0e0f6ade8a91"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:27:07 crc kubenswrapper[4755]: I1210 15:27:07.529140 4755 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f0e6886-1af8-466d-929d-0e0f6ade8a91-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 15:27:07 crc kubenswrapper[4755]: I1210 15:27:07.780861 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j2tvl" event={"ID":"3f0e6886-1af8-466d-929d-0e0f6ade8a91","Type":"ContainerDied","Data":"e6744df224429fb037e220c86797f05c74124c03a407f63cae917c1161defd27"} Dec 10 15:27:07 crc kubenswrapper[4755]: I1210 15:27:07.780898 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j2tvl" Dec 10 15:27:07 crc kubenswrapper[4755]: I1210 15:27:07.780919 4755 scope.go:117] "RemoveContainer" containerID="e6744df224429fb037e220c86797f05c74124c03a407f63cae917c1161defd27" Dec 10 15:27:07 crc kubenswrapper[4755]: I1210 15:27:07.780784 4755 generic.go:334] "Generic (PLEG): container finished" podID="3f0e6886-1af8-466d-929d-0e0f6ade8a91" containerID="e6744df224429fb037e220c86797f05c74124c03a407f63cae917c1161defd27" exitCode=0 Dec 10 15:27:07 crc kubenswrapper[4755]: I1210 15:27:07.783247 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j2tvl" event={"ID":"3f0e6886-1af8-466d-929d-0e0f6ade8a91","Type":"ContainerDied","Data":"71863269d0d073a7bc26b15e7a3192a67e09b9cb8893d69320b22912a0b6b102"} Dec 10 15:27:07 crc kubenswrapper[4755]: I1210 15:27:07.799502 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j2tvl"] Dec 10 15:27:07 crc kubenswrapper[4755]: I1210 15:27:07.801154 4755 scope.go:117] "RemoveContainer" containerID="c64cc4cb495c95ad2a8f508f6d883a43cdb0fd52426a5931c3aa0ab660cdbbfe" Dec 10 15:27:07 crc kubenswrapper[4755]: I1210 15:27:07.803271 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-j2tvl"] Dec 10 15:27:07 crc kubenswrapper[4755]: I1210 15:27:07.816414 4755 scope.go:117] "RemoveContainer" containerID="0e4d2369e7aea64cdb6ff9baa5f5320ebecd5fa7c3f442f345dd031aafd38353" Dec 10 15:27:07 crc kubenswrapper[4755]: I1210 15:27:07.831900 4755 scope.go:117] "RemoveContainer" containerID="e6744df224429fb037e220c86797f05c74124c03a407f63cae917c1161defd27" Dec 10 15:27:07 crc kubenswrapper[4755]: E1210 15:27:07.832568 4755 log.go:32] "ContainerStatus 
from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6744df224429fb037e220c86797f05c74124c03a407f63cae917c1161defd27\": container with ID starting with e6744df224429fb037e220c86797f05c74124c03a407f63cae917c1161defd27 not found: ID does not exist" containerID="e6744df224429fb037e220c86797f05c74124c03a407f63cae917c1161defd27" Dec 10 15:27:07 crc kubenswrapper[4755]: I1210 15:27:07.832599 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6744df224429fb037e220c86797f05c74124c03a407f63cae917c1161defd27"} err="failed to get container status \"e6744df224429fb037e220c86797f05c74124c03a407f63cae917c1161defd27\": rpc error: code = NotFound desc = could not find container \"e6744df224429fb037e220c86797f05c74124c03a407f63cae917c1161defd27\": container with ID starting with e6744df224429fb037e220c86797f05c74124c03a407f63cae917c1161defd27 not found: ID does not exist" Dec 10 15:27:07 crc kubenswrapper[4755]: I1210 15:27:07.832647 4755 scope.go:117] "RemoveContainer" containerID="c64cc4cb495c95ad2a8f508f6d883a43cdb0fd52426a5931c3aa0ab660cdbbfe" Dec 10 15:27:07 crc kubenswrapper[4755]: E1210 15:27:07.832927 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c64cc4cb495c95ad2a8f508f6d883a43cdb0fd52426a5931c3aa0ab660cdbbfe\": container with ID starting with c64cc4cb495c95ad2a8f508f6d883a43cdb0fd52426a5931c3aa0ab660cdbbfe not found: ID does not exist" containerID="c64cc4cb495c95ad2a8f508f6d883a43cdb0fd52426a5931c3aa0ab660cdbbfe" Dec 10 15:27:07 crc kubenswrapper[4755]: I1210 15:27:07.832963 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c64cc4cb495c95ad2a8f508f6d883a43cdb0fd52426a5931c3aa0ab660cdbbfe"} err="failed to get container status \"c64cc4cb495c95ad2a8f508f6d883a43cdb0fd52426a5931c3aa0ab660cdbbfe\": rpc error: code = NotFound desc = could not find container \"c64cc4cb495c95ad2a8f508f6d883a43cdb0fd52426a5931c3aa0ab660cdbbfe\": container with ID starting with c64cc4cb495c95ad2a8f508f6d883a43cdb0fd52426a5931c3aa0ab660cdbbfe not found: ID does not exist" Dec 10 15:27:07 crc kubenswrapper[4755]: I1210 15:27:07.832981 4755 scope.go:117] "RemoveContainer" containerID="0e4d2369e7aea64cdb6ff9baa5f5320ebecd5fa7c3f442f345dd031aafd38353" Dec 10 15:27:07 crc kubenswrapper[4755]: E1210 15:27:07.833483 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e4d2369e7aea64cdb6ff9baa5f5320ebecd5fa7c3f442f345dd031aafd38353\": container with ID starting with 0e4d2369e7aea64cdb6ff9baa5f5320ebecd5fa7c3f442f345dd031aafd38353 not found: ID does not exist" containerID="0e4d2369e7aea64cdb6ff9baa5f5320ebecd5fa7c3f442f345dd031aafd38353" Dec 10 15:27:07 crc kubenswrapper[4755]: I1210 15:27:07.833522 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e4d2369e7aea64cdb6ff9baa5f5320ebecd5fa7c3f442f345dd031aafd38353"} err="failed to get container status \"0e4d2369e7aea64cdb6ff9baa5f5320ebecd5fa7c3f442f345dd031aafd38353\": rpc error: code = NotFound desc = could not find container \"0e4d2369e7aea64cdb6ff9baa5f5320ebecd5fa7c3f442f345dd031aafd38353\": container with ID starting with 0e4d2369e7aea64cdb6ff9baa5f5320ebecd5fa7c3f442f345dd031aafd38353 not found: ID does not exist" Dec 10 15:27:09 crc kubenswrapper[4755]: I1210 15:27:09.003203 4755 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-v7hds" Dec 10 15:27:09 crc kubenswrapper[4755]: I1210 15:27:09.004316 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-v7hds" Dec 10 15:27:09 crc kubenswrapper[4755]: I1210 15:27:09.054086 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-v7hds" Dec 10 15:27:09 crc kubenswrapper[4755]: I1210 15:27:09.205276 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-d89xt" Dec 10 15:27:09 crc kubenswrapper[4755]: I1210 15:27:09.205329 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-d89xt" Dec 10 15:27:09 crc kubenswrapper[4755]: I1210 15:27:09.247404 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-d89xt" Dec 10 15:27:09 crc kubenswrapper[4755]: I1210 15:27:09.674823 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ck7mx" Dec 10 15:27:09 crc kubenswrapper[4755]: I1210 15:27:09.674868 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ck7mx" Dec 10 15:27:09 crc kubenswrapper[4755]: I1210 15:27:09.701639 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9slgn" Dec 10 15:27:09 crc kubenswrapper[4755]: I1210 15:27:09.701701 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9slgn" Dec 10 15:27:09 crc kubenswrapper[4755]: I1210 15:27:09.713086 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ck7mx" Dec 10 15:27:09 crc kubenswrapper[4755]: I1210 15:27:09.739521 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9slgn" Dec 10 15:27:09 crc kubenswrapper[4755]: I1210 15:27:09.766834 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f0e6886-1af8-466d-929d-0e0f6ade8a91" path="/var/lib/kubelet/pods/3f0e6886-1af8-466d-929d-0e0f6ade8a91/volumes" Dec 10 15:27:09 crc kubenswrapper[4755]: I1210 15:27:09.836849 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-d89xt" Dec 10 15:27:09 crc kubenswrapper[4755]: I1210 15:27:09.837001 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9slgn" Dec 10 15:27:09 crc kubenswrapper[4755]: I1210 15:27:09.837159 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-v7hds" Dec 10 15:27:09 crc kubenswrapper[4755]: I1210 15:27:09.850859 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ck7mx" Dec 10 15:27:10 crc kubenswrapper[4755]: I1210 15:27:10.359726 4755 patch_prober.go:28] interesting pod/machine-config-daemon-ggt8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 
15:27:10 crc kubenswrapper[4755]: I1210 15:27:10.359798 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:27:10 crc kubenswrapper[4755]: I1210 15:27:10.359838 4755 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" Dec 10 15:27:10 crc kubenswrapper[4755]: I1210 15:27:10.360407 4755 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a40c4bdaa23a60a665b8f565720d79b68cac62d40246be94fc6cd314b1bb3656"} pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 10 15:27:10 crc kubenswrapper[4755]: I1210 15:27:10.360513 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" containerName="machine-config-daemon" containerID="cri-o://a40c4bdaa23a60a665b8f565720d79b68cac62d40246be94fc6cd314b1bb3656" gracePeriod=600 Dec 10 15:27:10 crc kubenswrapper[4755]: I1210 15:27:10.924379 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9slgn"] Dec 10 15:27:11 crc kubenswrapper[4755]: I1210 15:27:11.275662 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qccqv" Dec 10 15:27:11 crc kubenswrapper[4755]: I1210 15:27:11.313911 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qccqv" Dec 10 15:27:11 crc kubenswrapper[4755]: I1210 15:27:11.526927 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ck7mx"] Dec 10 15:27:11 crc kubenswrapper[4755]: I1210 15:27:11.638085 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wvrkv" Dec 10 15:27:11 crc kubenswrapper[4755]: I1210 15:27:11.638162 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wvrkv" Dec 10 15:27:11 crc kubenswrapper[4755]: I1210 15:27:11.679046 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wvrkv" Dec 10 15:27:11 crc kubenswrapper[4755]: I1210 15:27:11.804138 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9slgn" podUID="1d74a8b0-992e-46b5-9364-cc82c84ac2d8" containerName="registry-server" containerID="cri-o://1aceb1896102ed5a5b588a38ec5e4b03f41528f63e4f3394facaaa1092800d70" gracePeriod=2 Dec 10 15:27:11 crc kubenswrapper[4755]: I1210 15:27:11.804887 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ck7mx" podUID="609745aa-bdb9-440a-b029-fcd706ed320e" containerName="registry-server" containerID="cri-o://28115a76a0e836ceaebb60c0c0cc98faefbb1ad872ffbffa327ce216f51930be" gracePeriod=2 Dec 10 15:27:11 crc kubenswrapper[4755]: I1210 15:27:11.878970 4755 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wvrkv" Dec 10 15:27:12 crc kubenswrapper[4755]: I1210 15:27:12.293754 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-m45gd" Dec 10 15:27:12 crc kubenswrapper[4755]: I1210 15:27:12.334652 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-m45gd" Dec 10 15:27:13 crc kubenswrapper[4755]: I1210 15:27:13.923184 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wvrkv"] Dec 10 15:27:13 crc kubenswrapper[4755]: I1210 15:27:13.923722 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wvrkv" podUID="84e93abf-4130-4ca2-b263-5a4b29729988" containerName="registry-server" containerID="cri-o://8d3687ddcd7838061d167fe8a5b887b5c7bdfdab94447a28cbf4151500df21ea" gracePeriod=2 Dec 10 15:27:15 crc kubenswrapper[4755]: I1210 15:27:15.170564 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-f47gb"] Dec 10 15:27:16 crc kubenswrapper[4755]: I1210 15:27:16.419450 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9slgn_1d74a8b0-992e-46b5-9364-cc82c84ac2d8/registry-server/0.log" Dec 10 15:27:16 crc kubenswrapper[4755]: I1210 15:27:16.421606 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9slgn" Dec 10 15:27:16 crc kubenswrapper[4755]: I1210 15:27:16.511844 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wczkt\" (UniqueName: \"kubernetes.io/projected/1d74a8b0-992e-46b5-9364-cc82c84ac2d8-kube-api-access-wczkt\") pod \"1d74a8b0-992e-46b5-9364-cc82c84ac2d8\" (UID: \"1d74a8b0-992e-46b5-9364-cc82c84ac2d8\") " Dec 10 15:27:16 crc kubenswrapper[4755]: I1210 15:27:16.512064 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d74a8b0-992e-46b5-9364-cc82c84ac2d8-catalog-content\") pod \"1d74a8b0-992e-46b5-9364-cc82c84ac2d8\" (UID: \"1d74a8b0-992e-46b5-9364-cc82c84ac2d8\") " Dec 10 15:27:16 crc kubenswrapper[4755]: I1210 15:27:16.512109 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d74a8b0-992e-46b5-9364-cc82c84ac2d8-utilities\") pod \"1d74a8b0-992e-46b5-9364-cc82c84ac2d8\" (UID: \"1d74a8b0-992e-46b5-9364-cc82c84ac2d8\") " Dec 10 15:27:16 crc kubenswrapper[4755]: I1210 15:27:16.513255 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d74a8b0-992e-46b5-9364-cc82c84ac2d8-utilities" (OuterVolumeSpecName: "utilities") pod "1d74a8b0-992e-46b5-9364-cc82c84ac2d8" (UID: "1d74a8b0-992e-46b5-9364-cc82c84ac2d8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:27:16 crc kubenswrapper[4755]: I1210 15:27:16.539087 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d74a8b0-992e-46b5-9364-cc82c84ac2d8-kube-api-access-wczkt" (OuterVolumeSpecName: "kube-api-access-wczkt") pod "1d74a8b0-992e-46b5-9364-cc82c84ac2d8" (UID: "1d74a8b0-992e-46b5-9364-cc82c84ac2d8"). InnerVolumeSpecName "kube-api-access-wczkt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:27:16 crc kubenswrapper[4755]: I1210 15:27:16.574200 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d74a8b0-992e-46b5-9364-cc82c84ac2d8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d74a8b0-992e-46b5-9364-cc82c84ac2d8" (UID: "1d74a8b0-992e-46b5-9364-cc82c84ac2d8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:27:16 crc kubenswrapper[4755]: I1210 15:27:16.613140 4755 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d74a8b0-992e-46b5-9364-cc82c84ac2d8-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 15:27:16 crc kubenswrapper[4755]: I1210 15:27:16.613175 4755 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d74a8b0-992e-46b5-9364-cc82c84ac2d8-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 15:27:16 crc kubenswrapper[4755]: I1210 15:27:16.613188 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wczkt\" (UniqueName: \"kubernetes.io/projected/1d74a8b0-992e-46b5-9364-cc82c84ac2d8-kube-api-access-wczkt\") on node \"crc\" DevicePath \"\"" Dec 10 15:27:16 crc kubenswrapper[4755]: I1210 15:27:16.626335 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wvrkv" Dec 10 15:27:16 crc kubenswrapper[4755]: I1210 15:27:16.659398 4755 generic.go:334] "Generic (PLEG): container finished" podID="b132a8b9-1c99-414d-8773-229bf36b305d" containerID="a40c4bdaa23a60a665b8f565720d79b68cac62d40246be94fc6cd314b1bb3656" exitCode=0 Dec 10 15:27:16 crc kubenswrapper[4755]: I1210 15:27:16.659445 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" event={"ID":"b132a8b9-1c99-414d-8773-229bf36b305d","Type":"ContainerDied","Data":"a40c4bdaa23a60a665b8f565720d79b68cac62d40246be94fc6cd314b1bb3656"} Dec 10 15:27:16 crc kubenswrapper[4755]: I1210 15:27:16.714283 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84e93abf-4130-4ca2-b263-5a4b29729988-catalog-content\") pod \"84e93abf-4130-4ca2-b263-5a4b29729988\" (UID: \"84e93abf-4130-4ca2-b263-5a4b29729988\") " Dec 10 15:27:16 crc kubenswrapper[4755]: I1210 15:27:16.714352 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84e93abf-4130-4ca2-b263-5a4b29729988-utilities\") pod \"84e93abf-4130-4ca2-b263-5a4b29729988\" (UID: \"84e93abf-4130-4ca2-b263-5a4b29729988\") " Dec 10 15:27:16 crc kubenswrapper[4755]: I1210 15:27:16.714383 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbfl9\" (UniqueName: \"kubernetes.io/projected/84e93abf-4130-4ca2-b263-5a4b29729988-kube-api-access-kbfl9\") pod \"84e93abf-4130-4ca2-b263-5a4b29729988\" (UID: \"84e93abf-4130-4ca2-b263-5a4b29729988\") " Dec 10 15:27:16 crc kubenswrapper[4755]: I1210 15:27:16.715197 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84e93abf-4130-4ca2-b263-5a4b29729988-utilities" (OuterVolumeSpecName: "utilities") pod "84e93abf-4130-4ca2-b263-5a4b29729988" (UID: "84e93abf-4130-4ca2-b263-5a4b29729988"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:27:16 crc kubenswrapper[4755]: I1210 15:27:16.717853 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84e93abf-4130-4ca2-b263-5a4b29729988-kube-api-access-kbfl9" (OuterVolumeSpecName: "kube-api-access-kbfl9") pod "84e93abf-4130-4ca2-b263-5a4b29729988" (UID: "84e93abf-4130-4ca2-b263-5a4b29729988"). InnerVolumeSpecName "kube-api-access-kbfl9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:27:16 crc kubenswrapper[4755]: I1210 15:27:16.738997 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-ck7mx_609745aa-bdb9-440a-b029-fcd706ed320e/registry-server/0.log" Dec 10 15:27:16 crc kubenswrapper[4755]: I1210 15:27:16.739961 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ck7mx" Dec 10 15:27:16 crc kubenswrapper[4755]: I1210 15:27:16.744294 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84e93abf-4130-4ca2-b263-5a4b29729988-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "84e93abf-4130-4ca2-b263-5a4b29729988" (UID: "84e93abf-4130-4ca2-b263-5a4b29729988"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:27:16 crc kubenswrapper[4755]: I1210 15:27:16.815155 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/609745aa-bdb9-440a-b029-fcd706ed320e-catalog-content\") pod \"609745aa-bdb9-440a-b029-fcd706ed320e\" (UID: \"609745aa-bdb9-440a-b029-fcd706ed320e\") " Dec 10 15:27:16 crc kubenswrapper[4755]: I1210 15:27:16.815253 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzhqn\" (UniqueName: \"kubernetes.io/projected/609745aa-bdb9-440a-b029-fcd706ed320e-kube-api-access-nzhqn\") pod \"609745aa-bdb9-440a-b029-fcd706ed320e\" (UID: \"609745aa-bdb9-440a-b029-fcd706ed320e\") " Dec 10 15:27:16 crc kubenswrapper[4755]: I1210 15:27:16.815346 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/609745aa-bdb9-440a-b029-fcd706ed320e-utilities\") pod \"609745aa-bdb9-440a-b029-fcd706ed320e\" (UID: \"609745aa-bdb9-440a-b029-fcd706ed320e\") " Dec 10 15:27:16 crc kubenswrapper[4755]: I1210 15:27:16.815676 4755 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84e93abf-4130-4ca2-b263-5a4b29729988-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 15:27:16 crc kubenswrapper[4755]: I1210 15:27:16.815693 4755 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84e93abf-4130-4ca2-b263-5a4b29729988-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 15:27:16 crc kubenswrapper[4755]: I1210 15:27:16.815706 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kbfl9\" (UniqueName: \"kubernetes.io/projected/84e93abf-4130-4ca2-b263-5a4b29729988-kube-api-access-kbfl9\") on node \"crc\" DevicePath \"\"" Dec 10 15:27:16 crc kubenswrapper[4755]: I1210 15:27:16.816185 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/609745aa-bdb9-440a-b029-fcd706ed320e-utilities" (OuterVolumeSpecName: "utilities") pod 
"609745aa-bdb9-440a-b029-fcd706ed320e" (UID: "609745aa-bdb9-440a-b029-fcd706ed320e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:27:16 crc kubenswrapper[4755]: I1210 15:27:16.819531 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/609745aa-bdb9-440a-b029-fcd706ed320e-kube-api-access-nzhqn" (OuterVolumeSpecName: "kube-api-access-nzhqn") pod "609745aa-bdb9-440a-b029-fcd706ed320e" (UID: "609745aa-bdb9-440a-b029-fcd706ed320e"). InnerVolumeSpecName "kube-api-access-nzhqn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:27:16 crc kubenswrapper[4755]: I1210 15:27:16.866673 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/609745aa-bdb9-440a-b029-fcd706ed320e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "609745aa-bdb9-440a-b029-fcd706ed320e" (UID: "609745aa-bdb9-440a-b029-fcd706ed320e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:27:16 crc kubenswrapper[4755]: I1210 15:27:16.918196 4755 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/609745aa-bdb9-440a-b029-fcd706ed320e-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 15:27:16 crc kubenswrapper[4755]: I1210 15:27:16.918246 4755 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/609745aa-bdb9-440a-b029-fcd706ed320e-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 15:27:16 crc kubenswrapper[4755]: I1210 15:27:16.918261 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzhqn\" (UniqueName: \"kubernetes.io/projected/609745aa-bdb9-440a-b029-fcd706ed320e-kube-api-access-nzhqn\") on node \"crc\" DevicePath \"\"" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.666944 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-ck7mx_609745aa-bdb9-440a-b029-fcd706ed320e/registry-server/0.log" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.668082 4755 generic.go:334] "Generic (PLEG): container finished" podID="609745aa-bdb9-440a-b029-fcd706ed320e" containerID="28115a76a0e836ceaebb60c0c0cc98faefbb1ad872ffbffa327ce216f51930be" exitCode=137 Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.668153 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ck7mx" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.668152 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ck7mx" event={"ID":"609745aa-bdb9-440a-b029-fcd706ed320e","Type":"ContainerDied","Data":"28115a76a0e836ceaebb60c0c0cc98faefbb1ad872ffbffa327ce216f51930be"} Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.668278 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ck7mx" event={"ID":"609745aa-bdb9-440a-b029-fcd706ed320e","Type":"ContainerDied","Data":"afb60fc6d7c531dbd0d1160dd770f5108c23f7c16fa6f1f5c2240b901f36fda9"} Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.668302 4755 scope.go:117] "RemoveContainer" containerID="28115a76a0e836ceaebb60c0c0cc98faefbb1ad872ffbffa327ce216f51930be" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.669580 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9slgn_1d74a8b0-992e-46b5-9364-cc82c84ac2d8/registry-server/0.log" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.671605 4755 generic.go:334] "Generic (PLEG): container finished" podID="1d74a8b0-992e-46b5-9364-cc82c84ac2d8" containerID="1aceb1896102ed5a5b588a38ec5e4b03f41528f63e4f3394facaaa1092800d70" exitCode=137 Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.671701 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9slgn" event={"ID":"1d74a8b0-992e-46b5-9364-cc82c84ac2d8","Type":"ContainerDied","Data":"1aceb1896102ed5a5b588a38ec5e4b03f41528f63e4f3394facaaa1092800d70"} Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.671762 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9slgn" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.671784 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9slgn" event={"ID":"1d74a8b0-992e-46b5-9364-cc82c84ac2d8","Type":"ContainerDied","Data":"e6ec8fff08a463085a224c901fbae701bea3684412b82b1e1e094ccac93fe13d"} Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.675448 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" event={"ID":"b132a8b9-1c99-414d-8773-229bf36b305d","Type":"ContainerStarted","Data":"f86ac3eae537ebb7a44f728c6faf4f748c2bb88ff37965117af600f929730d8f"} Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.678190 4755 generic.go:334] "Generic (PLEG): container finished" podID="84e93abf-4130-4ca2-b263-5a4b29729988" containerID="8d3687ddcd7838061d167fe8a5b887b5c7bdfdab94447a28cbf4151500df21ea" exitCode=0 Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.678235 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wvrkv" event={"ID":"84e93abf-4130-4ca2-b263-5a4b29729988","Type":"ContainerDied","Data":"8d3687ddcd7838061d167fe8a5b887b5c7bdfdab94447a28cbf4151500df21ea"} Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.678263 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wvrkv" event={"ID":"84e93abf-4130-4ca2-b263-5a4b29729988","Type":"ContainerDied","Data":"f6cbc5551c6159a3087b26320685cbd001965246591af8c764208d6353dbc3a5"} Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.678596 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wvrkv" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.686737 4755 scope.go:117] "RemoveContainer" containerID="be0a1a5652b0266445b8ea6771ddb5b1a8f75a0d64880a6f083551cdfe041615" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.710275 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9slgn"] Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.719620 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9slgn"] Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.724720 4755 scope.go:117] "RemoveContainer" containerID="48f1cc7970e12f317265de2b7a2407ee911f71b657799143170033c7ac0fb292" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.727236 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ck7mx"] Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.730590 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ck7mx"] Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.730966 4755 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.731295 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://ef3871ddc16d2fc1b89a6f120390cf066424bd9ed4cca98f8dea4430c40c6c07" gracePeriod=15 Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.731491 4755 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://16e58653dca08c9b33671279553975bca9507ffeeefea161119cec624813f167" gracePeriod=15 Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.731525 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://08895c3ab20248c4bd70ccfae6442bdf8e76a602e605b043f0f00166624ada17" gracePeriod=15 Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.731551 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://c5b2467645116856afa1e1e37501c7b453b03da193a96d1075886b85839a86ac" gracePeriod=15 Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.731653 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://fea18ef6c319336321d97c2a55449903b837b5a47da785da3ff5308244751943" gracePeriod=15 Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.733543 4755 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 10 15:27:17 crc kubenswrapper[4755]: E1210 15:27:17.733811 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.733834 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 10 15:27:17 crc kubenswrapper[4755]: E1210 15:27:17.733877 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="609745aa-bdb9-440a-b029-fcd706ed320e" containerName="registry-server" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.733888 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="609745aa-bdb9-440a-b029-fcd706ed320e" containerName="registry-server" Dec 10 15:27:17 crc kubenswrapper[4755]: E1210 15:27:17.733900 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aca50107-d329-40c9-895f-a539e3f6afe3" containerName="pruner" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.733909 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="aca50107-d329-40c9-895f-a539e3f6afe3" containerName="pruner" Dec 10 15:27:17 crc kubenswrapper[4755]: E1210 15:27:17.733921 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84e93abf-4130-4ca2-b263-5a4b29729988" containerName="extract-utilities" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.733930 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="84e93abf-4130-4ca2-b263-5a4b29729988" containerName="extract-utilities" Dec 10 15:27:17 crc kubenswrapper[4755]: E1210 15:27:17.733946 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="609745aa-bdb9-440a-b029-fcd706ed320e" containerName="extract-utilities" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.733955 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="609745aa-bdb9-440a-b029-fcd706ed320e" 
containerName="extract-utilities" Dec 10 15:27:17 crc kubenswrapper[4755]: E1210 15:27:17.733963 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.733971 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 10 15:27:17 crc kubenswrapper[4755]: E1210 15:27:17.733982 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.733990 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 10 15:27:17 crc kubenswrapper[4755]: E1210 15:27:17.734001 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84e93abf-4130-4ca2-b263-5a4b29729988" containerName="registry-server" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.734010 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="84e93abf-4130-4ca2-b263-5a4b29729988" containerName="registry-server" Dec 10 15:27:17 crc kubenswrapper[4755]: E1210 15:27:17.734023 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.734031 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 10 15:27:17 crc kubenswrapper[4755]: E1210 15:27:17.734043 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d74a8b0-992e-46b5-9364-cc82c84ac2d8" containerName="extract-utilities" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.734051 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d74a8b0-992e-46b5-9364-cc82c84ac2d8" containerName="extract-utilities" Dec 10 15:27:17 crc kubenswrapper[4755]: E1210 15:27:17.734063 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f0e6886-1af8-466d-929d-0e0f6ade8a91" containerName="extract-utilities" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.734071 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f0e6886-1af8-466d-929d-0e0f6ade8a91" containerName="extract-utilities" Dec 10 15:27:17 crc kubenswrapper[4755]: E1210 15:27:17.734080 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d74a8b0-992e-46b5-9364-cc82c84ac2d8" containerName="extract-content" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.734088 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d74a8b0-992e-46b5-9364-cc82c84ac2d8" containerName="extract-content" Dec 10 15:27:17 crc kubenswrapper[4755]: E1210 15:27:17.734100 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f0e6886-1af8-466d-929d-0e0f6ade8a91" containerName="registry-server" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.734108 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f0e6886-1af8-466d-929d-0e0f6ade8a91" containerName="registry-server" Dec 10 15:27:17 crc kubenswrapper[4755]: E1210 15:27:17.734119 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.734127 4755 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 10 15:27:17 crc kubenswrapper[4755]: E1210 15:27:17.734136 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="609745aa-bdb9-440a-b029-fcd706ed320e" containerName="extract-content" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.734143 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="609745aa-bdb9-440a-b029-fcd706ed320e" containerName="extract-content" Dec 10 15:27:17 crc kubenswrapper[4755]: E1210 15:27:17.734153 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f0e6886-1af8-466d-929d-0e0f6ade8a91" containerName="extract-content" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.734162 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f0e6886-1af8-466d-929d-0e0f6ade8a91" containerName="extract-content" Dec 10 15:27:17 crc kubenswrapper[4755]: E1210 15:27:17.734171 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d74a8b0-992e-46b5-9364-cc82c84ac2d8" containerName="registry-server" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.734181 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d74a8b0-992e-46b5-9364-cc82c84ac2d8" containerName="registry-server" Dec 10 15:27:17 crc kubenswrapper[4755]: E1210 15:27:17.734193 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.734200 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 10 15:27:17 crc kubenswrapper[4755]: E1210 15:27:17.734209 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84e93abf-4130-4ca2-b263-5a4b29729988" containerName="extract-content" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.734216 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="84e93abf-4130-4ca2-b263-5a4b29729988" containerName="extract-content" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.740690 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.740754 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f0e6886-1af8-466d-929d-0e0f6ade8a91" containerName="registry-server" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.740768 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="84e93abf-4130-4ca2-b263-5a4b29729988" containerName="registry-server" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.740780 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d74a8b0-992e-46b5-9364-cc82c84ac2d8" containerName="registry-server" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.740798 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.740811 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.740821 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="aca50107-d329-40c9-895f-a539e3f6afe3" containerName="pruner" Dec 10 15:27:17 crc 
kubenswrapper[4755]: I1210 15:27:17.740833 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.740846 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="609745aa-bdb9-440a-b029-fcd706ed320e" containerName="registry-server" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.740866 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.753727 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wvrkv"] Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.755157 4755 scope.go:117] "RemoveContainer" containerID="28115a76a0e836ceaebb60c0c0cc98faefbb1ad872ffbffa327ce216f51930be" Dec 10 15:27:17 crc kubenswrapper[4755]: E1210 15:27:17.755936 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28115a76a0e836ceaebb60c0c0cc98faefbb1ad872ffbffa327ce216f51930be\": container with ID starting with 28115a76a0e836ceaebb60c0c0cc98faefbb1ad872ffbffa327ce216f51930be not found: ID does not exist" containerID="28115a76a0e836ceaebb60c0c0cc98faefbb1ad872ffbffa327ce216f51930be" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.755974 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28115a76a0e836ceaebb60c0c0cc98faefbb1ad872ffbffa327ce216f51930be"} err="failed to get container status \"28115a76a0e836ceaebb60c0c0cc98faefbb1ad872ffbffa327ce216f51930be\": rpc error: code = NotFound desc = could not find container \"28115a76a0e836ceaebb60c0c0cc98faefbb1ad872ffbffa327ce216f51930be\": container with ID starting with 28115a76a0e836ceaebb60c0c0cc98faefbb1ad872ffbffa327ce216f51930be not found: ID does not exist" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.755997 4755 scope.go:117] "RemoveContainer" containerID="be0a1a5652b0266445b8ea6771ddb5b1a8f75a0d64880a6f083551cdfe041615" Dec 10 15:27:17 crc kubenswrapper[4755]: E1210 15:27:17.757669 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be0a1a5652b0266445b8ea6771ddb5b1a8f75a0d64880a6f083551cdfe041615\": container with ID starting with be0a1a5652b0266445b8ea6771ddb5b1a8f75a0d64880a6f083551cdfe041615 not found: ID does not exist" containerID="be0a1a5652b0266445b8ea6771ddb5b1a8f75a0d64880a6f083551cdfe041615" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.757720 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be0a1a5652b0266445b8ea6771ddb5b1a8f75a0d64880a6f083551cdfe041615"} err="failed to get container status \"be0a1a5652b0266445b8ea6771ddb5b1a8f75a0d64880a6f083551cdfe041615\": rpc error: code = NotFound desc = could not find container \"be0a1a5652b0266445b8ea6771ddb5b1a8f75a0d64880a6f083551cdfe041615\": container with ID starting with be0a1a5652b0266445b8ea6771ddb5b1a8f75a0d64880a6f083551cdfe041615 not found: ID does not exist" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.757753 4755 scope.go:117] "RemoveContainer" containerID="48f1cc7970e12f317265de2b7a2407ee911f71b657799143170033c7ac0fb292" Dec 10 15:27:17 crc kubenswrapper[4755]: E1210 15:27:17.758594 4755 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"48f1cc7970e12f317265de2b7a2407ee911f71b657799143170033c7ac0fb292\": container with ID starting with 48f1cc7970e12f317265de2b7a2407ee911f71b657799143170033c7ac0fb292 not found: ID does not exist" containerID="48f1cc7970e12f317265de2b7a2407ee911f71b657799143170033c7ac0fb292" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.758635 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48f1cc7970e12f317265de2b7a2407ee911f71b657799143170033c7ac0fb292"} err="failed to get container status \"48f1cc7970e12f317265de2b7a2407ee911f71b657799143170033c7ac0fb292\": rpc error: code = NotFound desc = could not find container \"48f1cc7970e12f317265de2b7a2407ee911f71b657799143170033c7ac0fb292\": container with ID starting with 48f1cc7970e12f317265de2b7a2407ee911f71b657799143170033c7ac0fb292 not found: ID does not exist" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.758659 4755 scope.go:117] "RemoveContainer" containerID="1aceb1896102ed5a5b588a38ec5e4b03f41528f63e4f3394facaaa1092800d70" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.767810 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d74a8b0-992e-46b5-9364-cc82c84ac2d8" path="/var/lib/kubelet/pods/1d74a8b0-992e-46b5-9364-cc82c84ac2d8/volumes" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.768712 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="609745aa-bdb9-440a-b029-fcd706ed320e" path="/var/lib/kubelet/pods/609745aa-bdb9-440a-b029-fcd706ed320e/volumes" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.769620 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wvrkv"] Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.769656 4755 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.770395 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.802992 4755 scope.go:117] "RemoveContainer" containerID="d45e287edba741397b5c31bf51737e748d3ef6064328b228e267a903a8f52d02" Dec 10 15:27:17 crc kubenswrapper[4755]: E1210 15:27:17.812158 4755 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.18:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.827664 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.827708 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.827724 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.827756 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.827796 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.827842 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.827860 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.827883 4755 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.830391 4755 scope.go:117] "RemoveContainer" containerID="87ea92a9701ff0033bf5ae783ef59b7e3d51d71b60540c5806f4ff9a72a8c72e" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.862872 4755 scope.go:117] "RemoveContainer" containerID="1aceb1896102ed5a5b588a38ec5e4b03f41528f63e4f3394facaaa1092800d70" Dec 10 15:27:17 crc kubenswrapper[4755]: E1210 15:27:17.866672 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1aceb1896102ed5a5b588a38ec5e4b03f41528f63e4f3394facaaa1092800d70\": container with ID starting with 1aceb1896102ed5a5b588a38ec5e4b03f41528f63e4f3394facaaa1092800d70 not found: ID does not exist" containerID="1aceb1896102ed5a5b588a38ec5e4b03f41528f63e4f3394facaaa1092800d70" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.866731 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1aceb1896102ed5a5b588a38ec5e4b03f41528f63e4f3394facaaa1092800d70"} err="failed to get container status \"1aceb1896102ed5a5b588a38ec5e4b03f41528f63e4f3394facaaa1092800d70\": rpc error: code = NotFound desc = could not find container \"1aceb1896102ed5a5b588a38ec5e4b03f41528f63e4f3394facaaa1092800d70\": container with ID starting with 1aceb1896102ed5a5b588a38ec5e4b03f41528f63e4f3394facaaa1092800d70 not found: ID does not exist" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.866762 4755 scope.go:117] "RemoveContainer" containerID="d45e287edba741397b5c31bf51737e748d3ef6064328b228e267a903a8f52d02" Dec 10 15:27:17 crc kubenswrapper[4755]: E1210 15:27:17.867607 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d45e287edba741397b5c31bf51737e748d3ef6064328b228e267a903a8f52d02\": container with ID starting with d45e287edba741397b5c31bf51737e748d3ef6064328b228e267a903a8f52d02 not found: ID does not exist" containerID="d45e287edba741397b5c31bf51737e748d3ef6064328b228e267a903a8f52d02" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.867667 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d45e287edba741397b5c31bf51737e748d3ef6064328b228e267a903a8f52d02"} err="failed to get container status \"d45e287edba741397b5c31bf51737e748d3ef6064328b228e267a903a8f52d02\": rpc error: code = NotFound desc = could not find container \"d45e287edba741397b5c31bf51737e748d3ef6064328b228e267a903a8f52d02\": container with ID starting with d45e287edba741397b5c31bf51737e748d3ef6064328b228e267a903a8f52d02 not found: ID does not exist" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.867699 4755 scope.go:117] "RemoveContainer" containerID="87ea92a9701ff0033bf5ae783ef59b7e3d51d71b60540c5806f4ff9a72a8c72e" Dec 10 15:27:17 crc kubenswrapper[4755]: E1210 15:27:17.868416 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87ea92a9701ff0033bf5ae783ef59b7e3d51d71b60540c5806f4ff9a72a8c72e\": container with ID starting with 87ea92a9701ff0033bf5ae783ef59b7e3d51d71b60540c5806f4ff9a72a8c72e not found: ID does not exist" 
containerID="87ea92a9701ff0033bf5ae783ef59b7e3d51d71b60540c5806f4ff9a72a8c72e" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.868449 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87ea92a9701ff0033bf5ae783ef59b7e3d51d71b60540c5806f4ff9a72a8c72e"} err="failed to get container status \"87ea92a9701ff0033bf5ae783ef59b7e3d51d71b60540c5806f4ff9a72a8c72e\": rpc error: code = NotFound desc = could not find container \"87ea92a9701ff0033bf5ae783ef59b7e3d51d71b60540c5806f4ff9a72a8c72e\": container with ID starting with 87ea92a9701ff0033bf5ae783ef59b7e3d51d71b60540c5806f4ff9a72a8c72e not found: ID does not exist" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.868525 4755 scope.go:117] "RemoveContainer" containerID="8d3687ddcd7838061d167fe8a5b887b5c7bdfdab94447a28cbf4151500df21ea" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.883826 4755 scope.go:117] "RemoveContainer" containerID="3679e8337fb5fc7c55e8040662b356f2841a98d51ddef403fbc0f045cc7ddead" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.909868 4755 scope.go:117] "RemoveContainer" containerID="82f1c5f7620fcd0558a34015b6415531594ca50858e2a57733b627e51f7b6315" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.924286 4755 scope.go:117] "RemoveContainer" containerID="8d3687ddcd7838061d167fe8a5b887b5c7bdfdab94447a28cbf4151500df21ea" Dec 10 15:27:17 crc kubenswrapper[4755]: E1210 15:27:17.924955 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d3687ddcd7838061d167fe8a5b887b5c7bdfdab94447a28cbf4151500df21ea\": container with ID starting with 8d3687ddcd7838061d167fe8a5b887b5c7bdfdab94447a28cbf4151500df21ea not found: ID does not exist" containerID="8d3687ddcd7838061d167fe8a5b887b5c7bdfdab94447a28cbf4151500df21ea" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.924990 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d3687ddcd7838061d167fe8a5b887b5c7bdfdab94447a28cbf4151500df21ea"} err="failed to get container status \"8d3687ddcd7838061d167fe8a5b887b5c7bdfdab94447a28cbf4151500df21ea\": rpc error: code = NotFound desc = could not find container \"8d3687ddcd7838061d167fe8a5b887b5c7bdfdab94447a28cbf4151500df21ea\": container with ID starting with 8d3687ddcd7838061d167fe8a5b887b5c7bdfdab94447a28cbf4151500df21ea not found: ID does not exist" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.925011 4755 scope.go:117] "RemoveContainer" containerID="3679e8337fb5fc7c55e8040662b356f2841a98d51ddef403fbc0f045cc7ddead" Dec 10 15:27:17 crc kubenswrapper[4755]: E1210 15:27:17.925393 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3679e8337fb5fc7c55e8040662b356f2841a98d51ddef403fbc0f045cc7ddead\": container with ID starting with 3679e8337fb5fc7c55e8040662b356f2841a98d51ddef403fbc0f045cc7ddead not found: ID does not exist" containerID="3679e8337fb5fc7c55e8040662b356f2841a98d51ddef403fbc0f045cc7ddead" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.925413 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3679e8337fb5fc7c55e8040662b356f2841a98d51ddef403fbc0f045cc7ddead"} err="failed to get container status \"3679e8337fb5fc7c55e8040662b356f2841a98d51ddef403fbc0f045cc7ddead\": rpc error: code = NotFound desc = could not find container 
\"3679e8337fb5fc7c55e8040662b356f2841a98d51ddef403fbc0f045cc7ddead\": container with ID starting with 3679e8337fb5fc7c55e8040662b356f2841a98d51ddef403fbc0f045cc7ddead not found: ID does not exist" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.925425 4755 scope.go:117] "RemoveContainer" containerID="82f1c5f7620fcd0558a34015b6415531594ca50858e2a57733b627e51f7b6315" Dec 10 15:27:17 crc kubenswrapper[4755]: E1210 15:27:17.925742 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82f1c5f7620fcd0558a34015b6415531594ca50858e2a57733b627e51f7b6315\": container with ID starting with 82f1c5f7620fcd0558a34015b6415531594ca50858e2a57733b627e51f7b6315 not found: ID does not exist" containerID="82f1c5f7620fcd0558a34015b6415531594ca50858e2a57733b627e51f7b6315" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.925781 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82f1c5f7620fcd0558a34015b6415531594ca50858e2a57733b627e51f7b6315"} err="failed to get container status \"82f1c5f7620fcd0558a34015b6415531594ca50858e2a57733b627e51f7b6315\": rpc error: code = NotFound desc = could not find container \"82f1c5f7620fcd0558a34015b6415531594ca50858e2a57733b627e51f7b6315\": container with ID starting with 82f1c5f7620fcd0558a34015b6415531594ca50858e2a57733b627e51f7b6315 not found: ID does not exist" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.929482 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.929533 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.929558 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.929593 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.929646 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.929667 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.929701 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.929678 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.929724 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.929731 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.929679 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.929746 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.929826 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.929892 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.929906 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 15:27:17 crc kubenswrapper[4755]: I1210 15:27:17.930001 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 15:27:18 crc kubenswrapper[4755]: I1210 15:27:18.112729 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 15:27:18 crc kubenswrapper[4755]: W1210 15:27:18.128868 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-f41b23222e8d6ca745e4f5987d27825726b43ddf4b0fc592d424df4d50a0d3d3 WatchSource:0}: Error finding container f41b23222e8d6ca745e4f5987d27825726b43ddf4b0fc592d424df4d50a0d3d3: Status 404 returned error can't find the container with id f41b23222e8d6ca745e4f5987d27825726b43ddf4b0fc592d424df4d50a0d3d3 Dec 10 15:27:18 crc kubenswrapper[4755]: E1210 15:27:18.131652 4755 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.18:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187fe427b437a7c1 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:27:18.130927553 +0000 UTC m=+234.731811185,LastTimestamp:2025-12-10 15:27:18.130927553 +0000 UTC m=+234.731811185,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:27:18 crc kubenswrapper[4755]: I1210 15:27:18.693347 4755 generic.go:334] "Generic (PLEG): container finished" podID="6299042d-c9f7-4651-a239-5b75017b83cb" containerID="af1e30fd16c8b4352b5df55f5369d576846ddd951f1f37bcab70ae26e41f8135" exitCode=0 Dec 10 15:27:18 crc kubenswrapper[4755]: I1210 15:27:18.693649 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"6299042d-c9f7-4651-a239-5b75017b83cb","Type":"ContainerDied","Data":"af1e30fd16c8b4352b5df55f5369d576846ddd951f1f37bcab70ae26e41f8135"} Dec 10 15:27:18 crc kubenswrapper[4755]: I1210 15:27:18.694590 4755 status_manager.go:851] "Failed to get status for pod" podUID="6299042d-c9f7-4651-a239-5b75017b83cb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.18:6443: connect: connection refused" Dec 10 15:27:18 crc kubenswrapper[4755]: I1210 15:27:18.695243 4755 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"ad72216bc68127f2b6c40ff2777d63bef5b422a2451861dbf06d428429ed4282"} Dec 10 15:27:18 crc kubenswrapper[4755]: I1210 15:27:18.695262 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"f41b23222e8d6ca745e4f5987d27825726b43ddf4b0fc592d424df4d50a0d3d3"} Dec 10 15:27:18 crc kubenswrapper[4755]: E1210 15:27:18.695808 4755 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.18:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 15:27:18 crc kubenswrapper[4755]: I1210 15:27:18.695898 4755 status_manager.go:851] "Failed to get status for pod" podUID="6299042d-c9f7-4651-a239-5b75017b83cb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.18:6443: connect: connection refused" Dec 10 15:27:18 crc kubenswrapper[4755]: I1210 15:27:18.700787 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 10 15:27:18 crc kubenswrapper[4755]: I1210 15:27:18.701428 4755 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="16e58653dca08c9b33671279553975bca9507ffeeefea161119cec624813f167" exitCode=0 Dec 10 15:27:18 crc kubenswrapper[4755]: I1210 15:27:18.701455 4755 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="fea18ef6c319336321d97c2a55449903b837b5a47da785da3ff5308244751943" exitCode=0 Dec 10 15:27:18 crc kubenswrapper[4755]: I1210 15:27:18.701514 4755 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="08895c3ab20248c4bd70ccfae6442bdf8e76a602e605b043f0f00166624ada17" exitCode=0 Dec 10 15:27:18 crc kubenswrapper[4755]: I1210 15:27:18.701524 4755 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c5b2467645116856afa1e1e37501c7b453b03da193a96d1075886b85839a86ac" exitCode=2 Dec 10 15:27:19 crc kubenswrapper[4755]: I1210 15:27:19.764119 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84e93abf-4130-4ca2-b263-5a4b29729988" path="/var/lib/kubelet/pods/84e93abf-4130-4ca2-b263-5a4b29729988/volumes" Dec 10 15:27:20 crc kubenswrapper[4755]: I1210 15:27:20.071673 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 10 15:27:20 crc kubenswrapper[4755]: I1210 15:27:20.072966 4755 status_manager.go:851] "Failed to get status for pod" podUID="6299042d-c9f7-4651-a239-5b75017b83cb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.18:6443: connect: connection refused" Dec 10 15:27:20 crc kubenswrapper[4755]: I1210 15:27:20.170111 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6299042d-c9f7-4651-a239-5b75017b83cb-kubelet-dir\") pod \"6299042d-c9f7-4651-a239-5b75017b83cb\" (UID: \"6299042d-c9f7-4651-a239-5b75017b83cb\") " Dec 10 15:27:20 crc kubenswrapper[4755]: I1210 15:27:20.170369 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6299042d-c9f7-4651-a239-5b75017b83cb-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "6299042d-c9f7-4651-a239-5b75017b83cb" (UID: "6299042d-c9f7-4651-a239-5b75017b83cb"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 15:27:20 crc kubenswrapper[4755]: I1210 15:27:20.170644 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6299042d-c9f7-4651-a239-5b75017b83cb-var-lock\") pod \"6299042d-c9f7-4651-a239-5b75017b83cb\" (UID: \"6299042d-c9f7-4651-a239-5b75017b83cb\") " Dec 10 15:27:20 crc kubenswrapper[4755]: I1210 15:27:20.170688 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6299042d-c9f7-4651-a239-5b75017b83cb-kube-api-access\") pod \"6299042d-c9f7-4651-a239-5b75017b83cb\" (UID: \"6299042d-c9f7-4651-a239-5b75017b83cb\") " Dec 10 15:27:20 crc kubenswrapper[4755]: I1210 15:27:20.170721 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6299042d-c9f7-4651-a239-5b75017b83cb-var-lock" (OuterVolumeSpecName: "var-lock") pod "6299042d-c9f7-4651-a239-5b75017b83cb" (UID: "6299042d-c9f7-4651-a239-5b75017b83cb"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 15:27:20 crc kubenswrapper[4755]: I1210 15:27:20.170932 4755 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6299042d-c9f7-4651-a239-5b75017b83cb-var-lock\") on node \"crc\" DevicePath \"\"" Dec 10 15:27:20 crc kubenswrapper[4755]: I1210 15:27:20.170957 4755 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6299042d-c9f7-4651-a239-5b75017b83cb-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 10 15:27:20 crc kubenswrapper[4755]: I1210 15:27:20.176799 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6299042d-c9f7-4651-a239-5b75017b83cb-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "6299042d-c9f7-4651-a239-5b75017b83cb" (UID: "6299042d-c9f7-4651-a239-5b75017b83cb"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:27:20 crc kubenswrapper[4755]: I1210 15:27:20.272456 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6299042d-c9f7-4651-a239-5b75017b83cb-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 10 15:27:20 crc kubenswrapper[4755]: I1210 15:27:20.715499 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 10 15:27:20 crc kubenswrapper[4755]: I1210 15:27:20.716247 4755 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ef3871ddc16d2fc1b89a6f120390cf066424bd9ed4cca98f8dea4430c40c6c07" exitCode=0 Dec 10 15:27:20 crc kubenswrapper[4755]: I1210 15:27:20.717723 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"6299042d-c9f7-4651-a239-5b75017b83cb","Type":"ContainerDied","Data":"cf7148ab81d3fdb535c5eae766a48fdd27f676157e127f950db030ff5050ca88"} Dec 10 15:27:20 crc kubenswrapper[4755]: I1210 15:27:20.717759 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf7148ab81d3fdb535c5eae766a48fdd27f676157e127f950db030ff5050ca88" Dec 10 15:27:20 crc kubenswrapper[4755]: I1210 15:27:20.717792 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 10 15:27:20 crc kubenswrapper[4755]: I1210 15:27:20.732303 4755 status_manager.go:851] "Failed to get status for pod" podUID="6299042d-c9f7-4651-a239-5b75017b83cb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.18:6443: connect: connection refused" Dec 10 15:27:20 crc kubenswrapper[4755]: E1210 15:27:20.827168 4755 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.18:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-mqq47" volumeName="registry-storage" Dec 10 15:27:20 crc kubenswrapper[4755]: I1210 15:27:20.857176 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 10 15:27:20 crc kubenswrapper[4755]: I1210 15:27:20.858297 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:27:20 crc kubenswrapper[4755]: I1210 15:27:20.858947 4755 status_manager.go:851] "Failed to get status for pod" podUID="6299042d-c9f7-4651-a239-5b75017b83cb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.18:6443: connect: connection refused" Dec 10 15:27:20 crc kubenswrapper[4755]: I1210 15:27:20.859453 4755 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.18:6443: connect: connection refused" Dec 10 15:27:20 crc kubenswrapper[4755]: I1210 15:27:20.982275 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 10 15:27:20 crc kubenswrapper[4755]: I1210 15:27:20.982327 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 10 15:27:20 crc kubenswrapper[4755]: I1210 15:27:20.982376 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 10 15:27:20 crc kubenswrapper[4755]: I1210 15:27:20.982398 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 15:27:20 crc kubenswrapper[4755]: I1210 15:27:20.982453 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 15:27:20 crc kubenswrapper[4755]: I1210 15:27:20.982543 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 15:27:20 crc kubenswrapper[4755]: I1210 15:27:20.982692 4755 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Dec 10 15:27:20 crc kubenswrapper[4755]: I1210 15:27:20.982705 4755 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 10 15:27:20 crc kubenswrapper[4755]: I1210 15:27:20.982714 4755 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 10 15:27:21 crc kubenswrapper[4755]: I1210 15:27:21.724785 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 10 15:27:21 crc kubenswrapper[4755]: I1210 15:27:21.725680 4755 scope.go:117] "RemoveContainer" containerID="16e58653dca08c9b33671279553975bca9507ffeeefea161119cec624813f167" Dec 10 15:27:21 crc kubenswrapper[4755]: I1210 15:27:21.725744 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:27:21 crc kubenswrapper[4755]: I1210 15:27:21.740021 4755 status_manager.go:851] "Failed to get status for pod" podUID="6299042d-c9f7-4651-a239-5b75017b83cb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.18:6443: connect: connection refused" Dec 10 15:27:21 crc kubenswrapper[4755]: I1210 15:27:21.740231 4755 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.18:6443: connect: connection refused" Dec 10 15:27:21 crc kubenswrapper[4755]: I1210 15:27:21.742312 4755 scope.go:117] "RemoveContainer" containerID="fea18ef6c319336321d97c2a55449903b837b5a47da785da3ff5308244751943" Dec 10 15:27:21 crc kubenswrapper[4755]: I1210 15:27:21.757277 4755 scope.go:117] "RemoveContainer" containerID="08895c3ab20248c4bd70ccfae6442bdf8e76a602e605b043f0f00166624ada17" Dec 10 15:27:21 crc kubenswrapper[4755]: I1210 15:27:21.764023 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Dec 10 15:27:21 crc kubenswrapper[4755]: I1210 15:27:21.770894 4755 scope.go:117] "RemoveContainer" containerID="c5b2467645116856afa1e1e37501c7b453b03da193a96d1075886b85839a86ac" Dec 10 15:27:21 crc kubenswrapper[4755]: I1210 15:27:21.781300 4755 scope.go:117] "RemoveContainer" containerID="ef3871ddc16d2fc1b89a6f120390cf066424bd9ed4cca98f8dea4430c40c6c07" Dec 10 15:27:21 crc kubenswrapper[4755]: I1210 15:27:21.795020 4755 scope.go:117] "RemoveContainer" containerID="17ca0b6cc1b8b0bf9c34e03c494d1fc1594db013eccf862027ec8749ea51f6e0" Dec 10 15:27:23 crc kubenswrapper[4755]: E1210 15:27:23.605998 4755 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.18:6443: connect: connection refused" Dec 10 15:27:23 crc kubenswrapper[4755]: E1210 15:27:23.607340 4755 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.18:6443: connect: connection refused" Dec 10 15:27:23 crc kubenswrapper[4755]: E1210 15:27:23.607759 4755 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.18:6443: connect: connection refused" Dec 10 15:27:23 crc kubenswrapper[4755]: E1210 15:27:23.608243 4755 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.18:6443: connect: connection refused" Dec 10 15:27:23 crc kubenswrapper[4755]: E1210 15:27:23.608584 4755 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.18:6443: connect: connection refused" Dec 10 15:27:23 crc kubenswrapper[4755]: I1210 15:27:23.608602 4755 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Dec 10 15:27:23 crc kubenswrapper[4755]: E1210 15:27:23.608849 4755 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.18:6443: connect: connection refused" interval="200ms" Dec 10 15:27:23 crc kubenswrapper[4755]: I1210 15:27:23.761307 4755 status_manager.go:851] "Failed to get status for pod" podUID="6299042d-c9f7-4651-a239-5b75017b83cb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.18:6443: connect: connection refused" Dec 10 15:27:23 crc kubenswrapper[4755]: E1210 15:27:23.809745 4755 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.18:6443: connect: connection refused" interval="400ms" Dec 10 15:27:24 crc kubenswrapper[4755]: E1210 15:27:24.211082 4755 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.18:6443: connect: connection refused" interval="800ms" Dec 10 15:27:25 crc kubenswrapper[4755]: E1210 15:27:25.020796 4755 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.18:6443: connect: connection refused" interval="1.6s" Dec 10 15:27:25 crc kubenswrapper[4755]: E1210 15:27:25.490091 4755 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.18:6443: connect: connection refused" 
event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187fe427b437a7c1 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 15:27:18.130927553 +0000 UTC m=+234.731811185,LastTimestamp:2025-12-10 15:27:18.130927553 +0000 UTC m=+234.731811185,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 15:27:26 crc kubenswrapper[4755]: E1210 15:27:26.622529 4755 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.18:6443: connect: connection refused" interval="3.2s" Dec 10 15:27:29 crc kubenswrapper[4755]: E1210 15:27:29.824184 4755 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.18:6443: connect: connection refused" interval="6.4s" Dec 10 15:27:32 crc kubenswrapper[4755]: I1210 15:27:32.757007 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:27:32 crc kubenswrapper[4755]: I1210 15:27:32.758657 4755 status_manager.go:851] "Failed to get status for pod" podUID="6299042d-c9f7-4651-a239-5b75017b83cb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.18:6443: connect: connection refused" Dec 10 15:27:32 crc kubenswrapper[4755]: I1210 15:27:32.774678 4755 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="59e6db97-7e22-4fec-924e-20e90f463887" Dec 10 15:27:32 crc kubenswrapper[4755]: I1210 15:27:32.774719 4755 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="59e6db97-7e22-4fec-924e-20e90f463887" Dec 10 15:27:32 crc kubenswrapper[4755]: E1210 15:27:32.775245 4755 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.18:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:27:32 crc kubenswrapper[4755]: I1210 15:27:32.775761 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:27:32 crc kubenswrapper[4755]: I1210 15:27:32.785503 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 10 15:27:32 crc kubenswrapper[4755]: I1210 15:27:32.785567 4755 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="104d730ea666ffd2d639ba7fc8d1ac5b444586788a62656fd260cb249f82b1ae" exitCode=1 Dec 10 15:27:32 crc kubenswrapper[4755]: I1210 15:27:32.785603 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"104d730ea666ffd2d639ba7fc8d1ac5b444586788a62656fd260cb249f82b1ae"} Dec 10 15:27:32 crc kubenswrapper[4755]: I1210 15:27:32.786143 4755 scope.go:117] "RemoveContainer" containerID="104d730ea666ffd2d639ba7fc8d1ac5b444586788a62656fd260cb249f82b1ae" Dec 10 15:27:32 crc kubenswrapper[4755]: I1210 15:27:32.786414 4755 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.18:6443: connect: connection refused" Dec 10 15:27:32 crc kubenswrapper[4755]: I1210 15:27:32.786942 4755 status_manager.go:851] "Failed to get status for pod" podUID="6299042d-c9f7-4651-a239-5b75017b83cb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.18:6443: connect: connection refused" Dec 10 15:27:32 crc kubenswrapper[4755]: W1210 15:27:32.803447 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-63ceb662aaa1fd5038acaa0f8d9f2096fa660b12f5988af9c41f2c8b9bc21b9e WatchSource:0}: Error finding container 63ceb662aaa1fd5038acaa0f8d9f2096fa660b12f5988af9c41f2c8b9bc21b9e: Status 404 returned error can't find the container with id 63ceb662aaa1fd5038acaa0f8d9f2096fa660b12f5988af9c41f2c8b9bc21b9e Dec 10 15:27:33 crc kubenswrapper[4755]: I1210 15:27:33.764951 4755 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.18:6443: connect: connection refused" Dec 10 15:27:33 crc kubenswrapper[4755]: I1210 15:27:33.765595 4755 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.18:6443: connect: connection refused" Dec 10 15:27:33 crc kubenswrapper[4755]: I1210 15:27:33.765845 4755 status_manager.go:851] "Failed to get status for pod" podUID="6299042d-c9f7-4651-a239-5b75017b83cb" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.18:6443: connect: connection refused" Dec 10 15:27:33 crc kubenswrapper[4755]: I1210 15:27:33.796839 4755 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="cf157d608bad694e55c13d32d78cc978f68feb49cad956a01d87dba8f1c5a0ed" exitCode=0 Dec 10 15:27:33 crc kubenswrapper[4755]: I1210 15:27:33.796961 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"cf157d608bad694e55c13d32d78cc978f68feb49cad956a01d87dba8f1c5a0ed"} Dec 10 15:27:33 crc kubenswrapper[4755]: I1210 15:27:33.797034 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"63ceb662aaa1fd5038acaa0f8d9f2096fa660b12f5988af9c41f2c8b9bc21b9e"} Dec 10 15:27:33 crc kubenswrapper[4755]: I1210 15:27:33.797759 4755 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="59e6db97-7e22-4fec-924e-20e90f463887" Dec 10 15:27:33 crc kubenswrapper[4755]: I1210 15:27:33.797809 4755 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="59e6db97-7e22-4fec-924e-20e90f463887" Dec 10 15:27:33 crc kubenswrapper[4755]: I1210 15:27:33.798091 4755 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.18:6443: connect: connection refused" Dec 10 15:27:33 crc kubenswrapper[4755]: E1210 15:27:33.798459 4755 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.18:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:27:33 crc kubenswrapper[4755]: I1210 15:27:33.799103 4755 status_manager.go:851] "Failed to get status for pod" podUID="6299042d-c9f7-4651-a239-5b75017b83cb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.18:6443: connect: connection refused" Dec 10 15:27:33 crc kubenswrapper[4755]: I1210 15:27:33.799409 4755 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.18:6443: connect: connection refused" Dec 10 15:27:33 crc kubenswrapper[4755]: I1210 15:27:33.801270 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 10 15:27:33 crc kubenswrapper[4755]: I1210 15:27:33.801335 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"61ffc5514f84ff7bd82f4c86c9f767bb13f7448c8e0e71805fdac7804a0935a9"} Dec 10 15:27:33 crc kubenswrapper[4755]: I1210 15:27:33.802148 4755 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.18:6443: connect: connection refused" Dec 10 15:27:33 crc kubenswrapper[4755]: I1210 15:27:33.802617 4755 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.18:6443: connect: connection refused" Dec 10 15:27:33 crc kubenswrapper[4755]: I1210 15:27:33.803021 4755 status_manager.go:851] "Failed to get status for pod" podUID="6299042d-c9f7-4651-a239-5b75017b83cb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.18:6443: connect: connection refused" Dec 10 15:27:34 crc kubenswrapper[4755]: I1210 15:27:34.383819 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 15:27:34 crc kubenswrapper[4755]: I1210 15:27:34.627744 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 15:27:34 crc kubenswrapper[4755]: I1210 15:27:34.631362 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 15:27:34 crc kubenswrapper[4755]: I1210 15:27:34.808617 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"4f7bb1b70b6c115753c8edf25674755806b9073c1b803d0b35e6206d234bf16e"} Dec 10 15:27:34 crc kubenswrapper[4755]: I1210 15:27:34.808668 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b04bff881f450baacf6681109eb9caf8bc10f1c891e282abea01c1fc86327abe"} Dec 10 15:27:34 crc kubenswrapper[4755]: I1210 15:27:34.808687 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e626d53021e7c64535cb37d1ef6014715c89c88a101854a5afe91a8b185b07d0"} Dec 10 15:27:34 crc kubenswrapper[4755]: I1210 15:27:34.808699 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f5b530af1c4266a5bebf0a7f61c74cae57ebe568fc398230e989a08c2e3bcd10"} Dec 10 15:27:35 crc kubenswrapper[4755]: I1210 15:27:35.819671 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"660a728ebbda300a1a90d5d006b871200f8875e16619581dcd7d0a71bfdd5b79"} Dec 10 15:27:35 crc 
kubenswrapper[4755]: I1210 15:27:35.820077 4755 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="59e6db97-7e22-4fec-924e-20e90f463887" Dec 10 15:27:35 crc kubenswrapper[4755]: I1210 15:27:35.820104 4755 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="59e6db97-7e22-4fec-924e-20e90f463887" Dec 10 15:27:37 crc kubenswrapper[4755]: I1210 15:27:37.775845 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:27:37 crc kubenswrapper[4755]: I1210 15:27:37.777239 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:27:37 crc kubenswrapper[4755]: I1210 15:27:37.781427 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:27:40 crc kubenswrapper[4755]: I1210 15:27:40.830747 4755 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:27:40 crc kubenswrapper[4755]: I1210 15:27:40.849022 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:27:40 crc kubenswrapper[4755]: I1210 15:27:40.849074 4755 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="59e6db97-7e22-4fec-924e-20e90f463887" Dec 10 15:27:40 crc kubenswrapper[4755]: I1210 15:27:40.849097 4755 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="59e6db97-7e22-4fec-924e-20e90f463887" Dec 10 15:27:40 crc kubenswrapper[4755]: I1210 15:27:40.852649 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:27:40 crc kubenswrapper[4755]: I1210 15:27:40.856642 4755 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="0d0474e4-71da-4e5c-8668-50d031e457b9" Dec 10 15:27:41 crc kubenswrapper[4755]: I1210 15:27:41.685498 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-f47gb" podUID="1645de9b-f227-4d9d-885f-ffd58e5bef69" containerName="oauth-openshift" containerID="cri-o://2f21b859d93a04a953b6fe2a81322f9db43ef41d669bfe80eb66c37c1b8f8559" gracePeriod=15 Dec 10 15:27:41 crc kubenswrapper[4755]: I1210 15:27:41.854591 4755 generic.go:334] "Generic (PLEG): container finished" podID="1645de9b-f227-4d9d-885f-ffd58e5bef69" containerID="2f21b859d93a04a953b6fe2a81322f9db43ef41d669bfe80eb66c37c1b8f8559" exitCode=0 Dec 10 15:27:41 crc kubenswrapper[4755]: I1210 15:27:41.854668 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-f47gb" event={"ID":"1645de9b-f227-4d9d-885f-ffd58e5bef69","Type":"ContainerDied","Data":"2f21b859d93a04a953b6fe2a81322f9db43ef41d669bfe80eb66c37c1b8f8559"} Dec 10 15:27:41 crc kubenswrapper[4755]: I1210 15:27:41.854862 4755 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="59e6db97-7e22-4fec-924e-20e90f463887" Dec 10 15:27:41 crc kubenswrapper[4755]: I1210 15:27:41.854874 4755 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="59e6db97-7e22-4fec-924e-20e90f463887" Dec 10 15:27:42 crc kubenswrapper[4755]: I1210 15:27:42.217570 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-f47gb" Dec 10 15:27:42 crc kubenswrapper[4755]: I1210 15:27:42.244519 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1645de9b-f227-4d9d-885f-ffd58e5bef69-audit-policies\") pod \"1645de9b-f227-4d9d-885f-ffd58e5bef69\" (UID: \"1645de9b-f227-4d9d-885f-ffd58e5bef69\") " Dec 10 15:27:42 crc kubenswrapper[4755]: I1210 15:27:42.244584 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1645de9b-f227-4d9d-885f-ffd58e5bef69-v4-0-config-system-trusted-ca-bundle\") pod \"1645de9b-f227-4d9d-885f-ffd58e5bef69\" (UID: \"1645de9b-f227-4d9d-885f-ffd58e5bef69\") " Dec 10 15:27:42 crc kubenswrapper[4755]: I1210 15:27:42.244613 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1645de9b-f227-4d9d-885f-ffd58e5bef69-v4-0-config-user-template-provider-selection\") pod \"1645de9b-f227-4d9d-885f-ffd58e5bef69\" (UID: \"1645de9b-f227-4d9d-885f-ffd58e5bef69\") " Dec 10 15:27:42 crc kubenswrapper[4755]: I1210 15:27:42.244638 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1645de9b-f227-4d9d-885f-ffd58e5bef69-v4-0-config-system-cliconfig\") pod \"1645de9b-f227-4d9d-885f-ffd58e5bef69\" (UID: \"1645de9b-f227-4d9d-885f-ffd58e5bef69\") " Dec 10 15:27:42 crc kubenswrapper[4755]: I1210 15:27:42.244661 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1645de9b-f227-4d9d-885f-ffd58e5bef69-v4-0-config-system-session\") pod \"1645de9b-f227-4d9d-885f-ffd58e5bef69\" (UID: \"1645de9b-f227-4d9d-885f-ffd58e5bef69\") " Dec 10 15:27:42 crc kubenswrapper[4755]: I1210 15:27:42.244679 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1645de9b-f227-4d9d-885f-ffd58e5bef69-v4-0-config-system-router-certs\") pod \"1645de9b-f227-4d9d-885f-ffd58e5bef69\" (UID: \"1645de9b-f227-4d9d-885f-ffd58e5bef69\") " Dec 10 15:27:42 crc kubenswrapper[4755]: I1210 15:27:42.244726 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1645de9b-f227-4d9d-885f-ffd58e5bef69-v4-0-config-system-service-ca\") pod \"1645de9b-f227-4d9d-885f-ffd58e5bef69\" (UID: \"1645de9b-f227-4d9d-885f-ffd58e5bef69\") " Dec 10 15:27:42 crc kubenswrapper[4755]: I1210 15:27:42.244755 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1645de9b-f227-4d9d-885f-ffd58e5bef69-v4-0-config-user-idp-0-file-data\") pod \"1645de9b-f227-4d9d-885f-ffd58e5bef69\" (UID: \"1645de9b-f227-4d9d-885f-ffd58e5bef69\") " Dec 10 15:27:42 crc kubenswrapper[4755]: I1210 15:27:42.244771 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/1645de9b-f227-4d9d-885f-ffd58e5bef69-v4-0-config-user-template-error\") pod \"1645de9b-f227-4d9d-885f-ffd58e5bef69\" (UID: \"1645de9b-f227-4d9d-885f-ffd58e5bef69\") " Dec 10 15:27:42 crc kubenswrapper[4755]: I1210 15:27:42.244791 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1645de9b-f227-4d9d-885f-ffd58e5bef69-v4-0-config-system-ocp-branding-template\") pod \"1645de9b-f227-4d9d-885f-ffd58e5bef69\" (UID: \"1645de9b-f227-4d9d-885f-ffd58e5bef69\") " Dec 10 15:27:42 crc kubenswrapper[4755]: I1210 15:27:42.244819 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1645de9b-f227-4d9d-885f-ffd58e5bef69-v4-0-config-system-serving-cert\") pod \"1645de9b-f227-4d9d-885f-ffd58e5bef69\" (UID: \"1645de9b-f227-4d9d-885f-ffd58e5bef69\") " Dec 10 15:27:42 crc kubenswrapper[4755]: I1210 15:27:42.244847 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1645de9b-f227-4d9d-885f-ffd58e5bef69-v4-0-config-user-template-login\") pod \"1645de9b-f227-4d9d-885f-ffd58e5bef69\" (UID: \"1645de9b-f227-4d9d-885f-ffd58e5bef69\") " Dec 10 15:27:42 crc kubenswrapper[4755]: I1210 15:27:42.244866 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1645de9b-f227-4d9d-885f-ffd58e5bef69-audit-dir\") pod \"1645de9b-f227-4d9d-885f-ffd58e5bef69\" (UID: \"1645de9b-f227-4d9d-885f-ffd58e5bef69\") " Dec 10 15:27:42 crc kubenswrapper[4755]: I1210 15:27:42.244885 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhg7z\" (UniqueName: \"kubernetes.io/projected/1645de9b-f227-4d9d-885f-ffd58e5bef69-kube-api-access-bhg7z\") pod \"1645de9b-f227-4d9d-885f-ffd58e5bef69\" (UID: \"1645de9b-f227-4d9d-885f-ffd58e5bef69\") " Dec 10 15:27:42 crc kubenswrapper[4755]: I1210 15:27:42.245574 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1645de9b-f227-4d9d-885f-ffd58e5bef69-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "1645de9b-f227-4d9d-885f-ffd58e5bef69" (UID: "1645de9b-f227-4d9d-885f-ffd58e5bef69"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:27:42 crc kubenswrapper[4755]: I1210 15:27:42.245599 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1645de9b-f227-4d9d-885f-ffd58e5bef69-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "1645de9b-f227-4d9d-885f-ffd58e5bef69" (UID: "1645de9b-f227-4d9d-885f-ffd58e5bef69"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:27:42 crc kubenswrapper[4755]: I1210 15:27:42.245645 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1645de9b-f227-4d9d-885f-ffd58e5bef69-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "1645de9b-f227-4d9d-885f-ffd58e5bef69" (UID: "1645de9b-f227-4d9d-885f-ffd58e5bef69"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 15:27:42 crc kubenswrapper[4755]: I1210 15:27:42.245821 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1645de9b-f227-4d9d-885f-ffd58e5bef69-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "1645de9b-f227-4d9d-885f-ffd58e5bef69" (UID: "1645de9b-f227-4d9d-885f-ffd58e5bef69"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:27:42 crc kubenswrapper[4755]: I1210 15:27:42.246321 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1645de9b-f227-4d9d-885f-ffd58e5bef69-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "1645de9b-f227-4d9d-885f-ffd58e5bef69" (UID: "1645de9b-f227-4d9d-885f-ffd58e5bef69"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:27:42 crc kubenswrapper[4755]: I1210 15:27:42.249927 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1645de9b-f227-4d9d-885f-ffd58e5bef69-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "1645de9b-f227-4d9d-885f-ffd58e5bef69" (UID: "1645de9b-f227-4d9d-885f-ffd58e5bef69"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:27:42 crc kubenswrapper[4755]: I1210 15:27:42.250174 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1645de9b-f227-4d9d-885f-ffd58e5bef69-kube-api-access-bhg7z" (OuterVolumeSpecName: "kube-api-access-bhg7z") pod "1645de9b-f227-4d9d-885f-ffd58e5bef69" (UID: "1645de9b-f227-4d9d-885f-ffd58e5bef69"). InnerVolumeSpecName "kube-api-access-bhg7z". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:27:42 crc kubenswrapper[4755]: I1210 15:27:42.250393 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1645de9b-f227-4d9d-885f-ffd58e5bef69-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "1645de9b-f227-4d9d-885f-ffd58e5bef69" (UID: "1645de9b-f227-4d9d-885f-ffd58e5bef69"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:27:42 crc kubenswrapper[4755]: I1210 15:27:42.251590 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1645de9b-f227-4d9d-885f-ffd58e5bef69-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "1645de9b-f227-4d9d-885f-ffd58e5bef69" (UID: "1645de9b-f227-4d9d-885f-ffd58e5bef69"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:27:42 crc kubenswrapper[4755]: I1210 15:27:42.257746 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1645de9b-f227-4d9d-885f-ffd58e5bef69-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "1645de9b-f227-4d9d-885f-ffd58e5bef69" (UID: "1645de9b-f227-4d9d-885f-ffd58e5bef69"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:27:42 crc kubenswrapper[4755]: I1210 15:27:42.258702 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1645de9b-f227-4d9d-885f-ffd58e5bef69-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "1645de9b-f227-4d9d-885f-ffd58e5bef69" (UID: "1645de9b-f227-4d9d-885f-ffd58e5bef69"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:27:42 crc kubenswrapper[4755]: I1210 15:27:42.259648 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1645de9b-f227-4d9d-885f-ffd58e5bef69-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "1645de9b-f227-4d9d-885f-ffd58e5bef69" (UID: "1645de9b-f227-4d9d-885f-ffd58e5bef69"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:27:42 crc kubenswrapper[4755]: I1210 15:27:42.263406 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1645de9b-f227-4d9d-885f-ffd58e5bef69-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "1645de9b-f227-4d9d-885f-ffd58e5bef69" (UID: "1645de9b-f227-4d9d-885f-ffd58e5bef69"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:27:42 crc kubenswrapper[4755]: I1210 15:27:42.263579 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1645de9b-f227-4d9d-885f-ffd58e5bef69-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "1645de9b-f227-4d9d-885f-ffd58e5bef69" (UID: "1645de9b-f227-4d9d-885f-ffd58e5bef69"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:27:42 crc kubenswrapper[4755]: I1210 15:27:42.345461 4755 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1645de9b-f227-4d9d-885f-ffd58e5bef69-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:27:42 crc kubenswrapper[4755]: I1210 15:27:42.345523 4755 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1645de9b-f227-4d9d-885f-ffd58e5bef69-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 10 15:27:42 crc kubenswrapper[4755]: I1210 15:27:42.345537 4755 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1645de9b-f227-4d9d-885f-ffd58e5bef69-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 10 15:27:42 crc kubenswrapper[4755]: I1210 15:27:42.345553 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhg7z\" (UniqueName: \"kubernetes.io/projected/1645de9b-f227-4d9d-885f-ffd58e5bef69-kube-api-access-bhg7z\") on node \"crc\" DevicePath \"\"" Dec 10 15:27:42 crc kubenswrapper[4755]: I1210 15:27:42.345565 4755 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1645de9b-f227-4d9d-885f-ffd58e5bef69-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 10 15:27:42 crc kubenswrapper[4755]: I1210 15:27:42.345575 4755 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1645de9b-f227-4d9d-885f-ffd58e5bef69-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:27:42 crc kubenswrapper[4755]: I1210 15:27:42.345584 4755 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1645de9b-f227-4d9d-885f-ffd58e5bef69-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 10 15:27:42 crc kubenswrapper[4755]: I1210 15:27:42.345594 4755 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1645de9b-f227-4d9d-885f-ffd58e5bef69-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 10 15:27:42 crc kubenswrapper[4755]: I1210 15:27:42.345603 4755 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1645de9b-f227-4d9d-885f-ffd58e5bef69-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 10 15:27:42 crc kubenswrapper[4755]: I1210 15:27:42.345612 4755 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1645de9b-f227-4d9d-885f-ffd58e5bef69-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 10 15:27:42 crc kubenswrapper[4755]: I1210 15:27:42.345621 4755 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1645de9b-f227-4d9d-885f-ffd58e5bef69-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 10 15:27:42 crc kubenswrapper[4755]: I1210 15:27:42.345629 4755 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1645de9b-f227-4d9d-885f-ffd58e5bef69-v4-0-config-user-idp-0-file-data\") on node 
\"crc\" DevicePath \"\"" Dec 10 15:27:42 crc kubenswrapper[4755]: I1210 15:27:42.345637 4755 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1645de9b-f227-4d9d-885f-ffd58e5bef69-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 10 15:27:42 crc kubenswrapper[4755]: I1210 15:27:42.345646 4755 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1645de9b-f227-4d9d-885f-ffd58e5bef69-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 10 15:27:42 crc kubenswrapper[4755]: I1210 15:27:42.864675 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-f47gb" Dec 10 15:27:42 crc kubenswrapper[4755]: I1210 15:27:42.864836 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-f47gb" event={"ID":"1645de9b-f227-4d9d-885f-ffd58e5bef69","Type":"ContainerDied","Data":"d931452a3974fc8e3333733937d9d22330cb6644bfc53e5f825a9c54f964a6aa"} Dec 10 15:27:42 crc kubenswrapper[4755]: I1210 15:27:42.864922 4755 scope.go:117] "RemoveContainer" containerID="2f21b859d93a04a953b6fe2a81322f9db43ef41d669bfe80eb66c37c1b8f8559" Dec 10 15:27:42 crc kubenswrapper[4755]: I1210 15:27:42.865614 4755 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="59e6db97-7e22-4fec-924e-20e90f463887" Dec 10 15:27:42 crc kubenswrapper[4755]: I1210 15:27:42.865632 4755 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="59e6db97-7e22-4fec-924e-20e90f463887" Dec 10 15:27:43 crc kubenswrapper[4755]: I1210 15:27:43.769216 4755 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="0d0474e4-71da-4e5c-8668-50d031e457b9" Dec 10 15:27:44 crc kubenswrapper[4755]: I1210 15:27:44.387761 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 15:27:50 crc kubenswrapper[4755]: I1210 15:27:50.598065 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 10 15:27:50 crc kubenswrapper[4755]: I1210 15:27:50.659843 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 10 15:27:50 crc kubenswrapper[4755]: I1210 15:27:50.927521 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 10 15:27:51 crc kubenswrapper[4755]: I1210 15:27:51.021566 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 10 15:27:51 crc kubenswrapper[4755]: I1210 15:27:51.310440 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 10 15:27:51 crc kubenswrapper[4755]: I1210 15:27:51.469607 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 10 15:27:51 crc kubenswrapper[4755]: I1210 15:27:51.488999 4755 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 10 15:27:51 crc kubenswrapper[4755]: I1210 15:27:51.649790 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 10 15:27:51 crc kubenswrapper[4755]: I1210 15:27:51.742590 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 10 15:27:51 crc kubenswrapper[4755]: I1210 15:27:51.980889 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 10 15:27:53 crc kubenswrapper[4755]: I1210 15:27:53.064250 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 10 15:27:53 crc kubenswrapper[4755]: I1210 15:27:53.089327 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 10 15:27:53 crc kubenswrapper[4755]: I1210 15:27:53.283459 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 10 15:27:53 crc kubenswrapper[4755]: I1210 15:27:53.454308 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 10 15:27:53 crc kubenswrapper[4755]: I1210 15:27:53.543611 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 10 15:27:53 crc kubenswrapper[4755]: I1210 15:27:53.554177 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 10 15:27:53 crc kubenswrapper[4755]: I1210 15:27:53.558421 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 10 15:27:53 crc kubenswrapper[4755]: I1210 15:27:53.641775 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 10 15:27:53 crc kubenswrapper[4755]: I1210 15:27:53.741056 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 10 15:27:53 crc kubenswrapper[4755]: I1210 15:27:53.781086 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 10 15:27:53 crc kubenswrapper[4755]: I1210 15:27:53.817975 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 10 15:27:53 crc kubenswrapper[4755]: I1210 15:27:53.835713 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 10 15:27:53 crc kubenswrapper[4755]: I1210 15:27:53.931803 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 10 15:27:54 crc kubenswrapper[4755]: I1210 15:27:54.011962 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 10 15:27:54 crc kubenswrapper[4755]: I1210 15:27:54.012164 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 10 15:27:54 crc kubenswrapper[4755]: I1210 15:27:54.032694 4755 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 10 15:27:54 crc kubenswrapper[4755]: I1210 15:27:54.039206 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 10 15:27:54 crc kubenswrapper[4755]: I1210 15:27:54.163377 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 10 15:27:54 crc kubenswrapper[4755]: I1210 15:27:54.230753 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 10 15:27:54 crc kubenswrapper[4755]: I1210 15:27:54.409136 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 10 15:27:54 crc kubenswrapper[4755]: I1210 15:27:54.445503 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 10 15:27:54 crc kubenswrapper[4755]: I1210 15:27:54.478650 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 10 15:27:54 crc kubenswrapper[4755]: I1210 15:27:54.491907 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 10 15:27:54 crc kubenswrapper[4755]: I1210 15:27:54.548524 4755 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 10 15:27:54 crc kubenswrapper[4755]: I1210 15:27:54.596728 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 10 15:27:54 crc kubenswrapper[4755]: I1210 15:27:54.685220 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 10 15:27:54 crc kubenswrapper[4755]: I1210 15:27:54.706035 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 10 15:27:54 crc kubenswrapper[4755]: I1210 15:27:54.734743 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 10 15:27:54 crc kubenswrapper[4755]: I1210 15:27:54.936410 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 10 15:27:54 crc kubenswrapper[4755]: I1210 15:27:54.937178 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 10 15:27:54 crc kubenswrapper[4755]: I1210 15:27:54.937186 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 10 15:27:54 crc kubenswrapper[4755]: I1210 15:27:54.943722 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 10 15:27:55 crc kubenswrapper[4755]: I1210 15:27:55.074320 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 10 15:27:55 crc kubenswrapper[4755]: I1210 15:27:55.092803 4755 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 10 15:27:55 crc kubenswrapper[4755]: I1210 15:27:55.280365 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 10 15:27:55 crc kubenswrapper[4755]: I1210 15:27:55.296774 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 10 15:27:55 crc kubenswrapper[4755]: I1210 15:27:55.327940 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 10 15:27:55 crc kubenswrapper[4755]: I1210 15:27:55.387682 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 10 15:27:55 crc kubenswrapper[4755]: I1210 15:27:55.427061 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 10 15:27:55 crc kubenswrapper[4755]: I1210 15:27:55.450099 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 10 15:27:55 crc kubenswrapper[4755]: I1210 15:27:55.450491 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 10 15:27:55 crc kubenswrapper[4755]: I1210 15:27:55.462973 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 10 15:27:55 crc kubenswrapper[4755]: I1210 15:27:55.520760 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 10 15:27:55 crc kubenswrapper[4755]: I1210 15:27:55.531622 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 10 15:27:55 crc kubenswrapper[4755]: I1210 15:27:55.541036 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 10 15:27:55 crc kubenswrapper[4755]: I1210 15:27:55.660023 4755 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 10 15:27:55 crc kubenswrapper[4755]: I1210 15:27:55.664212 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 10 15:27:55 crc kubenswrapper[4755]: I1210 15:27:55.741600 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 10 15:27:55 crc kubenswrapper[4755]: I1210 15:27:55.816273 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 10 15:27:55 crc kubenswrapper[4755]: I1210 15:27:55.908661 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 10 15:27:55 crc kubenswrapper[4755]: I1210 15:27:55.914150 4755 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 10 15:27:56 crc kubenswrapper[4755]: I1210 15:27:56.063302 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 10 15:27:56 crc kubenswrapper[4755]: I1210 15:27:56.085221 4755 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 10 15:27:56 crc kubenswrapper[4755]: I1210 15:27:56.280609 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 10 15:27:56 crc kubenswrapper[4755]: I1210 15:27:56.284302 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 10 15:27:56 crc kubenswrapper[4755]: I1210 15:27:56.306177 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 10 15:27:56 crc kubenswrapper[4755]: I1210 15:27:56.308151 4755 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 10 15:27:56 crc kubenswrapper[4755]: I1210 15:27:56.315037 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-f47gb","openshift-kube-apiserver/kube-apiserver-crc"] Dec 10 15:27:56 crc kubenswrapper[4755]: I1210 15:27:56.315105 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 10 15:27:56 crc kubenswrapper[4755]: I1210 15:27:56.320778 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 15:27:56 crc kubenswrapper[4755]: I1210 15:27:56.341337 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=16.341317541 podStartE2EDuration="16.341317541s" podCreationTimestamp="2025-12-10 15:27:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:27:56.337332616 +0000 UTC m=+272.938216258" watchObservedRunningTime="2025-12-10 15:27:56.341317541 +0000 UTC m=+272.942201183" Dec 10 15:27:56 crc kubenswrapper[4755]: I1210 15:27:56.422760 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 10 15:27:56 crc kubenswrapper[4755]: I1210 15:27:56.641873 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 10 15:27:56 crc kubenswrapper[4755]: I1210 15:27:56.643996 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 10 15:27:56 crc kubenswrapper[4755]: I1210 15:27:56.892342 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 10 15:27:56 crc kubenswrapper[4755]: I1210 15:27:56.956513 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 10 15:27:56 crc kubenswrapper[4755]: I1210 15:27:56.989295 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 10 15:27:57 crc kubenswrapper[4755]: I1210 15:27:57.065509 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 10 15:27:57 crc kubenswrapper[4755]: I1210 15:27:57.168612 4755 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 10 15:27:57 crc kubenswrapper[4755]: I1210 15:27:57.238860 4755 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 10 15:27:57 crc kubenswrapper[4755]: I1210 15:27:57.393169 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 10 15:27:57 crc kubenswrapper[4755]: I1210 15:27:57.540882 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 10 15:27:57 crc kubenswrapper[4755]: I1210 15:27:57.709362 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 10 15:27:57 crc kubenswrapper[4755]: I1210 15:27:57.726515 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 10 15:27:57 crc kubenswrapper[4755]: I1210 15:27:57.768075 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1645de9b-f227-4d9d-885f-ffd58e5bef69" path="/var/lib/kubelet/pods/1645de9b-f227-4d9d-885f-ffd58e5bef69/volumes" Dec 10 15:27:57 crc kubenswrapper[4755]: I1210 15:27:57.835441 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 10 15:27:57 crc kubenswrapper[4755]: I1210 15:27:57.927574 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 10 15:27:57 crc kubenswrapper[4755]: I1210 15:27:57.928691 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 10 15:27:57 crc kubenswrapper[4755]: I1210 15:27:57.965454 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 10 15:27:58 crc kubenswrapper[4755]: I1210 15:27:58.141571 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 10 15:27:58 crc kubenswrapper[4755]: I1210 15:27:58.225910 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 10 15:27:58 crc kubenswrapper[4755]: I1210 15:27:58.241648 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 10 15:27:58 crc kubenswrapper[4755]: I1210 15:27:58.245616 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 10 15:27:58 crc kubenswrapper[4755]: I1210 15:27:58.256925 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 10 15:27:58 crc kubenswrapper[4755]: I1210 15:27:58.288264 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 10 15:27:58 crc kubenswrapper[4755]: I1210 15:27:58.322529 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 10 15:27:58 crc kubenswrapper[4755]: I1210 15:27:58.347103 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 10 15:27:58 crc kubenswrapper[4755]: I1210 15:27:58.425320 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 10 15:27:58 crc kubenswrapper[4755]: 
I1210 15:27:58.439350 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 10 15:27:58 crc kubenswrapper[4755]: I1210 15:27:58.536345 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 10 15:27:58 crc kubenswrapper[4755]: I1210 15:27:58.540304 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 10 15:27:58 crc kubenswrapper[4755]: I1210 15:27:58.552926 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 10 15:27:58 crc kubenswrapper[4755]: I1210 15:27:58.574363 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 10 15:27:58 crc kubenswrapper[4755]: I1210 15:27:58.674137 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 10 15:27:58 crc kubenswrapper[4755]: I1210 15:27:58.765269 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 10 15:27:58 crc kubenswrapper[4755]: I1210 15:27:58.780526 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 10 15:27:58 crc kubenswrapper[4755]: I1210 15:27:58.862493 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 10 15:27:58 crc kubenswrapper[4755]: I1210 15:27:58.882164 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 10 15:27:58 crc kubenswrapper[4755]: I1210 15:27:58.986946 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 10 15:27:59 crc kubenswrapper[4755]: I1210 15:27:59.033378 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 10 15:27:59 crc kubenswrapper[4755]: I1210 15:27:59.106108 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 10 15:27:59 crc kubenswrapper[4755]: I1210 15:27:59.137901 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 10 15:27:59 crc kubenswrapper[4755]: I1210 15:27:59.142482 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 10 15:27:59 crc kubenswrapper[4755]: I1210 15:27:59.234504 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 10 15:27:59 crc kubenswrapper[4755]: I1210 15:27:59.282294 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 10 15:27:59 crc kubenswrapper[4755]: I1210 15:27:59.308929 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 10 15:27:59 crc kubenswrapper[4755]: I1210 15:27:59.313028 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 10 15:27:59 crc kubenswrapper[4755]: I1210 15:27:59.374826 4755 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 10 15:27:59 crc kubenswrapper[4755]: I1210 15:27:59.387427 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 10 15:27:59 crc kubenswrapper[4755]: I1210 15:27:59.422333 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 10 15:27:59 crc kubenswrapper[4755]: I1210 15:27:59.429799 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 10 15:27:59 crc kubenswrapper[4755]: I1210 15:27:59.438055 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 10 15:27:59 crc kubenswrapper[4755]: I1210 15:27:59.439238 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 10 15:27:59 crc kubenswrapper[4755]: I1210 15:27:59.652082 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 10 15:27:59 crc kubenswrapper[4755]: I1210 15:27:59.728762 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 10 15:27:59 crc kubenswrapper[4755]: I1210 15:27:59.738150 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 10 15:27:59 crc kubenswrapper[4755]: I1210 15:27:59.783543 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 10 15:27:59 crc kubenswrapper[4755]: I1210 15:27:59.924902 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 10 15:28:00 crc kubenswrapper[4755]: I1210 15:28:00.020268 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 10 15:28:00 crc kubenswrapper[4755]: I1210 15:28:00.023147 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 10 15:28:00 crc kubenswrapper[4755]: I1210 15:28:00.083483 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 10 15:28:00 crc kubenswrapper[4755]: I1210 15:28:00.211509 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 10 15:28:00 crc kubenswrapper[4755]: I1210 15:28:00.220080 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 10 15:28:00 crc kubenswrapper[4755]: I1210 15:28:00.244880 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 10 15:28:00 crc kubenswrapper[4755]: I1210 15:28:00.271537 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 10 15:28:00 crc kubenswrapper[4755]: I1210 15:28:00.292671 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 10 15:28:00 crc 
kubenswrapper[4755]: I1210 15:28:00.300788 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 10 15:28:00 crc kubenswrapper[4755]: I1210 15:28:00.380958 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 10 15:28:00 crc kubenswrapper[4755]: I1210 15:28:00.467589 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 10 15:28:00 crc kubenswrapper[4755]: I1210 15:28:00.559522 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 10 15:28:00 crc kubenswrapper[4755]: I1210 15:28:00.633075 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 10 15:28:00 crc kubenswrapper[4755]: I1210 15:28:00.641796 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 10 15:28:00 crc kubenswrapper[4755]: I1210 15:28:00.650431 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 10 15:28:00 crc kubenswrapper[4755]: I1210 15:28:00.844527 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 10 15:28:00 crc kubenswrapper[4755]: I1210 15:28:00.898220 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 10 15:28:00 crc kubenswrapper[4755]: I1210 15:28:00.934927 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 10 15:28:00 crc kubenswrapper[4755]: I1210 15:28:00.957779 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 10 15:28:00 crc kubenswrapper[4755]: I1210 15:28:00.964602 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 10 15:28:00 crc kubenswrapper[4755]: I1210 15:28:00.993227 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 10 15:28:01 crc kubenswrapper[4755]: I1210 15:28:01.061818 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 10 15:28:01 crc kubenswrapper[4755]: I1210 15:28:01.080329 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 10 15:28:01 crc kubenswrapper[4755]: I1210 15:28:01.083247 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 10 15:28:01 crc kubenswrapper[4755]: I1210 15:28:01.158496 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 10 15:28:01 crc kubenswrapper[4755]: I1210 15:28:01.213052 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-fdd74686d-bf2d2"] Dec 10 15:28:01 crc kubenswrapper[4755]: E1210 15:28:01.213391 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6299042d-c9f7-4651-a239-5b75017b83cb" containerName="installer" Dec 10 15:28:01 crc 
kubenswrapper[4755]: I1210 15:28:01.213416 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="6299042d-c9f7-4651-a239-5b75017b83cb" containerName="installer" Dec 10 15:28:01 crc kubenswrapper[4755]: E1210 15:28:01.213437 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1645de9b-f227-4d9d-885f-ffd58e5bef69" containerName="oauth-openshift" Dec 10 15:28:01 crc kubenswrapper[4755]: I1210 15:28:01.213447 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="1645de9b-f227-4d9d-885f-ffd58e5bef69" containerName="oauth-openshift" Dec 10 15:28:01 crc kubenswrapper[4755]: I1210 15:28:01.213591 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="6299042d-c9f7-4651-a239-5b75017b83cb" containerName="installer" Dec 10 15:28:01 crc kubenswrapper[4755]: I1210 15:28:01.213605 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="1645de9b-f227-4d9d-885f-ffd58e5bef69" containerName="oauth-openshift" Dec 10 15:28:01 crc kubenswrapper[4755]: I1210 15:28:01.214117 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-fdd74686d-bf2d2" Dec 10 15:28:01 crc kubenswrapper[4755]: I1210 15:28:01.216137 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 10 15:28:01 crc kubenswrapper[4755]: I1210 15:28:01.216779 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 10 15:28:01 crc kubenswrapper[4755]: I1210 15:28:01.217052 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 10 15:28:01 crc kubenswrapper[4755]: I1210 15:28:01.217399 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 10 15:28:01 crc kubenswrapper[4755]: I1210 15:28:01.218264 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 10 15:28:01 crc kubenswrapper[4755]: I1210 15:28:01.218827 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 10 15:28:01 crc kubenswrapper[4755]: I1210 15:28:01.219010 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 10 15:28:01 crc kubenswrapper[4755]: I1210 15:28:01.219124 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 10 15:28:01 crc kubenswrapper[4755]: I1210 15:28:01.219245 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 10 15:28:01 crc kubenswrapper[4755]: I1210 15:28:01.219940 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 10 15:28:01 crc kubenswrapper[4755]: I1210 15:28:01.220764 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 10 15:28:01 crc kubenswrapper[4755]: I1210 15:28:01.221545 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 10 15:28:01 crc kubenswrapper[4755]: I1210 15:28:01.234263 4755 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 10 15:28:01 crc kubenswrapper[4755]: I1210 15:28:01.262037 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 10 15:28:01 crc kubenswrapper[4755]: I1210 15:28:01.262894 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 10 15:28:01 crc kubenswrapper[4755]: I1210 15:28:01.271485 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 10 15:28:01 crc kubenswrapper[4755]: I1210 15:28:01.277749 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 10 15:28:01 crc kubenswrapper[4755]: I1210 15:28:01.286899 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 10 15:28:01 crc kubenswrapper[4755]: I1210 15:28:01.297586 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 10 15:28:01 crc kubenswrapper[4755]: I1210 15:28:01.308940 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wn5x\" (UniqueName: \"kubernetes.io/projected/7f2fcbc4-6b08-4b78-b5ec-2f26793af411-kube-api-access-2wn5x\") pod \"oauth-openshift-fdd74686d-bf2d2\" (UID: \"7f2fcbc4-6b08-4b78-b5ec-2f26793af411\") " pod="openshift-authentication/oauth-openshift-fdd74686d-bf2d2" Dec 10 15:28:01 crc kubenswrapper[4755]: I1210 15:28:01.308985 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7f2fcbc4-6b08-4b78-b5ec-2f26793af411-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-fdd74686d-bf2d2\" (UID: \"7f2fcbc4-6b08-4b78-b5ec-2f26793af411\") " pod="openshift-authentication/oauth-openshift-fdd74686d-bf2d2" Dec 10 15:28:01 crc kubenswrapper[4755]: I1210 15:28:01.309013 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7f2fcbc4-6b08-4b78-b5ec-2f26793af411-audit-policies\") pod \"oauth-openshift-fdd74686d-bf2d2\" (UID: \"7f2fcbc4-6b08-4b78-b5ec-2f26793af411\") " pod="openshift-authentication/oauth-openshift-fdd74686d-bf2d2" Dec 10 15:28:01 crc kubenswrapper[4755]: I1210 15:28:01.309090 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7f2fcbc4-6b08-4b78-b5ec-2f26793af411-audit-dir\") pod \"oauth-openshift-fdd74686d-bf2d2\" (UID: \"7f2fcbc4-6b08-4b78-b5ec-2f26793af411\") " pod="openshift-authentication/oauth-openshift-fdd74686d-bf2d2" Dec 10 15:28:01 crc kubenswrapper[4755]: I1210 15:28:01.309125 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7f2fcbc4-6b08-4b78-b5ec-2f26793af411-v4-0-config-system-serving-cert\") pod \"oauth-openshift-fdd74686d-bf2d2\" (UID: \"7f2fcbc4-6b08-4b78-b5ec-2f26793af411\") " pod="openshift-authentication/oauth-openshift-fdd74686d-bf2d2" Dec 10 15:28:01 crc kubenswrapper[4755]: I1210 15:28:01.309150 4755 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7f2fcbc4-6b08-4b78-b5ec-2f26793af411-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-fdd74686d-bf2d2\" (UID: \"7f2fcbc4-6b08-4b78-b5ec-2f26793af411\") " pod="openshift-authentication/oauth-openshift-fdd74686d-bf2d2" Dec 10 15:28:01 crc kubenswrapper[4755]: I1210 15:28:01.309181 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7f2fcbc4-6b08-4b78-b5ec-2f26793af411-v4-0-config-user-template-error\") pod \"oauth-openshift-fdd74686d-bf2d2\" (UID: \"7f2fcbc4-6b08-4b78-b5ec-2f26793af411\") " pod="openshift-authentication/oauth-openshift-fdd74686d-bf2d2" Dec 10 15:28:01 crc kubenswrapper[4755]: I1210 15:28:01.309202 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f2fcbc4-6b08-4b78-b5ec-2f26793af411-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-fdd74686d-bf2d2\" (UID: \"7f2fcbc4-6b08-4b78-b5ec-2f26793af411\") " pod="openshift-authentication/oauth-openshift-fdd74686d-bf2d2" Dec 10 15:28:01 crc kubenswrapper[4755]: I1210 15:28:01.309224 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7f2fcbc4-6b08-4b78-b5ec-2f26793af411-v4-0-config-system-session\") pod \"oauth-openshift-fdd74686d-bf2d2\" (UID: \"7f2fcbc4-6b08-4b78-b5ec-2f26793af411\") " pod="openshift-authentication/oauth-openshift-fdd74686d-bf2d2" Dec 10 15:28:01 crc kubenswrapper[4755]: I1210 15:28:01.309249 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7f2fcbc4-6b08-4b78-b5ec-2f26793af411-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-fdd74686d-bf2d2\" (UID: \"7f2fcbc4-6b08-4b78-b5ec-2f26793af411\") " pod="openshift-authentication/oauth-openshift-fdd74686d-bf2d2" Dec 10 15:28:01 crc kubenswrapper[4755]: I1210 15:28:01.309281 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7f2fcbc4-6b08-4b78-b5ec-2f26793af411-v4-0-config-system-router-certs\") pod \"oauth-openshift-fdd74686d-bf2d2\" (UID: \"7f2fcbc4-6b08-4b78-b5ec-2f26793af411\") " pod="openshift-authentication/oauth-openshift-fdd74686d-bf2d2" Dec 10 15:28:01 crc kubenswrapper[4755]: I1210 15:28:01.309313 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7f2fcbc4-6b08-4b78-b5ec-2f26793af411-v4-0-config-system-cliconfig\") pod \"oauth-openshift-fdd74686d-bf2d2\" (UID: \"7f2fcbc4-6b08-4b78-b5ec-2f26793af411\") " pod="openshift-authentication/oauth-openshift-fdd74686d-bf2d2" Dec 10 15:28:01 crc kubenswrapper[4755]: I1210 15:28:01.309343 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7f2fcbc4-6b08-4b78-b5ec-2f26793af411-v4-0-config-user-template-login\") pod \"oauth-openshift-fdd74686d-bf2d2\" (UID: \"7f2fcbc4-6b08-4b78-b5ec-2f26793af411\") " 
pod="openshift-authentication/oauth-openshift-fdd74686d-bf2d2" Dec 10 15:28:01 crc kubenswrapper[4755]: I1210 15:28:01.309362 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7f2fcbc4-6b08-4b78-b5ec-2f26793af411-v4-0-config-system-service-ca\") pod \"oauth-openshift-fdd74686d-bf2d2\" (UID: \"7f2fcbc4-6b08-4b78-b5ec-2f26793af411\") " pod="openshift-authentication/oauth-openshift-fdd74686d-bf2d2" Dec 10 15:28:01 crc kubenswrapper[4755]: I1210 15:28:01.410395 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7f2fcbc4-6b08-4b78-b5ec-2f26793af411-audit-policies\") pod \"oauth-openshift-fdd74686d-bf2d2\" (UID: \"7f2fcbc4-6b08-4b78-b5ec-2f26793af411\") " pod="openshift-authentication/oauth-openshift-fdd74686d-bf2d2" Dec 10 15:28:01 crc kubenswrapper[4755]: I1210 15:28:01.410506 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7f2fcbc4-6b08-4b78-b5ec-2f26793af411-audit-dir\") pod \"oauth-openshift-fdd74686d-bf2d2\" (UID: \"7f2fcbc4-6b08-4b78-b5ec-2f26793af411\") " pod="openshift-authentication/oauth-openshift-fdd74686d-bf2d2" Dec 10 15:28:01 crc kubenswrapper[4755]: I1210 15:28:01.410528 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7f2fcbc4-6b08-4b78-b5ec-2f26793af411-v4-0-config-system-serving-cert\") pod \"oauth-openshift-fdd74686d-bf2d2\" (UID: \"7f2fcbc4-6b08-4b78-b5ec-2f26793af411\") " pod="openshift-authentication/oauth-openshift-fdd74686d-bf2d2" Dec 10 15:28:01 crc kubenswrapper[4755]: I1210 15:28:01.410555 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7f2fcbc4-6b08-4b78-b5ec-2f26793af411-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-fdd74686d-bf2d2\" (UID: \"7f2fcbc4-6b08-4b78-b5ec-2f26793af411\") " pod="openshift-authentication/oauth-openshift-fdd74686d-bf2d2" Dec 10 15:28:01 crc kubenswrapper[4755]: I1210 15:28:01.410590 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7f2fcbc4-6b08-4b78-b5ec-2f26793af411-v4-0-config-user-template-error\") pod \"oauth-openshift-fdd74686d-bf2d2\" (UID: \"7f2fcbc4-6b08-4b78-b5ec-2f26793af411\") " pod="openshift-authentication/oauth-openshift-fdd74686d-bf2d2" Dec 10 15:28:01 crc kubenswrapper[4755]: I1210 15:28:01.410612 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f2fcbc4-6b08-4b78-b5ec-2f26793af411-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-fdd74686d-bf2d2\" (UID: \"7f2fcbc4-6b08-4b78-b5ec-2f26793af411\") " pod="openshift-authentication/oauth-openshift-fdd74686d-bf2d2" Dec 10 15:28:01 crc kubenswrapper[4755]: I1210 15:28:01.410628 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7f2fcbc4-6b08-4b78-b5ec-2f26793af411-v4-0-config-system-session\") pod \"oauth-openshift-fdd74686d-bf2d2\" (UID: \"7f2fcbc4-6b08-4b78-b5ec-2f26793af411\") " 
pod="openshift-authentication/oauth-openshift-fdd74686d-bf2d2" Dec 10 15:28:01 crc kubenswrapper[4755]: I1210 15:28:01.410648 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7f2fcbc4-6b08-4b78-b5ec-2f26793af411-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-fdd74686d-bf2d2\" (UID: \"7f2fcbc4-6b08-4b78-b5ec-2f26793af411\") " pod="openshift-authentication/oauth-openshift-fdd74686d-bf2d2" Dec 10 15:28:01 crc kubenswrapper[4755]: I1210 15:28:01.410670 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7f2fcbc4-6b08-4b78-b5ec-2f26793af411-v4-0-config-system-router-certs\") pod \"oauth-openshift-fdd74686d-bf2d2\" (UID: \"7f2fcbc4-6b08-4b78-b5ec-2f26793af411\") " pod="openshift-authentication/oauth-openshift-fdd74686d-bf2d2" Dec 10 15:28:01 crc kubenswrapper[4755]: I1210 15:28:01.410663 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7f2fcbc4-6b08-4b78-b5ec-2f26793af411-audit-dir\") pod \"oauth-openshift-fdd74686d-bf2d2\" (UID: \"7f2fcbc4-6b08-4b78-b5ec-2f26793af411\") " pod="openshift-authentication/oauth-openshift-fdd74686d-bf2d2" Dec 10 15:28:01 crc kubenswrapper[4755]: I1210 15:28:01.410691 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7f2fcbc4-6b08-4b78-b5ec-2f26793af411-v4-0-config-system-cliconfig\") pod \"oauth-openshift-fdd74686d-bf2d2\" (UID: \"7f2fcbc4-6b08-4b78-b5ec-2f26793af411\") " pod="openshift-authentication/oauth-openshift-fdd74686d-bf2d2" Dec 10 15:28:01 crc kubenswrapper[4755]: I1210 15:28:01.410783 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7f2fcbc4-6b08-4b78-b5ec-2f26793af411-v4-0-config-user-template-login\") pod \"oauth-openshift-fdd74686d-bf2d2\" (UID: \"7f2fcbc4-6b08-4b78-b5ec-2f26793af411\") " pod="openshift-authentication/oauth-openshift-fdd74686d-bf2d2" Dec 10 15:28:01 crc kubenswrapper[4755]: I1210 15:28:01.410817 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7f2fcbc4-6b08-4b78-b5ec-2f26793af411-v4-0-config-system-service-ca\") pod \"oauth-openshift-fdd74686d-bf2d2\" (UID: \"7f2fcbc4-6b08-4b78-b5ec-2f26793af411\") " pod="openshift-authentication/oauth-openshift-fdd74686d-bf2d2" Dec 10 15:28:01 crc kubenswrapper[4755]: I1210 15:28:01.410862 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wn5x\" (UniqueName: \"kubernetes.io/projected/7f2fcbc4-6b08-4b78-b5ec-2f26793af411-kube-api-access-2wn5x\") pod \"oauth-openshift-fdd74686d-bf2d2\" (UID: \"7f2fcbc4-6b08-4b78-b5ec-2f26793af411\") " pod="openshift-authentication/oauth-openshift-fdd74686d-bf2d2" Dec 10 15:28:01 crc kubenswrapper[4755]: I1210 15:28:01.410890 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7f2fcbc4-6b08-4b78-b5ec-2f26793af411-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-fdd74686d-bf2d2\" (UID: \"7f2fcbc4-6b08-4b78-b5ec-2f26793af411\") " pod="openshift-authentication/oauth-openshift-fdd74686d-bf2d2" Dec 10 
15:28:01 crc kubenswrapper[4755]: I1210 15:28:01.411225 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7f2fcbc4-6b08-4b78-b5ec-2f26793af411-v4-0-config-system-cliconfig\") pod \"oauth-openshift-fdd74686d-bf2d2\" (UID: \"7f2fcbc4-6b08-4b78-b5ec-2f26793af411\") " pod="openshift-authentication/oauth-openshift-fdd74686d-bf2d2" Dec 10 15:28:01 crc kubenswrapper[4755]: I1210 15:28:01.411236 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7f2fcbc4-6b08-4b78-b5ec-2f26793af411-audit-policies\") pod \"oauth-openshift-fdd74686d-bf2d2\" (UID: \"7f2fcbc4-6b08-4b78-b5ec-2f26793af411\") " pod="openshift-authentication/oauth-openshift-fdd74686d-bf2d2" Dec 10 15:28:01 crc kubenswrapper[4755]: I1210 15:28:01.411980 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7f2fcbc4-6b08-4b78-b5ec-2f26793af411-v4-0-config-system-service-ca\") pod \"oauth-openshift-fdd74686d-bf2d2\" (UID: \"7f2fcbc4-6b08-4b78-b5ec-2f26793af411\") " pod="openshift-authentication/oauth-openshift-fdd74686d-bf2d2" Dec 10 15:28:01 crc kubenswrapper[4755]: I1210 15:28:01.413453 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f2fcbc4-6b08-4b78-b5ec-2f26793af411-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-fdd74686d-bf2d2\" (UID: \"7f2fcbc4-6b08-4b78-b5ec-2f26793af411\") " pod="openshift-authentication/oauth-openshift-fdd74686d-bf2d2" Dec 10 15:28:01 crc kubenswrapper[4755]: I1210 15:28:01.417278 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7f2fcbc4-6b08-4b78-b5ec-2f26793af411-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-fdd74686d-bf2d2\" (UID: \"7f2fcbc4-6b08-4b78-b5ec-2f26793af411\") " pod="openshift-authentication/oauth-openshift-fdd74686d-bf2d2" Dec 10 15:28:01 crc kubenswrapper[4755]: I1210 15:28:01.417327 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7f2fcbc4-6b08-4b78-b5ec-2f26793af411-v4-0-config-system-router-certs\") pod \"oauth-openshift-fdd74686d-bf2d2\" (UID: \"7f2fcbc4-6b08-4b78-b5ec-2f26793af411\") " pod="openshift-authentication/oauth-openshift-fdd74686d-bf2d2" Dec 10 15:28:01 crc kubenswrapper[4755]: I1210 15:28:01.417749 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7f2fcbc4-6b08-4b78-b5ec-2f26793af411-v4-0-config-system-session\") pod \"oauth-openshift-fdd74686d-bf2d2\" (UID: \"7f2fcbc4-6b08-4b78-b5ec-2f26793af411\") " pod="openshift-authentication/oauth-openshift-fdd74686d-bf2d2" Dec 10 15:28:01 crc kubenswrapper[4755]: I1210 15:28:01.418185 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7f2fcbc4-6b08-4b78-b5ec-2f26793af411-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-fdd74686d-bf2d2\" (UID: \"7f2fcbc4-6b08-4b78-b5ec-2f26793af411\") " pod="openshift-authentication/oauth-openshift-fdd74686d-bf2d2" Dec 10 15:28:01 crc kubenswrapper[4755]: I1210 15:28:01.424845 4755 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7f2fcbc4-6b08-4b78-b5ec-2f26793af411-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-fdd74686d-bf2d2\" (UID: \"7f2fcbc4-6b08-4b78-b5ec-2f26793af411\") " pod="openshift-authentication/oauth-openshift-fdd74686d-bf2d2" Dec 10 15:28:01 crc kubenswrapper[4755]: I1210 15:28:01.424855 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7f2fcbc4-6b08-4b78-b5ec-2f26793af411-v4-0-config-user-template-error\") pod \"oauth-openshift-fdd74686d-bf2d2\" (UID: \"7f2fcbc4-6b08-4b78-b5ec-2f26793af411\") " pod="openshift-authentication/oauth-openshift-fdd74686d-bf2d2" Dec 10 15:28:01 crc kubenswrapper[4755]: I1210 15:28:01.427332 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7f2fcbc4-6b08-4b78-b5ec-2f26793af411-v4-0-config-user-template-login\") pod \"oauth-openshift-fdd74686d-bf2d2\" (UID: \"7f2fcbc4-6b08-4b78-b5ec-2f26793af411\") " pod="openshift-authentication/oauth-openshift-fdd74686d-bf2d2" Dec 10 15:28:01 crc kubenswrapper[4755]: I1210 15:28:01.429841 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wn5x\" (UniqueName: \"kubernetes.io/projected/7f2fcbc4-6b08-4b78-b5ec-2f26793af411-kube-api-access-2wn5x\") pod \"oauth-openshift-fdd74686d-bf2d2\" (UID: \"7f2fcbc4-6b08-4b78-b5ec-2f26793af411\") " pod="openshift-authentication/oauth-openshift-fdd74686d-bf2d2" Dec 10 15:28:01 crc kubenswrapper[4755]: I1210 15:28:01.430957 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7f2fcbc4-6b08-4b78-b5ec-2f26793af411-v4-0-config-system-serving-cert\") pod \"oauth-openshift-fdd74686d-bf2d2\" (UID: \"7f2fcbc4-6b08-4b78-b5ec-2f26793af411\") " pod="openshift-authentication/oauth-openshift-fdd74686d-bf2d2" Dec 10 15:28:01 crc kubenswrapper[4755]: I1210 15:28:01.520312 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 10 15:28:01 crc kubenswrapper[4755]: I1210 15:28:01.569249 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-fdd74686d-bf2d2" Dec 10 15:28:01 crc kubenswrapper[4755]: I1210 15:28:01.596433 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 10 15:28:01 crc kubenswrapper[4755]: I1210 15:28:01.600164 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 10 15:28:01 crc kubenswrapper[4755]: I1210 15:28:01.693761 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 10 15:28:01 crc kubenswrapper[4755]: I1210 15:28:01.971250 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 10 15:28:02 crc kubenswrapper[4755]: I1210 15:28:02.055190 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 10 15:28:02 crc kubenswrapper[4755]: I1210 15:28:02.071175 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 10 15:28:02 crc kubenswrapper[4755]: I1210 15:28:02.074959 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 10 15:28:02 crc kubenswrapper[4755]: I1210 15:28:02.090290 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 10 15:28:02 crc kubenswrapper[4755]: I1210 15:28:02.129210 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 10 15:28:02 crc kubenswrapper[4755]: I1210 15:28:02.134363 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 10 15:28:02 crc kubenswrapper[4755]: I1210 15:28:02.142414 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 10 15:28:02 crc kubenswrapper[4755]: I1210 15:28:02.155729 4755 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 10 15:28:02 crc kubenswrapper[4755]: I1210 15:28:02.289644 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 10 15:28:02 crc kubenswrapper[4755]: I1210 15:28:02.319505 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 10 15:28:02 crc kubenswrapper[4755]: I1210 15:28:02.365601 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 10 15:28:02 crc kubenswrapper[4755]: I1210 15:28:02.371607 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 10 15:28:02 crc kubenswrapper[4755]: I1210 15:28:02.432587 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 10 15:28:02 crc kubenswrapper[4755]: I1210 15:28:02.505548 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 10 15:28:02 crc kubenswrapper[4755]: I1210 15:28:02.511707 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 10 
15:28:02 crc kubenswrapper[4755]: I1210 15:28:02.547780 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 10 15:28:02 crc kubenswrapper[4755]: I1210 15:28:02.563398 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-fdd74686d-bf2d2"] Dec 10 15:28:02 crc kubenswrapper[4755]: I1210 15:28:02.625726 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 10 15:28:02 crc kubenswrapper[4755]: I1210 15:28:02.646957 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 10 15:28:02 crc kubenswrapper[4755]: I1210 15:28:02.661303 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 10 15:28:02 crc kubenswrapper[4755]: I1210 15:28:02.720260 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 10 15:28:02 crc kubenswrapper[4755]: I1210 15:28:02.766401 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 10 15:28:02 crc kubenswrapper[4755]: I1210 15:28:02.797093 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 10 15:28:02 crc kubenswrapper[4755]: I1210 15:28:02.851210 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 10 15:28:02 crc kubenswrapper[4755]: I1210 15:28:02.857307 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 10 15:28:02 crc kubenswrapper[4755]: I1210 15:28:02.897192 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 10 15:28:03 crc kubenswrapper[4755]: I1210 15:28:03.087283 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 10 15:28:03 crc kubenswrapper[4755]: I1210 15:28:03.115609 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 10 15:28:03 crc kubenswrapper[4755]: I1210 15:28:03.211919 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 10 15:28:03 crc kubenswrapper[4755]: I1210 15:28:03.298613 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 10 15:28:03 crc kubenswrapper[4755]: I1210 15:28:03.299785 4755 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 10 15:28:03 crc kubenswrapper[4755]: I1210 15:28:03.300080 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://ad72216bc68127f2b6c40ff2777d63bef5b422a2451861dbf06d428429ed4282" gracePeriod=5 Dec 10 15:28:03 crc kubenswrapper[4755]: I1210 15:28:03.313251 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 10 15:28:03 
crc kubenswrapper[4755]: I1210 15:28:03.383708 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 10 15:28:03 crc kubenswrapper[4755]: I1210 15:28:03.384987 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 10 15:28:03 crc kubenswrapper[4755]: I1210 15:28:03.447977 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 10 15:28:03 crc kubenswrapper[4755]: I1210 15:28:03.465325 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 10 15:28:03 crc kubenswrapper[4755]: I1210 15:28:03.541884 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 10 15:28:03 crc kubenswrapper[4755]: I1210 15:28:03.554676 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 10 15:28:03 crc kubenswrapper[4755]: I1210 15:28:03.607035 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 10 15:28:03 crc kubenswrapper[4755]: I1210 15:28:03.619309 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 10 15:28:03 crc kubenswrapper[4755]: I1210 15:28:03.653615 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 10 15:28:03 crc kubenswrapper[4755]: I1210 15:28:03.690963 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 10 15:28:03 crc kubenswrapper[4755]: I1210 15:28:03.746945 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 10 15:28:03 crc kubenswrapper[4755]: I1210 15:28:03.923307 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 10 15:28:03 crc kubenswrapper[4755]: I1210 15:28:03.930125 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 10 15:28:04 crc kubenswrapper[4755]: I1210 15:28:04.098736 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 10 15:28:04 crc kubenswrapper[4755]: I1210 15:28:04.157849 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 10 15:28:04 crc kubenswrapper[4755]: I1210 15:28:04.221877 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 10 15:28:04 crc kubenswrapper[4755]: I1210 15:28:04.257415 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 10 15:28:04 crc kubenswrapper[4755]: I1210 15:28:04.271914 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 10 15:28:04 crc kubenswrapper[4755]: I1210 15:28:04.271927 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 10 15:28:04 crc kubenswrapper[4755]: 
I1210 15:28:04.337845 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 10 15:28:04 crc kubenswrapper[4755]: I1210 15:28:04.358552 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 10 15:28:04 crc kubenswrapper[4755]: I1210 15:28:04.386757 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 10 15:28:04 crc kubenswrapper[4755]: I1210 15:28:04.398511 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 10 15:28:04 crc kubenswrapper[4755]: I1210 15:28:04.553378 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 10 15:28:04 crc kubenswrapper[4755]: I1210 15:28:04.568187 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 10 15:28:04 crc kubenswrapper[4755]: E1210 15:28:04.697288 4755 log.go:32] "RunPodSandbox from runtime service failed" err=< Dec 10 15:28:04 crc kubenswrapper[4755]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-fdd74686d-bf2d2_openshift-authentication_7f2fcbc4-6b08-4b78-b5ec-2f26793af411_0(baa87ddf5d1fabd85939c93910feda8da9f61c06e2838a46fc5d60b448c3d16f): error adding pod openshift-authentication_oauth-openshift-fdd74686d-bf2d2 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"baa87ddf5d1fabd85939c93910feda8da9f61c06e2838a46fc5d60b448c3d16f" Netns:"/var/run/netns/51b3048a-632f-4c39-b652-22f57cf0ebeb" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-fdd74686d-bf2d2;K8S_POD_INFRA_CONTAINER_ID=baa87ddf5d1fabd85939c93910feda8da9f61c06e2838a46fc5d60b448c3d16f;K8S_POD_UID=7f2fcbc4-6b08-4b78-b5ec-2f26793af411" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-fdd74686d-bf2d2] networking: Multus: [openshift-authentication/oauth-openshift-fdd74686d-bf2d2/7f2fcbc4-6b08-4b78-b5ec-2f26793af411]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-fdd74686d-bf2d2 in out of cluster comm: pod "oauth-openshift-fdd74686d-bf2d2" not found Dec 10 15:28:04 crc kubenswrapper[4755]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Dec 10 15:28:04 crc kubenswrapper[4755]: > Dec 10 15:28:04 crc kubenswrapper[4755]: E1210 15:28:04.697573 4755 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Dec 10 15:28:04 crc kubenswrapper[4755]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-fdd74686d-bf2d2_openshift-authentication_7f2fcbc4-6b08-4b78-b5ec-2f26793af411_0(baa87ddf5d1fabd85939c93910feda8da9f61c06e2838a46fc5d60b448c3d16f): error adding pod openshift-authentication_oauth-openshift-fdd74686d-bf2d2 to CNI network "multus-cni-network": plugin type="multus-shim" 
name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"baa87ddf5d1fabd85939c93910feda8da9f61c06e2838a46fc5d60b448c3d16f" Netns:"/var/run/netns/51b3048a-632f-4c39-b652-22f57cf0ebeb" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-fdd74686d-bf2d2;K8S_POD_INFRA_CONTAINER_ID=baa87ddf5d1fabd85939c93910feda8da9f61c06e2838a46fc5d60b448c3d16f;K8S_POD_UID=7f2fcbc4-6b08-4b78-b5ec-2f26793af411" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-fdd74686d-bf2d2] networking: Multus: [openshift-authentication/oauth-openshift-fdd74686d-bf2d2/7f2fcbc4-6b08-4b78-b5ec-2f26793af411]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-fdd74686d-bf2d2 in out of cluster comm: pod "oauth-openshift-fdd74686d-bf2d2" not found Dec 10 15:28:04 crc kubenswrapper[4755]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Dec 10 15:28:04 crc kubenswrapper[4755]: > pod="openshift-authentication/oauth-openshift-fdd74686d-bf2d2" Dec 10 15:28:04 crc kubenswrapper[4755]: E1210 15:28:04.697624 4755 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Dec 10 15:28:04 crc kubenswrapper[4755]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-fdd74686d-bf2d2_openshift-authentication_7f2fcbc4-6b08-4b78-b5ec-2f26793af411_0(baa87ddf5d1fabd85939c93910feda8da9f61c06e2838a46fc5d60b448c3d16f): error adding pod openshift-authentication_oauth-openshift-fdd74686d-bf2d2 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"baa87ddf5d1fabd85939c93910feda8da9f61c06e2838a46fc5d60b448c3d16f" Netns:"/var/run/netns/51b3048a-632f-4c39-b652-22f57cf0ebeb" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-fdd74686d-bf2d2;K8S_POD_INFRA_CONTAINER_ID=baa87ddf5d1fabd85939c93910feda8da9f61c06e2838a46fc5d60b448c3d16f;K8S_POD_UID=7f2fcbc4-6b08-4b78-b5ec-2f26793af411" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-fdd74686d-bf2d2] networking: Multus: [openshift-authentication/oauth-openshift-fdd74686d-bf2d2/7f2fcbc4-6b08-4b78-b5ec-2f26793af411]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-fdd74686d-bf2d2 in out of cluster comm: pod "oauth-openshift-fdd74686d-bf2d2" not found Dec 10 15:28:04 crc kubenswrapper[4755]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Dec 10 15:28:04 crc kubenswrapper[4755]: > pod="openshift-authentication/oauth-openshift-fdd74686d-bf2d2" Dec 10 15:28:04 crc kubenswrapper[4755]: E1210 15:28:04.697689 4755 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-fdd74686d-bf2d2_openshift-authentication(7f2fcbc4-6b08-4b78-b5ec-2f26793af411)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-fdd74686d-bf2d2_openshift-authentication(7f2fcbc4-6b08-4b78-b5ec-2f26793af411)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-fdd74686d-bf2d2_openshift-authentication_7f2fcbc4-6b08-4b78-b5ec-2f26793af411_0(baa87ddf5d1fabd85939c93910feda8da9f61c06e2838a46fc5d60b448c3d16f): error adding pod openshift-authentication_oauth-openshift-fdd74686d-bf2d2 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"baa87ddf5d1fabd85939c93910feda8da9f61c06e2838a46fc5d60b448c3d16f\\\" Netns:\\\"/var/run/netns/51b3048a-632f-4c39-b652-22f57cf0ebeb\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-fdd74686d-bf2d2;K8S_POD_INFRA_CONTAINER_ID=baa87ddf5d1fabd85939c93910feda8da9f61c06e2838a46fc5d60b448c3d16f;K8S_POD_UID=7f2fcbc4-6b08-4b78-b5ec-2f26793af411\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-fdd74686d-bf2d2] networking: Multus: [openshift-authentication/oauth-openshift-fdd74686d-bf2d2/7f2fcbc4-6b08-4b78-b5ec-2f26793af411]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-fdd74686d-bf2d2 in out of cluster comm: pod \\\"oauth-openshift-fdd74686d-bf2d2\\\" not found\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-authentication/oauth-openshift-fdd74686d-bf2d2" podUID="7f2fcbc4-6b08-4b78-b5ec-2f26793af411" Dec 10 15:28:04 crc kubenswrapper[4755]: I1210 15:28:04.775267 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 10 15:28:04 crc kubenswrapper[4755]: I1210 15:28:04.780512 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 10 15:28:04 crc kubenswrapper[4755]: I1210 15:28:04.920233 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 10 15:28:04 crc kubenswrapper[4755]: I1210 15:28:04.984632 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-fdd74686d-bf2d2" Dec 10 15:28:04 crc kubenswrapper[4755]: I1210 15:28:04.985182 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-fdd74686d-bf2d2" Dec 10 15:28:05 crc kubenswrapper[4755]: I1210 15:28:05.001604 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 10 15:28:05 crc kubenswrapper[4755]: I1210 15:28:05.122821 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 10 15:28:05 crc kubenswrapper[4755]: I1210 15:28:05.154531 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 10 15:28:05 crc kubenswrapper[4755]: I1210 15:28:05.542035 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 10 15:28:05 crc kubenswrapper[4755]: I1210 15:28:05.889240 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 10 15:28:05 crc kubenswrapper[4755]: I1210 15:28:05.945800 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 10 15:28:06 crc kubenswrapper[4755]: I1210 15:28:06.198977 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 10 15:28:06 crc kubenswrapper[4755]: I1210 15:28:06.607447 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 10 15:28:06 crc kubenswrapper[4755]: I1210 15:28:06.640349 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 10 15:28:06 crc kubenswrapper[4755]: I1210 15:28:06.822817 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 10 15:28:06 crc kubenswrapper[4755]: I1210 15:28:06.917734 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-fdd74686d-bf2d2"] Dec 10 15:28:07 crc kubenswrapper[4755]: I1210 15:28:07.006575 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-fdd74686d-bf2d2" event={"ID":"7f2fcbc4-6b08-4b78-b5ec-2f26793af411","Type":"ContainerStarted","Data":"687e816641a9263e717cf1dc53074aff8edb85b4dff1647f5fb7c5a6aa272997"} Dec 10 15:28:08 crc kubenswrapper[4755]: I1210 15:28:08.012803 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-fdd74686d-bf2d2" event={"ID":"7f2fcbc4-6b08-4b78-b5ec-2f26793af411","Type":"ContainerStarted","Data":"ec9f59399a9a4c9922a2377be645b7f180a8439d55c7f5f7bbfbff8a777f892f"} Dec 10 15:28:08 crc kubenswrapper[4755]: I1210 15:28:08.013528 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-fdd74686d-bf2d2" Dec 10 15:28:08 crc kubenswrapper[4755]: I1210 15:28:08.014545 4755 patch_prober.go:28] interesting pod/oauth-openshift-fdd74686d-bf2d2 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": dial tcp 10.217.0.56:6443: connect: connection refused" start-of-body= Dec 10 15:28:08 crc kubenswrapper[4755]: I1210 15:28:08.014592 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-fdd74686d-bf2d2" 
podUID="7f2fcbc4-6b08-4b78-b5ec-2f26793af411" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": dial tcp 10.217.0.56:6443: connect: connection refused" Dec 10 15:28:08 crc kubenswrapper[4755]: I1210 15:28:08.038185 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-fdd74686d-bf2d2" podStartSLOduration=53.038166178 podStartE2EDuration="53.038166178s" podCreationTimestamp="2025-12-10 15:27:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:28:08.036437287 +0000 UTC m=+284.637320929" watchObservedRunningTime="2025-12-10 15:28:08.038166178 +0000 UTC m=+284.639049820" Dec 10 15:28:08 crc kubenswrapper[4755]: I1210 15:28:08.874506 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 10 15:28:08 crc kubenswrapper[4755]: I1210 15:28:08.874573 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 15:28:09 crc kubenswrapper[4755]: I1210 15:28:09.000959 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 10 15:28:09 crc kubenswrapper[4755]: I1210 15:28:09.001087 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 10 15:28:09 crc kubenswrapper[4755]: I1210 15:28:09.001111 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 10 15:28:09 crc kubenswrapper[4755]: I1210 15:28:09.001168 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 10 15:28:09 crc kubenswrapper[4755]: I1210 15:28:09.001167 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 15:28:09 crc kubenswrapper[4755]: I1210 15:28:09.001196 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 15:28:09 crc kubenswrapper[4755]: I1210 15:28:09.001195 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 10 15:28:09 crc kubenswrapper[4755]: I1210 15:28:09.001256 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 15:28:09 crc kubenswrapper[4755]: I1210 15:28:09.001292 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 15:28:09 crc kubenswrapper[4755]: I1210 15:28:09.001740 4755 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Dec 10 15:28:09 crc kubenswrapper[4755]: I1210 15:28:09.001768 4755 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 10 15:28:09 crc kubenswrapper[4755]: I1210 15:28:09.001783 4755 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Dec 10 15:28:09 crc kubenswrapper[4755]: I1210 15:28:09.002174 4755 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Dec 10 15:28:09 crc kubenswrapper[4755]: I1210 15:28:09.012626 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 15:28:09 crc kubenswrapper[4755]: I1210 15:28:09.023837 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 10 15:28:09 crc kubenswrapper[4755]: I1210 15:28:09.023889 4755 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="ad72216bc68127f2b6c40ff2777d63bef5b422a2451861dbf06d428429ed4282" exitCode=137 Dec 10 15:28:09 crc kubenswrapper[4755]: I1210 15:28:09.024127 4755 scope.go:117] "RemoveContainer" containerID="ad72216bc68127f2b6c40ff2777d63bef5b422a2451861dbf06d428429ed4282" Dec 10 15:28:09 crc kubenswrapper[4755]: I1210 15:28:09.025198 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 15:28:09 crc kubenswrapper[4755]: I1210 15:28:09.029568 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-fdd74686d-bf2d2" Dec 10 15:28:09 crc kubenswrapper[4755]: I1210 15:28:09.062139 4755 scope.go:117] "RemoveContainer" containerID="ad72216bc68127f2b6c40ff2777d63bef5b422a2451861dbf06d428429ed4282" Dec 10 15:28:09 crc kubenswrapper[4755]: E1210 15:28:09.064560 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad72216bc68127f2b6c40ff2777d63bef5b422a2451861dbf06d428429ed4282\": container with ID starting with ad72216bc68127f2b6c40ff2777d63bef5b422a2451861dbf06d428429ed4282 not found: ID does not exist" containerID="ad72216bc68127f2b6c40ff2777d63bef5b422a2451861dbf06d428429ed4282" Dec 10 15:28:09 crc kubenswrapper[4755]: I1210 15:28:09.064604 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad72216bc68127f2b6c40ff2777d63bef5b422a2451861dbf06d428429ed4282"} err="failed to get container status \"ad72216bc68127f2b6c40ff2777d63bef5b422a2451861dbf06d428429ed4282\": rpc error: code = NotFound desc = could not find container \"ad72216bc68127f2b6c40ff2777d63bef5b422a2451861dbf06d428429ed4282\": container with ID starting with ad72216bc68127f2b6c40ff2777d63bef5b422a2451861dbf06d428429ed4282 not found: ID does not exist" Dec 10 15:28:09 crc kubenswrapper[4755]: I1210 15:28:09.103414 4755 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 10 15:28:09 crc kubenswrapper[4755]: I1210 15:28:09.765051 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Dec 10 15:28:17 crc kubenswrapper[4755]: I1210 15:28:17.050523 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 10 15:28:17 crc kubenswrapper[4755]: I1210 15:28:17.387797 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d89xt"] Dec 10 15:28:17 crc kubenswrapper[4755]: I1210 15:28:17.389074 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-d89xt" podUID="8f9f6949-50c9-4d7e-b75f-b990a642d3a7" containerName="registry-server" containerID="cri-o://746e14ccb915e96b51025fe2de451e92c08e8070b20e9848a49ea027ec8945f4" gracePeriod=30 Dec 10 15:28:17 crc kubenswrapper[4755]: I1210 15:28:17.413519 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-v7hds"] Dec 10 15:28:17 crc kubenswrapper[4755]: I1210 15:28:17.413841 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-v7hds" podUID="202dcce7-fa8a-4991-bd7a-661eab1f3274" containerName="registry-server" containerID="cri-o://abd8b54041aab17d948f2a3f005f066036edfb27361f4b1131439f3b98cb6247" gracePeriod=30 Dec 10 15:28:17 crc kubenswrapper[4755]: I1210 15:28:17.423352 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5bvrp"] Dec 10 15:28:17 crc kubenswrapper[4755]: I1210 15:28:17.423615 
4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-5bvrp" podUID="64e1b92e-9035-4439-abdc-86205e68c591" containerName="marketplace-operator" containerID="cri-o://b52e2062ef030c0a58749ece4f375a26c3658e59d6ef93b0ffed076a9c53aad1" gracePeriod=30 Dec 10 15:28:17 crc kubenswrapper[4755]: I1210 15:28:17.439710 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qccqv"] Dec 10 15:28:17 crc kubenswrapper[4755]: I1210 15:28:17.440043 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qccqv" podUID="4ca3fb6e-045b-4025-9d18-eb0a13d9128a" containerName="registry-server" containerID="cri-o://083e0327882a15b636a692c5e5f4acbdfbe6fd26d3c1231f1507c34e96b81ccd" gracePeriod=30 Dec 10 15:28:17 crc kubenswrapper[4755]: I1210 15:28:17.444178 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-m45gd"] Dec 10 15:28:17 crc kubenswrapper[4755]: I1210 15:28:17.444526 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-m45gd" podUID="a08c52a2-16c8-48eb-af20-472b5202eb85" containerName="registry-server" containerID="cri-o://d755429e1151584ac1e91b47a807eb9e763166c3bfa62ff828404f0bfb613a58" gracePeriod=30 Dec 10 15:28:17 crc kubenswrapper[4755]: I1210 15:28:17.449555 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zlc5s"] Dec 10 15:28:17 crc kubenswrapper[4755]: E1210 15:28:17.449930 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 10 15:28:17 crc kubenswrapper[4755]: I1210 15:28:17.449956 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 10 15:28:17 crc kubenswrapper[4755]: I1210 15:28:17.450090 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 10 15:28:17 crc kubenswrapper[4755]: I1210 15:28:17.450644 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zlc5s" Dec 10 15:28:17 crc kubenswrapper[4755]: I1210 15:28:17.454121 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zlc5s"] Dec 10 15:28:17 crc kubenswrapper[4755]: I1210 15:28:17.614763 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zntj\" (UniqueName: \"kubernetes.io/projected/62b9cf5c-ad14-40aa-a245-027d775331d7-kube-api-access-7zntj\") pod \"marketplace-operator-79b997595-zlc5s\" (UID: \"62b9cf5c-ad14-40aa-a245-027d775331d7\") " pod="openshift-marketplace/marketplace-operator-79b997595-zlc5s" Dec 10 15:28:17 crc kubenswrapper[4755]: I1210 15:28:17.614830 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/62b9cf5c-ad14-40aa-a245-027d775331d7-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zlc5s\" (UID: \"62b9cf5c-ad14-40aa-a245-027d775331d7\") " pod="openshift-marketplace/marketplace-operator-79b997595-zlc5s" Dec 10 15:28:17 crc kubenswrapper[4755]: I1210 15:28:17.614883 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/62b9cf5c-ad14-40aa-a245-027d775331d7-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-zlc5s\" (UID: \"62b9cf5c-ad14-40aa-a245-027d775331d7\") " pod="openshift-marketplace/marketplace-operator-79b997595-zlc5s" Dec 10 15:28:17 crc kubenswrapper[4755]: I1210 15:28:17.715699 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/62b9cf5c-ad14-40aa-a245-027d775331d7-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zlc5s\" (UID: \"62b9cf5c-ad14-40aa-a245-027d775331d7\") " pod="openshift-marketplace/marketplace-operator-79b997595-zlc5s" Dec 10 15:28:17 crc kubenswrapper[4755]: I1210 15:28:17.715757 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/62b9cf5c-ad14-40aa-a245-027d775331d7-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-zlc5s\" (UID: \"62b9cf5c-ad14-40aa-a245-027d775331d7\") " pod="openshift-marketplace/marketplace-operator-79b997595-zlc5s" Dec 10 15:28:17 crc kubenswrapper[4755]: I1210 15:28:17.715813 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zntj\" (UniqueName: \"kubernetes.io/projected/62b9cf5c-ad14-40aa-a245-027d775331d7-kube-api-access-7zntj\") pod \"marketplace-operator-79b997595-zlc5s\" (UID: \"62b9cf5c-ad14-40aa-a245-027d775331d7\") " pod="openshift-marketplace/marketplace-operator-79b997595-zlc5s" Dec 10 15:28:17 crc kubenswrapper[4755]: I1210 15:28:17.718019 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/62b9cf5c-ad14-40aa-a245-027d775331d7-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-zlc5s\" (UID: \"62b9cf5c-ad14-40aa-a245-027d775331d7\") " pod="openshift-marketplace/marketplace-operator-79b997595-zlc5s" Dec 10 15:28:17 crc kubenswrapper[4755]: I1210 15:28:17.730386 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/62b9cf5c-ad14-40aa-a245-027d775331d7-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zlc5s\" (UID: \"62b9cf5c-ad14-40aa-a245-027d775331d7\") " pod="openshift-marketplace/marketplace-operator-79b997595-zlc5s" Dec 10 15:28:17 crc kubenswrapper[4755]: I1210 15:28:17.734959 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zntj\" (UniqueName: \"kubernetes.io/projected/62b9cf5c-ad14-40aa-a245-027d775331d7-kube-api-access-7zntj\") pod \"marketplace-operator-79b997595-zlc5s\" (UID: \"62b9cf5c-ad14-40aa-a245-027d775331d7\") " pod="openshift-marketplace/marketplace-operator-79b997595-zlc5s" Dec 10 15:28:17 crc kubenswrapper[4755]: I1210 15:28:17.765615 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zlc5s" Dec 10 15:28:18 crc kubenswrapper[4755]: I1210 15:28:18.167488 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zlc5s"] Dec 10 15:28:18 crc kubenswrapper[4755]: W1210 15:28:18.175624 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod62b9cf5c_ad14_40aa_a245_027d775331d7.slice/crio-14d4a1f3fbff0d119c80f916f47f3f1c243d4301836f20ed7da70b50a631edef WatchSource:0}: Error finding container 14d4a1f3fbff0d119c80f916f47f3f1c243d4301836f20ed7da70b50a631edef: Status 404 returned error can't find the container with id 14d4a1f3fbff0d119c80f916f47f3f1c243d4301836f20ed7da70b50a631edef Dec 10 15:28:18 crc kubenswrapper[4755]: I1210 15:28:18.859096 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d89xt" Dec 10 15:28:18 crc kubenswrapper[4755]: I1210 15:28:18.925324 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-5bvrp" Dec 10 15:28:18 crc kubenswrapper[4755]: I1210 15:28:18.930575 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m45gd" Dec 10 15:28:18 crc kubenswrapper[4755]: I1210 15:28:18.936167 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v7hds" Dec 10 15:28:18 crc kubenswrapper[4755]: I1210 15:28:18.939312 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qccqv" Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.034025 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ca3fb6e-045b-4025-9d18-eb0a13d9128a-utilities\") pod \"4ca3fb6e-045b-4025-9d18-eb0a13d9128a\" (UID: \"4ca3fb6e-045b-4025-9d18-eb0a13d9128a\") " Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.034077 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/64e1b92e-9035-4439-abdc-86205e68c591-marketplace-operator-metrics\") pod \"64e1b92e-9035-4439-abdc-86205e68c591\" (UID: \"64e1b92e-9035-4439-abdc-86205e68c591\") " Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.034121 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhm9k\" (UniqueName: \"kubernetes.io/projected/a08c52a2-16c8-48eb-af20-472b5202eb85-kube-api-access-fhm9k\") pod \"a08c52a2-16c8-48eb-af20-472b5202eb85\" (UID: \"a08c52a2-16c8-48eb-af20-472b5202eb85\") " Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.034144 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/202dcce7-fa8a-4991-bd7a-661eab1f3274-utilities\") pod \"202dcce7-fa8a-4991-bd7a-661eab1f3274\" (UID: \"202dcce7-fa8a-4991-bd7a-661eab1f3274\") " Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.034164 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f9f6949-50c9-4d7e-b75f-b990a642d3a7-catalog-content\") pod \"8f9f6949-50c9-4d7e-b75f-b990a642d3a7\" (UID: \"8f9f6949-50c9-4d7e-b75f-b990a642d3a7\") " Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.034180 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/64e1b92e-9035-4439-abdc-86205e68c591-marketplace-trusted-ca\") pod \"64e1b92e-9035-4439-abdc-86205e68c591\" (UID: \"64e1b92e-9035-4439-abdc-86205e68c591\") " Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.034198 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqmw2\" (UniqueName: \"kubernetes.io/projected/64e1b92e-9035-4439-abdc-86205e68c591-kube-api-access-zqmw2\") pod \"64e1b92e-9035-4439-abdc-86205e68c591\" (UID: \"64e1b92e-9035-4439-abdc-86205e68c591\") " Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.034214 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bbmb\" (UniqueName: \"kubernetes.io/projected/202dcce7-fa8a-4991-bd7a-661eab1f3274-kube-api-access-7bbmb\") pod \"202dcce7-fa8a-4991-bd7a-661eab1f3274\" (UID: \"202dcce7-fa8a-4991-bd7a-661eab1f3274\") " Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.034227 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ca3fb6e-045b-4025-9d18-eb0a13d9128a-catalog-content\") pod \"4ca3fb6e-045b-4025-9d18-eb0a13d9128a\" (UID: \"4ca3fb6e-045b-4025-9d18-eb0a13d9128a\") " Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.034855 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/a08c52a2-16c8-48eb-af20-472b5202eb85-catalog-content\") pod \"a08c52a2-16c8-48eb-af20-472b5202eb85\" (UID: \"a08c52a2-16c8-48eb-af20-472b5202eb85\") " Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.034915 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/202dcce7-fa8a-4991-bd7a-661eab1f3274-catalog-content\") pod \"202dcce7-fa8a-4991-bd7a-661eab1f3274\" (UID: \"202dcce7-fa8a-4991-bd7a-661eab1f3274\") " Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.034966 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8rg8\" (UniqueName: \"kubernetes.io/projected/4ca3fb6e-045b-4025-9d18-eb0a13d9128a-kube-api-access-g8rg8\") pod \"4ca3fb6e-045b-4025-9d18-eb0a13d9128a\" (UID: \"4ca3fb6e-045b-4025-9d18-eb0a13d9128a\") " Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.034985 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/202dcce7-fa8a-4991-bd7a-661eab1f3274-utilities" (OuterVolumeSpecName: "utilities") pod "202dcce7-fa8a-4991-bd7a-661eab1f3274" (UID: "202dcce7-fa8a-4991-bd7a-661eab1f3274"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.034995 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a08c52a2-16c8-48eb-af20-472b5202eb85-utilities\") pod \"a08c52a2-16c8-48eb-af20-472b5202eb85\" (UID: \"a08c52a2-16c8-48eb-af20-472b5202eb85\") " Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.035031 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f9f6949-50c9-4d7e-b75f-b990a642d3a7-utilities\") pod \"8f9f6949-50c9-4d7e-b75f-b990a642d3a7\" (UID: \"8f9f6949-50c9-4d7e-b75f-b990a642d3a7\") " Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.035056 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8kxhw\" (UniqueName: \"kubernetes.io/projected/8f9f6949-50c9-4d7e-b75f-b990a642d3a7-kube-api-access-8kxhw\") pod \"8f9f6949-50c9-4d7e-b75f-b990a642d3a7\" (UID: \"8f9f6949-50c9-4d7e-b75f-b990a642d3a7\") " Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.035273 4755 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/202dcce7-fa8a-4991-bd7a-661eab1f3274-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.035279 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ca3fb6e-045b-4025-9d18-eb0a13d9128a-utilities" (OuterVolumeSpecName: "utilities") pod "4ca3fb6e-045b-4025-9d18-eb0a13d9128a" (UID: "4ca3fb6e-045b-4025-9d18-eb0a13d9128a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.035612 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64e1b92e-9035-4439-abdc-86205e68c591-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "64e1b92e-9035-4439-abdc-86205e68c591" (UID: "64e1b92e-9035-4439-abdc-86205e68c591"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.036134 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a08c52a2-16c8-48eb-af20-472b5202eb85-utilities" (OuterVolumeSpecName: "utilities") pod "a08c52a2-16c8-48eb-af20-472b5202eb85" (UID: "a08c52a2-16c8-48eb-af20-472b5202eb85"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.036342 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f9f6949-50c9-4d7e-b75f-b990a642d3a7-utilities" (OuterVolumeSpecName: "utilities") pod "8f9f6949-50c9-4d7e-b75f-b990a642d3a7" (UID: "8f9f6949-50c9-4d7e-b75f-b990a642d3a7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.039719 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/202dcce7-fa8a-4991-bd7a-661eab1f3274-kube-api-access-7bbmb" (OuterVolumeSpecName: "kube-api-access-7bbmb") pod "202dcce7-fa8a-4991-bd7a-661eab1f3274" (UID: "202dcce7-fa8a-4991-bd7a-661eab1f3274"). InnerVolumeSpecName "kube-api-access-7bbmb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.039785 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ca3fb6e-045b-4025-9d18-eb0a13d9128a-kube-api-access-g8rg8" (OuterVolumeSpecName: "kube-api-access-g8rg8") pod "4ca3fb6e-045b-4025-9d18-eb0a13d9128a" (UID: "4ca3fb6e-045b-4025-9d18-eb0a13d9128a"). InnerVolumeSpecName "kube-api-access-g8rg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.039795 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f9f6949-50c9-4d7e-b75f-b990a642d3a7-kube-api-access-8kxhw" (OuterVolumeSpecName: "kube-api-access-8kxhw") pod "8f9f6949-50c9-4d7e-b75f-b990a642d3a7" (UID: "8f9f6949-50c9-4d7e-b75f-b990a642d3a7"). InnerVolumeSpecName "kube-api-access-8kxhw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.040020 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64e1b92e-9035-4439-abdc-86205e68c591-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "64e1b92e-9035-4439-abdc-86205e68c591" (UID: "64e1b92e-9035-4439-abdc-86205e68c591"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.041795 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a08c52a2-16c8-48eb-af20-472b5202eb85-kube-api-access-fhm9k" (OuterVolumeSpecName: "kube-api-access-fhm9k") pod "a08c52a2-16c8-48eb-af20-472b5202eb85" (UID: "a08c52a2-16c8-48eb-af20-472b5202eb85"). InnerVolumeSpecName "kube-api-access-fhm9k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.041931 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64e1b92e-9035-4439-abdc-86205e68c591-kube-api-access-zqmw2" (OuterVolumeSpecName: "kube-api-access-zqmw2") pod "64e1b92e-9035-4439-abdc-86205e68c591" (UID: "64e1b92e-9035-4439-abdc-86205e68c591"). InnerVolumeSpecName "kube-api-access-zqmw2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.058760 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ca3fb6e-045b-4025-9d18-eb0a13d9128a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4ca3fb6e-045b-4025-9d18-eb0a13d9128a" (UID: "4ca3fb6e-045b-4025-9d18-eb0a13d9128a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.080119 4755 generic.go:334] "Generic (PLEG): container finished" podID="8f9f6949-50c9-4d7e-b75f-b990a642d3a7" containerID="746e14ccb915e96b51025fe2de451e92c08e8070b20e9848a49ea027ec8945f4" exitCode=0 Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.080162 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d89xt" Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.080527 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d89xt" event={"ID":"8f9f6949-50c9-4d7e-b75f-b990a642d3a7","Type":"ContainerDied","Data":"746e14ccb915e96b51025fe2de451e92c08e8070b20e9848a49ea027ec8945f4"} Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.080597 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d89xt" event={"ID":"8f9f6949-50c9-4d7e-b75f-b990a642d3a7","Type":"ContainerDied","Data":"c6a19ebdba4a01eeac7f3b58cbb3deb0b3a4ac0e79127df8de696d012044f177"} Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.080623 4755 scope.go:117] "RemoveContainer" containerID="746e14ccb915e96b51025fe2de451e92c08e8070b20e9848a49ea027ec8945f4" Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.082990 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zlc5s" event={"ID":"62b9cf5c-ad14-40aa-a245-027d775331d7","Type":"ContainerStarted","Data":"2539a529792289e3ea3f4850294782523adeab2b67171cdefe5a05801965ac3a"} Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.083033 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zlc5s" event={"ID":"62b9cf5c-ad14-40aa-a245-027d775331d7","Type":"ContainerStarted","Data":"14d4a1f3fbff0d119c80f916f47f3f1c243d4301836f20ed7da70b50a631edef"} Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.084382 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-zlc5s" Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.087704 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-zlc5s" Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.091381 4755 generic.go:334] "Generic (PLEG): container finished" podID="a08c52a2-16c8-48eb-af20-472b5202eb85" containerID="d755429e1151584ac1e91b47a807eb9e763166c3bfa62ff828404f0bfb613a58" 
exitCode=0 Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.091451 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m45gd" event={"ID":"a08c52a2-16c8-48eb-af20-472b5202eb85","Type":"ContainerDied","Data":"d755429e1151584ac1e91b47a807eb9e763166c3bfa62ff828404f0bfb613a58"} Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.091492 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m45gd" event={"ID":"a08c52a2-16c8-48eb-af20-472b5202eb85","Type":"ContainerDied","Data":"5d00154c3fc33f12babc8605010ce6270b72167ba5d064a0d87d5aa7cc709195"} Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.091556 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m45gd" Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.094099 4755 generic.go:334] "Generic (PLEG): container finished" podID="4ca3fb6e-045b-4025-9d18-eb0a13d9128a" containerID="083e0327882a15b636a692c5e5f4acbdfbe6fd26d3c1231f1507c34e96b81ccd" exitCode=0 Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.094149 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qccqv" event={"ID":"4ca3fb6e-045b-4025-9d18-eb0a13d9128a","Type":"ContainerDied","Data":"083e0327882a15b636a692c5e5f4acbdfbe6fd26d3c1231f1507c34e96b81ccd"} Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.094169 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qccqv" event={"ID":"4ca3fb6e-045b-4025-9d18-eb0a13d9128a","Type":"ContainerDied","Data":"3f36b90b7d6d2baeb74ad15e39a6d299dbfe2a4c400ec5f54aed4ffb32524aba"} Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.094217 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qccqv" Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.099586 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f9f6949-50c9-4d7e-b75f-b990a642d3a7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8f9f6949-50c9-4d7e-b75f-b990a642d3a7" (UID: "8f9f6949-50c9-4d7e-b75f-b990a642d3a7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.102818 4755 scope.go:117] "RemoveContainer" containerID="5c18e1a6ffeff96fb5e01ddbd35c47a5da5a8fcaf4719544394106328bce1cfd" Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.106921 4755 generic.go:334] "Generic (PLEG): container finished" podID="202dcce7-fa8a-4991-bd7a-661eab1f3274" containerID="abd8b54041aab17d948f2a3f005f066036edfb27361f4b1131439f3b98cb6247" exitCode=0 Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.107186 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v7hds" event={"ID":"202dcce7-fa8a-4991-bd7a-661eab1f3274","Type":"ContainerDied","Data":"abd8b54041aab17d948f2a3f005f066036edfb27361f4b1131439f3b98cb6247"} Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.107216 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v7hds" event={"ID":"202dcce7-fa8a-4991-bd7a-661eab1f3274","Type":"ContainerDied","Data":"3749eff209d81e6de079722b0d580b9cfde208c997351f444cbb737be36e085a"} Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.107417 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v7hds" Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.108957 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-zlc5s" podStartSLOduration=2.108930971 podStartE2EDuration="2.108930971s" podCreationTimestamp="2025-12-10 15:28:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:28:19.100738863 +0000 UTC m=+295.701622515" watchObservedRunningTime="2025-12-10 15:28:19.108930971 +0000 UTC m=+295.709814613" Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.111064 4755 generic.go:334] "Generic (PLEG): container finished" podID="64e1b92e-9035-4439-abdc-86205e68c591" containerID="b52e2062ef030c0a58749ece4f375a26c3658e59d6ef93b0ffed076a9c53aad1" exitCode=0 Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.111122 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5bvrp" event={"ID":"64e1b92e-9035-4439-abdc-86205e68c591","Type":"ContainerDied","Data":"b52e2062ef030c0a58749ece4f375a26c3658e59d6ef93b0ffed076a9c53aad1"} Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.111159 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5bvrp" event={"ID":"64e1b92e-9035-4439-abdc-86205e68c591","Type":"ContainerDied","Data":"d716721530d9e5cf0f4402f8434d6dd7b2cf3580fd677de9f4555a2ae693d323"} Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.111248 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-5bvrp" Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.113802 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/202dcce7-fa8a-4991-bd7a-661eab1f3274-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "202dcce7-fa8a-4991-bd7a-661eab1f3274" (UID: "202dcce7-fa8a-4991-bd7a-661eab1f3274"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.120570 4755 scope.go:117] "RemoveContainer" containerID="cdc8d2847f3f497f28d58b30d269c0afb047671eb0c00c784c4213f6d286abac" Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.136436 4755 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/202dcce7-fa8a-4991-bd7a-661eab1f3274-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.136488 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8rg8\" (UniqueName: \"kubernetes.io/projected/4ca3fb6e-045b-4025-9d18-eb0a13d9128a-kube-api-access-g8rg8\") on node \"crc\" DevicePath \"\"" Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.136502 4755 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a08c52a2-16c8-48eb-af20-472b5202eb85-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.136511 4755 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f9f6949-50c9-4d7e-b75f-b990a642d3a7-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.136519 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8kxhw\" (UniqueName: \"kubernetes.io/projected/8f9f6949-50c9-4d7e-b75f-b990a642d3a7-kube-api-access-8kxhw\") on node \"crc\" DevicePath \"\"" Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.136528 4755 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ca3fb6e-045b-4025-9d18-eb0a13d9128a-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.136537 4755 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/64e1b92e-9035-4439-abdc-86205e68c591-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.136547 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhm9k\" (UniqueName: \"kubernetes.io/projected/a08c52a2-16c8-48eb-af20-472b5202eb85-kube-api-access-fhm9k\") on node \"crc\" DevicePath \"\"" Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.137045 4755 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f9f6949-50c9-4d7e-b75f-b990a642d3a7-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.137056 4755 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/64e1b92e-9035-4439-abdc-86205e68c591-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.137064 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqmw2\" (UniqueName: \"kubernetes.io/projected/64e1b92e-9035-4439-abdc-86205e68c591-kube-api-access-zqmw2\") on node \"crc\" DevicePath \"\"" Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.137073 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bbmb\" (UniqueName: \"kubernetes.io/projected/202dcce7-fa8a-4991-bd7a-661eab1f3274-kube-api-access-7bbmb\") on node \"crc\" DevicePath \"\"" Dec 10 15:28:19 
crc kubenswrapper[4755]: I1210 15:28:19.137080 4755 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ca3fb6e-045b-4025-9d18-eb0a13d9128a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.149816 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5bvrp"] Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.154271 4755 scope.go:117] "RemoveContainer" containerID="746e14ccb915e96b51025fe2de451e92c08e8070b20e9848a49ea027ec8945f4" Dec 10 15:28:19 crc kubenswrapper[4755]: E1210 15:28:19.154888 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"746e14ccb915e96b51025fe2de451e92c08e8070b20e9848a49ea027ec8945f4\": container with ID starting with 746e14ccb915e96b51025fe2de451e92c08e8070b20e9848a49ea027ec8945f4 not found: ID does not exist" containerID="746e14ccb915e96b51025fe2de451e92c08e8070b20e9848a49ea027ec8945f4" Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.154983 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"746e14ccb915e96b51025fe2de451e92c08e8070b20e9848a49ea027ec8945f4"} err="failed to get container status \"746e14ccb915e96b51025fe2de451e92c08e8070b20e9848a49ea027ec8945f4\": rpc error: code = NotFound desc = could not find container \"746e14ccb915e96b51025fe2de451e92c08e8070b20e9848a49ea027ec8945f4\": container with ID starting with 746e14ccb915e96b51025fe2de451e92c08e8070b20e9848a49ea027ec8945f4 not found: ID does not exist" Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.155039 4755 scope.go:117] "RemoveContainer" containerID="5c18e1a6ffeff96fb5e01ddbd35c47a5da5a8fcaf4719544394106328bce1cfd" Dec 10 15:28:19 crc kubenswrapper[4755]: E1210 15:28:19.155404 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c18e1a6ffeff96fb5e01ddbd35c47a5da5a8fcaf4719544394106328bce1cfd\": container with ID starting with 5c18e1a6ffeff96fb5e01ddbd35c47a5da5a8fcaf4719544394106328bce1cfd not found: ID does not exist" containerID="5c18e1a6ffeff96fb5e01ddbd35c47a5da5a8fcaf4719544394106328bce1cfd" Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.155505 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c18e1a6ffeff96fb5e01ddbd35c47a5da5a8fcaf4719544394106328bce1cfd"} err="failed to get container status \"5c18e1a6ffeff96fb5e01ddbd35c47a5da5a8fcaf4719544394106328bce1cfd\": rpc error: code = NotFound desc = could not find container \"5c18e1a6ffeff96fb5e01ddbd35c47a5da5a8fcaf4719544394106328bce1cfd\": container with ID starting with 5c18e1a6ffeff96fb5e01ddbd35c47a5da5a8fcaf4719544394106328bce1cfd not found: ID does not exist" Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.155553 4755 scope.go:117] "RemoveContainer" containerID="cdc8d2847f3f497f28d58b30d269c0afb047671eb0c00c784c4213f6d286abac" Dec 10 15:28:19 crc kubenswrapper[4755]: E1210 15:28:19.155822 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cdc8d2847f3f497f28d58b30d269c0afb047671eb0c00c784c4213f6d286abac\": container with ID starting with cdc8d2847f3f497f28d58b30d269c0afb047671eb0c00c784c4213f6d286abac not found: ID does not exist" 
containerID="cdc8d2847f3f497f28d58b30d269c0afb047671eb0c00c784c4213f6d286abac" Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.155847 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cdc8d2847f3f497f28d58b30d269c0afb047671eb0c00c784c4213f6d286abac"} err="failed to get container status \"cdc8d2847f3f497f28d58b30d269c0afb047671eb0c00c784c4213f6d286abac\": rpc error: code = NotFound desc = could not find container \"cdc8d2847f3f497f28d58b30d269c0afb047671eb0c00c784c4213f6d286abac\": container with ID starting with cdc8d2847f3f497f28d58b30d269c0afb047671eb0c00c784c4213f6d286abac not found: ID does not exist" Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.155866 4755 scope.go:117] "RemoveContainer" containerID="d755429e1151584ac1e91b47a807eb9e763166c3bfa62ff828404f0bfb613a58" Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.164414 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5bvrp"] Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.166178 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a08c52a2-16c8-48eb-af20-472b5202eb85-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a08c52a2-16c8-48eb-af20-472b5202eb85" (UID: "a08c52a2-16c8-48eb-af20-472b5202eb85"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.168930 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qccqv"] Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.171141 4755 scope.go:117] "RemoveContainer" containerID="2f59242985087d103f024953a7ff981164ac271230042a3bf31727b85bc17944" Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.174106 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qccqv"] Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.186688 4755 scope.go:117] "RemoveContainer" containerID="796e2793f85591aa9094297b128a5917ea0af9dac96cb0f1f205db8e167e7423" Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.199293 4755 scope.go:117] "RemoveContainer" containerID="d755429e1151584ac1e91b47a807eb9e763166c3bfa62ff828404f0bfb613a58" Dec 10 15:28:19 crc kubenswrapper[4755]: E1210 15:28:19.199718 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d755429e1151584ac1e91b47a807eb9e763166c3bfa62ff828404f0bfb613a58\": container with ID starting with d755429e1151584ac1e91b47a807eb9e763166c3bfa62ff828404f0bfb613a58 not found: ID does not exist" containerID="d755429e1151584ac1e91b47a807eb9e763166c3bfa62ff828404f0bfb613a58" Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.199763 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d755429e1151584ac1e91b47a807eb9e763166c3bfa62ff828404f0bfb613a58"} err="failed to get container status \"d755429e1151584ac1e91b47a807eb9e763166c3bfa62ff828404f0bfb613a58\": rpc error: code = NotFound desc = could not find container \"d755429e1151584ac1e91b47a807eb9e763166c3bfa62ff828404f0bfb613a58\": container with ID starting with d755429e1151584ac1e91b47a807eb9e763166c3bfa62ff828404f0bfb613a58 not found: ID does not exist" Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.199791 4755 scope.go:117] "RemoveContainer" 
containerID="2f59242985087d103f024953a7ff981164ac271230042a3bf31727b85bc17944" Dec 10 15:28:19 crc kubenswrapper[4755]: E1210 15:28:19.200101 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f59242985087d103f024953a7ff981164ac271230042a3bf31727b85bc17944\": container with ID starting with 2f59242985087d103f024953a7ff981164ac271230042a3bf31727b85bc17944 not found: ID does not exist" containerID="2f59242985087d103f024953a7ff981164ac271230042a3bf31727b85bc17944" Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.200127 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f59242985087d103f024953a7ff981164ac271230042a3bf31727b85bc17944"} err="failed to get container status \"2f59242985087d103f024953a7ff981164ac271230042a3bf31727b85bc17944\": rpc error: code = NotFound desc = could not find container \"2f59242985087d103f024953a7ff981164ac271230042a3bf31727b85bc17944\": container with ID starting with 2f59242985087d103f024953a7ff981164ac271230042a3bf31727b85bc17944 not found: ID does not exist" Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.200143 4755 scope.go:117] "RemoveContainer" containerID="796e2793f85591aa9094297b128a5917ea0af9dac96cb0f1f205db8e167e7423" Dec 10 15:28:19 crc kubenswrapper[4755]: E1210 15:28:19.200402 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"796e2793f85591aa9094297b128a5917ea0af9dac96cb0f1f205db8e167e7423\": container with ID starting with 796e2793f85591aa9094297b128a5917ea0af9dac96cb0f1f205db8e167e7423 not found: ID does not exist" containerID="796e2793f85591aa9094297b128a5917ea0af9dac96cb0f1f205db8e167e7423" Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.200453 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"796e2793f85591aa9094297b128a5917ea0af9dac96cb0f1f205db8e167e7423"} err="failed to get container status \"796e2793f85591aa9094297b128a5917ea0af9dac96cb0f1f205db8e167e7423\": rpc error: code = NotFound desc = could not find container \"796e2793f85591aa9094297b128a5917ea0af9dac96cb0f1f205db8e167e7423\": container with ID starting with 796e2793f85591aa9094297b128a5917ea0af9dac96cb0f1f205db8e167e7423 not found: ID does not exist" Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.200517 4755 scope.go:117] "RemoveContainer" containerID="083e0327882a15b636a692c5e5f4acbdfbe6fd26d3c1231f1507c34e96b81ccd" Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.211661 4755 scope.go:117] "RemoveContainer" containerID="e4a555d655ca374a04c3d9e5b7119292ffcb9a9c975e716028ab4bf46e04ee5d" Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.222270 4755 scope.go:117] "RemoveContainer" containerID="be9a851a2193a391a03c67d8ef8a753e6fb1180aa6156a1e001fa41e56d663b9" Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.236237 4755 scope.go:117] "RemoveContainer" containerID="083e0327882a15b636a692c5e5f4acbdfbe6fd26d3c1231f1507c34e96b81ccd" Dec 10 15:28:19 crc kubenswrapper[4755]: E1210 15:28:19.236970 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"083e0327882a15b636a692c5e5f4acbdfbe6fd26d3c1231f1507c34e96b81ccd\": container with ID starting with 083e0327882a15b636a692c5e5f4acbdfbe6fd26d3c1231f1507c34e96b81ccd not found: ID does not exist" containerID="083e0327882a15b636a692c5e5f4acbdfbe6fd26d3c1231f1507c34e96b81ccd" 
Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.237072 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"083e0327882a15b636a692c5e5f4acbdfbe6fd26d3c1231f1507c34e96b81ccd"} err="failed to get container status \"083e0327882a15b636a692c5e5f4acbdfbe6fd26d3c1231f1507c34e96b81ccd\": rpc error: code = NotFound desc = could not find container \"083e0327882a15b636a692c5e5f4acbdfbe6fd26d3c1231f1507c34e96b81ccd\": container with ID starting with 083e0327882a15b636a692c5e5f4acbdfbe6fd26d3c1231f1507c34e96b81ccd not found: ID does not exist" Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.237158 4755 scope.go:117] "RemoveContainer" containerID="e4a555d655ca374a04c3d9e5b7119292ffcb9a9c975e716028ab4bf46e04ee5d" Dec 10 15:28:19 crc kubenswrapper[4755]: E1210 15:28:19.237716 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4a555d655ca374a04c3d9e5b7119292ffcb9a9c975e716028ab4bf46e04ee5d\": container with ID starting with e4a555d655ca374a04c3d9e5b7119292ffcb9a9c975e716028ab4bf46e04ee5d not found: ID does not exist" containerID="e4a555d655ca374a04c3d9e5b7119292ffcb9a9c975e716028ab4bf46e04ee5d" Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.237766 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4a555d655ca374a04c3d9e5b7119292ffcb9a9c975e716028ab4bf46e04ee5d"} err="failed to get container status \"e4a555d655ca374a04c3d9e5b7119292ffcb9a9c975e716028ab4bf46e04ee5d\": rpc error: code = NotFound desc = could not find container \"e4a555d655ca374a04c3d9e5b7119292ffcb9a9c975e716028ab4bf46e04ee5d\": container with ID starting with e4a555d655ca374a04c3d9e5b7119292ffcb9a9c975e716028ab4bf46e04ee5d not found: ID does not exist" Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.237799 4755 scope.go:117] "RemoveContainer" containerID="be9a851a2193a391a03c67d8ef8a753e6fb1180aa6156a1e001fa41e56d663b9" Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.237929 4755 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a08c52a2-16c8-48eb-af20-472b5202eb85-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 15:28:19 crc kubenswrapper[4755]: E1210 15:28:19.238143 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be9a851a2193a391a03c67d8ef8a753e6fb1180aa6156a1e001fa41e56d663b9\": container with ID starting with be9a851a2193a391a03c67d8ef8a753e6fb1180aa6156a1e001fa41e56d663b9 not found: ID does not exist" containerID="be9a851a2193a391a03c67d8ef8a753e6fb1180aa6156a1e001fa41e56d663b9" Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.238183 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be9a851a2193a391a03c67d8ef8a753e6fb1180aa6156a1e001fa41e56d663b9"} err="failed to get container status \"be9a851a2193a391a03c67d8ef8a753e6fb1180aa6156a1e001fa41e56d663b9\": rpc error: code = NotFound desc = could not find container \"be9a851a2193a391a03c67d8ef8a753e6fb1180aa6156a1e001fa41e56d663b9\": container with ID starting with be9a851a2193a391a03c67d8ef8a753e6fb1180aa6156a1e001fa41e56d663b9 not found: ID does not exist" Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.238233 4755 scope.go:117] "RemoveContainer" containerID="abd8b54041aab17d948f2a3f005f066036edfb27361f4b1131439f3b98cb6247" Dec 10 15:28:19 crc 
kubenswrapper[4755]: I1210 15:28:19.249243 4755 scope.go:117] "RemoveContainer" containerID="010eb66a79f0de81c0e4e3a0ca0d5e96137a423e2a4d76853616d0fbe9c653ff" Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.261396 4755 scope.go:117] "RemoveContainer" containerID="6f7aca0b830b7034d624d5009377acc24ddcffbd335813350d4e9d3973d01abf" Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.271332 4755 scope.go:117] "RemoveContainer" containerID="abd8b54041aab17d948f2a3f005f066036edfb27361f4b1131439f3b98cb6247" Dec 10 15:28:19 crc kubenswrapper[4755]: E1210 15:28:19.271742 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"abd8b54041aab17d948f2a3f005f066036edfb27361f4b1131439f3b98cb6247\": container with ID starting with abd8b54041aab17d948f2a3f005f066036edfb27361f4b1131439f3b98cb6247 not found: ID does not exist" containerID="abd8b54041aab17d948f2a3f005f066036edfb27361f4b1131439f3b98cb6247" Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.271781 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abd8b54041aab17d948f2a3f005f066036edfb27361f4b1131439f3b98cb6247"} err="failed to get container status \"abd8b54041aab17d948f2a3f005f066036edfb27361f4b1131439f3b98cb6247\": rpc error: code = NotFound desc = could not find container \"abd8b54041aab17d948f2a3f005f066036edfb27361f4b1131439f3b98cb6247\": container with ID starting with abd8b54041aab17d948f2a3f005f066036edfb27361f4b1131439f3b98cb6247 not found: ID does not exist" Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.271812 4755 scope.go:117] "RemoveContainer" containerID="010eb66a79f0de81c0e4e3a0ca0d5e96137a423e2a4d76853616d0fbe9c653ff" Dec 10 15:28:19 crc kubenswrapper[4755]: E1210 15:28:19.272173 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"010eb66a79f0de81c0e4e3a0ca0d5e96137a423e2a4d76853616d0fbe9c653ff\": container with ID starting with 010eb66a79f0de81c0e4e3a0ca0d5e96137a423e2a4d76853616d0fbe9c653ff not found: ID does not exist" containerID="010eb66a79f0de81c0e4e3a0ca0d5e96137a423e2a4d76853616d0fbe9c653ff" Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.272220 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"010eb66a79f0de81c0e4e3a0ca0d5e96137a423e2a4d76853616d0fbe9c653ff"} err="failed to get container status \"010eb66a79f0de81c0e4e3a0ca0d5e96137a423e2a4d76853616d0fbe9c653ff\": rpc error: code = NotFound desc = could not find container \"010eb66a79f0de81c0e4e3a0ca0d5e96137a423e2a4d76853616d0fbe9c653ff\": container with ID starting with 010eb66a79f0de81c0e4e3a0ca0d5e96137a423e2a4d76853616d0fbe9c653ff not found: ID does not exist" Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.272251 4755 scope.go:117] "RemoveContainer" containerID="6f7aca0b830b7034d624d5009377acc24ddcffbd335813350d4e9d3973d01abf" Dec 10 15:28:19 crc kubenswrapper[4755]: E1210 15:28:19.272526 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f7aca0b830b7034d624d5009377acc24ddcffbd335813350d4e9d3973d01abf\": container with ID starting with 6f7aca0b830b7034d624d5009377acc24ddcffbd335813350d4e9d3973d01abf not found: ID does not exist" containerID="6f7aca0b830b7034d624d5009377acc24ddcffbd335813350d4e9d3973d01abf" Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.272561 4755 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f7aca0b830b7034d624d5009377acc24ddcffbd335813350d4e9d3973d01abf"} err="failed to get container status \"6f7aca0b830b7034d624d5009377acc24ddcffbd335813350d4e9d3973d01abf\": rpc error: code = NotFound desc = could not find container \"6f7aca0b830b7034d624d5009377acc24ddcffbd335813350d4e9d3973d01abf\": container with ID starting with 6f7aca0b830b7034d624d5009377acc24ddcffbd335813350d4e9d3973d01abf not found: ID does not exist" Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.272581 4755 scope.go:117] "RemoveContainer" containerID="b52e2062ef030c0a58749ece4f375a26c3658e59d6ef93b0ffed076a9c53aad1" Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.282901 4755 scope.go:117] "RemoveContainer" containerID="b52e2062ef030c0a58749ece4f375a26c3658e59d6ef93b0ffed076a9c53aad1" Dec 10 15:28:19 crc kubenswrapper[4755]: E1210 15:28:19.283153 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b52e2062ef030c0a58749ece4f375a26c3658e59d6ef93b0ffed076a9c53aad1\": container with ID starting with b52e2062ef030c0a58749ece4f375a26c3658e59d6ef93b0ffed076a9c53aad1 not found: ID does not exist" containerID="b52e2062ef030c0a58749ece4f375a26c3658e59d6ef93b0ffed076a9c53aad1" Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.283182 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b52e2062ef030c0a58749ece4f375a26c3658e59d6ef93b0ffed076a9c53aad1"} err="failed to get container status \"b52e2062ef030c0a58749ece4f375a26c3658e59d6ef93b0ffed076a9c53aad1\": rpc error: code = NotFound desc = could not find container \"b52e2062ef030c0a58749ece4f375a26c3658e59d6ef93b0ffed076a9c53aad1\": container with ID starting with b52e2062ef030c0a58749ece4f375a26c3658e59d6ef93b0ffed076a9c53aad1 not found: ID does not exist" Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.422774 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d89xt"] Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.432728 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-d89xt"] Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.444630 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-m45gd"] Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.450119 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-m45gd"] Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.454156 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-v7hds"] Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.458241 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-v7hds"] Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.769712 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="202dcce7-fa8a-4991-bd7a-661eab1f3274" path="/var/lib/kubelet/pods/202dcce7-fa8a-4991-bd7a-661eab1f3274/volumes" Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.771304 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ca3fb6e-045b-4025-9d18-eb0a13d9128a" path="/var/lib/kubelet/pods/4ca3fb6e-045b-4025-9d18-eb0a13d9128a/volumes" Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.772644 4755 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="64e1b92e-9035-4439-abdc-86205e68c591" path="/var/lib/kubelet/pods/64e1b92e-9035-4439-abdc-86205e68c591/volumes" Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.774156 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f9f6949-50c9-4d7e-b75f-b990a642d3a7" path="/var/lib/kubelet/pods/8f9f6949-50c9-4d7e-b75f-b990a642d3a7/volumes" Dec 10 15:28:19 crc kubenswrapper[4755]: I1210 15:28:19.774993 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a08c52a2-16c8-48eb-af20-472b5202eb85" path="/var/lib/kubelet/pods/a08c52a2-16c8-48eb-af20-472b5202eb85/volumes" Dec 10 15:28:22 crc kubenswrapper[4755]: I1210 15:28:22.808785 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 10 15:28:23 crc kubenswrapper[4755]: I1210 15:28:23.329586 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 10 15:28:23 crc kubenswrapper[4755]: I1210 15:28:23.645981 4755 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Dec 10 15:28:24 crc kubenswrapper[4755]: I1210 15:28:24.557392 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 10 15:28:28 crc kubenswrapper[4755]: I1210 15:28:28.779811 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 10 15:28:31 crc kubenswrapper[4755]: I1210 15:28:31.536864 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 10 15:28:31 crc kubenswrapper[4755]: I1210 15:28:31.564780 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 10 15:28:33 crc kubenswrapper[4755]: I1210 15:28:33.080755 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 10 15:28:35 crc kubenswrapper[4755]: I1210 15:28:35.034041 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 10 15:28:36 crc kubenswrapper[4755]: I1210 15:28:36.025339 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 10 15:28:43 crc kubenswrapper[4755]: I1210 15:28:43.497301 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 10 15:28:45 crc kubenswrapper[4755]: I1210 15:28:45.563901 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 10 15:28:57 crc kubenswrapper[4755]: I1210 15:28:57.679306 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-m9ntd"] Dec 10 15:28:57 crc kubenswrapper[4755]: I1210 15:28:57.679926 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-m9ntd" podUID="4e98c44e-5a60-49a0-9186-2367509dda97" containerName="controller-manager" containerID="cri-o://a5f87a58a8fe5f3e98addc6e46e77d39a101ec826cfb7531df98eb4a0ce6d9a4" gracePeriod=30 Dec 10 15:28:57 crc kubenswrapper[4755]: I1210 15:28:57.787026 
4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4bgvx"] Dec 10 15:28:57 crc kubenswrapper[4755]: I1210 15:28:57.787271 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4bgvx" podUID="8c1052c9-fa99-4f24-8fff-923dc489c08d" containerName="route-controller-manager" containerID="cri-o://66de0d05dd090c473da341567845c0e3f2768051c7b48d2558f5c9727dba8053" gracePeriod=30 Dec 10 15:28:58 crc kubenswrapper[4755]: I1210 15:28:58.146922 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4bgvx" Dec 10 15:28:58 crc kubenswrapper[4755]: I1210 15:28:58.318544 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c1052c9-fa99-4f24-8fff-923dc489c08d-serving-cert\") pod \"8c1052c9-fa99-4f24-8fff-923dc489c08d\" (UID: \"8c1052c9-fa99-4f24-8fff-923dc489c08d\") " Dec 10 15:28:58 crc kubenswrapper[4755]: I1210 15:28:58.318972 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8h5xr\" (UniqueName: \"kubernetes.io/projected/8c1052c9-fa99-4f24-8fff-923dc489c08d-kube-api-access-8h5xr\") pod \"8c1052c9-fa99-4f24-8fff-923dc489c08d\" (UID: \"8c1052c9-fa99-4f24-8fff-923dc489c08d\") " Dec 10 15:28:58 crc kubenswrapper[4755]: I1210 15:28:58.319044 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8c1052c9-fa99-4f24-8fff-923dc489c08d-client-ca\") pod \"8c1052c9-fa99-4f24-8fff-923dc489c08d\" (UID: \"8c1052c9-fa99-4f24-8fff-923dc489c08d\") " Dec 10 15:28:58 crc kubenswrapper[4755]: I1210 15:28:58.319118 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c1052c9-fa99-4f24-8fff-923dc489c08d-config\") pod \"8c1052c9-fa99-4f24-8fff-923dc489c08d\" (UID: \"8c1052c9-fa99-4f24-8fff-923dc489c08d\") " Dec 10 15:28:58 crc kubenswrapper[4755]: I1210 15:28:58.319943 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c1052c9-fa99-4f24-8fff-923dc489c08d-client-ca" (OuterVolumeSpecName: "client-ca") pod "8c1052c9-fa99-4f24-8fff-923dc489c08d" (UID: "8c1052c9-fa99-4f24-8fff-923dc489c08d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:28:58 crc kubenswrapper[4755]: I1210 15:28:58.320062 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c1052c9-fa99-4f24-8fff-923dc489c08d-config" (OuterVolumeSpecName: "config") pod "8c1052c9-fa99-4f24-8fff-923dc489c08d" (UID: "8c1052c9-fa99-4f24-8fff-923dc489c08d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:28:58 crc kubenswrapper[4755]: I1210 15:28:58.324111 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c1052c9-fa99-4f24-8fff-923dc489c08d-kube-api-access-8h5xr" (OuterVolumeSpecName: "kube-api-access-8h5xr") pod "8c1052c9-fa99-4f24-8fff-923dc489c08d" (UID: "8c1052c9-fa99-4f24-8fff-923dc489c08d"). InnerVolumeSpecName "kube-api-access-8h5xr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:28:58 crc kubenswrapper[4755]: I1210 15:28:58.326235 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c1052c9-fa99-4f24-8fff-923dc489c08d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8c1052c9-fa99-4f24-8fff-923dc489c08d" (UID: "8c1052c9-fa99-4f24-8fff-923dc489c08d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:28:58 crc kubenswrapper[4755]: I1210 15:28:58.346451 4755 generic.go:334] "Generic (PLEG): container finished" podID="8c1052c9-fa99-4f24-8fff-923dc489c08d" containerID="66de0d05dd090c473da341567845c0e3f2768051c7b48d2558f5c9727dba8053" exitCode=0 Dec 10 15:28:58 crc kubenswrapper[4755]: I1210 15:28:58.346525 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4bgvx" event={"ID":"8c1052c9-fa99-4f24-8fff-923dc489c08d","Type":"ContainerDied","Data":"66de0d05dd090c473da341567845c0e3f2768051c7b48d2558f5c9727dba8053"} Dec 10 15:28:58 crc kubenswrapper[4755]: I1210 15:28:58.346557 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4bgvx" Dec 10 15:28:58 crc kubenswrapper[4755]: I1210 15:28:58.346593 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4bgvx" event={"ID":"8c1052c9-fa99-4f24-8fff-923dc489c08d","Type":"ContainerDied","Data":"14cff523b2edbd32269f33544f5bc8f5b8bc75cf641ecdf026bddaf0610bd47a"} Dec 10 15:28:58 crc kubenswrapper[4755]: I1210 15:28:58.346618 4755 scope.go:117] "RemoveContainer" containerID="66de0d05dd090c473da341567845c0e3f2768051c7b48d2558f5c9727dba8053" Dec 10 15:28:58 crc kubenswrapper[4755]: I1210 15:28:58.347733 4755 generic.go:334] "Generic (PLEG): container finished" podID="4e98c44e-5a60-49a0-9186-2367509dda97" containerID="a5f87a58a8fe5f3e98addc6e46e77d39a101ec826cfb7531df98eb4a0ce6d9a4" exitCode=0 Dec 10 15:28:58 crc kubenswrapper[4755]: I1210 15:28:58.347797 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-m9ntd" event={"ID":"4e98c44e-5a60-49a0-9186-2367509dda97","Type":"ContainerDied","Data":"a5f87a58a8fe5f3e98addc6e46e77d39a101ec826cfb7531df98eb4a0ce6d9a4"} Dec 10 15:28:58 crc kubenswrapper[4755]: I1210 15:28:58.366093 4755 scope.go:117] "RemoveContainer" containerID="66de0d05dd090c473da341567845c0e3f2768051c7b48d2558f5c9727dba8053" Dec 10 15:28:58 crc kubenswrapper[4755]: E1210 15:28:58.366533 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66de0d05dd090c473da341567845c0e3f2768051c7b48d2558f5c9727dba8053\": container with ID starting with 66de0d05dd090c473da341567845c0e3f2768051c7b48d2558f5c9727dba8053 not found: ID does not exist" containerID="66de0d05dd090c473da341567845c0e3f2768051c7b48d2558f5c9727dba8053" Dec 10 15:28:58 crc kubenswrapper[4755]: I1210 15:28:58.366599 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66de0d05dd090c473da341567845c0e3f2768051c7b48d2558f5c9727dba8053"} err="failed to get container status \"66de0d05dd090c473da341567845c0e3f2768051c7b48d2558f5c9727dba8053\": rpc error: code = NotFound desc = could not find container \"66de0d05dd090c473da341567845c0e3f2768051c7b48d2558f5c9727dba8053\": 
container with ID starting with 66de0d05dd090c473da341567845c0e3f2768051c7b48d2558f5c9727dba8053 not found: ID does not exist" Dec 10 15:28:58 crc kubenswrapper[4755]: I1210 15:28:58.381768 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4bgvx"] Dec 10 15:28:58 crc kubenswrapper[4755]: I1210 15:28:58.386045 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4bgvx"] Dec 10 15:28:58 crc kubenswrapper[4755]: I1210 15:28:58.421069 4755 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c1052c9-fa99-4f24-8fff-923dc489c08d-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:28:58 crc kubenswrapper[4755]: I1210 15:28:58.421115 4755 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c1052c9-fa99-4f24-8fff-923dc489c08d-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:28:58 crc kubenswrapper[4755]: I1210 15:28:58.421127 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8h5xr\" (UniqueName: \"kubernetes.io/projected/8c1052c9-fa99-4f24-8fff-923dc489c08d-kube-api-access-8h5xr\") on node \"crc\" DevicePath \"\"" Dec 10 15:28:58 crc kubenswrapper[4755]: I1210 15:28:58.421139 4755 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8c1052c9-fa99-4f24-8fff-923dc489c08d-client-ca\") on node \"crc\" DevicePath \"\"" Dec 10 15:28:58 crc kubenswrapper[4755]: I1210 15:28:58.489316 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-m9ntd" Dec 10 15:28:58 crc kubenswrapper[4755]: I1210 15:28:58.623039 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwp84\" (UniqueName: \"kubernetes.io/projected/4e98c44e-5a60-49a0-9186-2367509dda97-kube-api-access-rwp84\") pod \"4e98c44e-5a60-49a0-9186-2367509dda97\" (UID: \"4e98c44e-5a60-49a0-9186-2367509dda97\") " Dec 10 15:28:58 crc kubenswrapper[4755]: I1210 15:28:58.623104 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e98c44e-5a60-49a0-9186-2367509dda97-serving-cert\") pod \"4e98c44e-5a60-49a0-9186-2367509dda97\" (UID: \"4e98c44e-5a60-49a0-9186-2367509dda97\") " Dec 10 15:28:58 crc kubenswrapper[4755]: I1210 15:28:58.623172 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4e98c44e-5a60-49a0-9186-2367509dda97-proxy-ca-bundles\") pod \"4e98c44e-5a60-49a0-9186-2367509dda97\" (UID: \"4e98c44e-5a60-49a0-9186-2367509dda97\") " Dec 10 15:28:58 crc kubenswrapper[4755]: I1210 15:28:58.623213 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e98c44e-5a60-49a0-9186-2367509dda97-config\") pod \"4e98c44e-5a60-49a0-9186-2367509dda97\" (UID: \"4e98c44e-5a60-49a0-9186-2367509dda97\") " Dec 10 15:28:58 crc kubenswrapper[4755]: I1210 15:28:58.623246 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4e98c44e-5a60-49a0-9186-2367509dda97-client-ca\") pod \"4e98c44e-5a60-49a0-9186-2367509dda97\" (UID: \"4e98c44e-5a60-49a0-9186-2367509dda97\") " 
Dec 10 15:28:58 crc kubenswrapper[4755]: I1210 15:28:58.624071 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e98c44e-5a60-49a0-9186-2367509dda97-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "4e98c44e-5a60-49a0-9186-2367509dda97" (UID: "4e98c44e-5a60-49a0-9186-2367509dda97"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:28:58 crc kubenswrapper[4755]: I1210 15:28:58.624105 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e98c44e-5a60-49a0-9186-2367509dda97-config" (OuterVolumeSpecName: "config") pod "4e98c44e-5a60-49a0-9186-2367509dda97" (UID: "4e98c44e-5a60-49a0-9186-2367509dda97"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:28:58 crc kubenswrapper[4755]: I1210 15:28:58.624218 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e98c44e-5a60-49a0-9186-2367509dda97-client-ca" (OuterVolumeSpecName: "client-ca") pod "4e98c44e-5a60-49a0-9186-2367509dda97" (UID: "4e98c44e-5a60-49a0-9186-2367509dda97"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:28:58 crc kubenswrapper[4755]: I1210 15:28:58.626704 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e98c44e-5a60-49a0-9186-2367509dda97-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4e98c44e-5a60-49a0-9186-2367509dda97" (UID: "4e98c44e-5a60-49a0-9186-2367509dda97"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:28:58 crc kubenswrapper[4755]: I1210 15:28:58.626722 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e98c44e-5a60-49a0-9186-2367509dda97-kube-api-access-rwp84" (OuterVolumeSpecName: "kube-api-access-rwp84") pod "4e98c44e-5a60-49a0-9186-2367509dda97" (UID: "4e98c44e-5a60-49a0-9186-2367509dda97"). InnerVolumeSpecName "kube-api-access-rwp84". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:28:58 crc kubenswrapper[4755]: I1210 15:28:58.724882 4755 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e98c44e-5a60-49a0-9186-2367509dda97-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:28:58 crc kubenswrapper[4755]: I1210 15:28:58.724921 4755 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4e98c44e-5a60-49a0-9186-2367509dda97-client-ca\") on node \"crc\" DevicePath \"\"" Dec 10 15:28:58 crc kubenswrapper[4755]: I1210 15:28:58.724931 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwp84\" (UniqueName: \"kubernetes.io/projected/4e98c44e-5a60-49a0-9186-2367509dda97-kube-api-access-rwp84\") on node \"crc\" DevicePath \"\"" Dec 10 15:28:58 crc kubenswrapper[4755]: I1210 15:28:58.724943 4755 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e98c44e-5a60-49a0-9186-2367509dda97-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:28:58 crc kubenswrapper[4755]: I1210 15:28:58.724953 4755 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4e98c44e-5a60-49a0-9186-2367509dda97-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 10 15:28:59 crc kubenswrapper[4755]: I1210 15:28:59.182556 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-578f67dc67-vl7mg"] Dec 10 15:28:59 crc kubenswrapper[4755]: E1210 15:28:59.182834 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a08c52a2-16c8-48eb-af20-472b5202eb85" containerName="registry-server" Dec 10 15:28:59 crc kubenswrapper[4755]: I1210 15:28:59.182850 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="a08c52a2-16c8-48eb-af20-472b5202eb85" containerName="registry-server" Dec 10 15:28:59 crc kubenswrapper[4755]: E1210 15:28:59.182862 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f9f6949-50c9-4d7e-b75f-b990a642d3a7" containerName="extract-content" Dec 10 15:28:59 crc kubenswrapper[4755]: I1210 15:28:59.182872 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f9f6949-50c9-4d7e-b75f-b990a642d3a7" containerName="extract-content" Dec 10 15:28:59 crc kubenswrapper[4755]: E1210 15:28:59.182886 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ca3fb6e-045b-4025-9d18-eb0a13d9128a" containerName="registry-server" Dec 10 15:28:59 crc kubenswrapper[4755]: I1210 15:28:59.182894 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ca3fb6e-045b-4025-9d18-eb0a13d9128a" containerName="registry-server" Dec 10 15:28:59 crc kubenswrapper[4755]: E1210 15:28:59.182906 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e98c44e-5a60-49a0-9186-2367509dda97" containerName="controller-manager" Dec 10 15:28:59 crc kubenswrapper[4755]: I1210 15:28:59.182916 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e98c44e-5a60-49a0-9186-2367509dda97" containerName="controller-manager" Dec 10 15:28:59 crc kubenswrapper[4755]: E1210 15:28:59.182925 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a08c52a2-16c8-48eb-af20-472b5202eb85" containerName="extract-content" Dec 10 15:28:59 crc kubenswrapper[4755]: I1210 15:28:59.182933 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="a08c52a2-16c8-48eb-af20-472b5202eb85" 
containerName="extract-content" Dec 10 15:28:59 crc kubenswrapper[4755]: E1210 15:28:59.182947 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a08c52a2-16c8-48eb-af20-472b5202eb85" containerName="extract-utilities" Dec 10 15:28:59 crc kubenswrapper[4755]: I1210 15:28:59.182955 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="a08c52a2-16c8-48eb-af20-472b5202eb85" containerName="extract-utilities" Dec 10 15:28:59 crc kubenswrapper[4755]: E1210 15:28:59.182965 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="202dcce7-fa8a-4991-bd7a-661eab1f3274" containerName="extract-utilities" Dec 10 15:28:59 crc kubenswrapper[4755]: I1210 15:28:59.182973 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="202dcce7-fa8a-4991-bd7a-661eab1f3274" containerName="extract-utilities" Dec 10 15:28:59 crc kubenswrapper[4755]: E1210 15:28:59.182983 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="202dcce7-fa8a-4991-bd7a-661eab1f3274" containerName="extract-content" Dec 10 15:28:59 crc kubenswrapper[4755]: I1210 15:28:59.182992 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="202dcce7-fa8a-4991-bd7a-661eab1f3274" containerName="extract-content" Dec 10 15:28:59 crc kubenswrapper[4755]: E1210 15:28:59.183003 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ca3fb6e-045b-4025-9d18-eb0a13d9128a" containerName="extract-content" Dec 10 15:28:59 crc kubenswrapper[4755]: I1210 15:28:59.183012 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ca3fb6e-045b-4025-9d18-eb0a13d9128a" containerName="extract-content" Dec 10 15:28:59 crc kubenswrapper[4755]: E1210 15:28:59.183024 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ca3fb6e-045b-4025-9d18-eb0a13d9128a" containerName="extract-utilities" Dec 10 15:28:59 crc kubenswrapper[4755]: I1210 15:28:59.183032 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ca3fb6e-045b-4025-9d18-eb0a13d9128a" containerName="extract-utilities" Dec 10 15:28:59 crc kubenswrapper[4755]: E1210 15:28:59.183045 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64e1b92e-9035-4439-abdc-86205e68c591" containerName="marketplace-operator" Dec 10 15:28:59 crc kubenswrapper[4755]: I1210 15:28:59.183055 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="64e1b92e-9035-4439-abdc-86205e68c591" containerName="marketplace-operator" Dec 10 15:28:59 crc kubenswrapper[4755]: E1210 15:28:59.183072 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c1052c9-fa99-4f24-8fff-923dc489c08d" containerName="route-controller-manager" Dec 10 15:28:59 crc kubenswrapper[4755]: I1210 15:28:59.183082 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c1052c9-fa99-4f24-8fff-923dc489c08d" containerName="route-controller-manager" Dec 10 15:28:59 crc kubenswrapper[4755]: E1210 15:28:59.183095 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f9f6949-50c9-4d7e-b75f-b990a642d3a7" containerName="extract-utilities" Dec 10 15:28:59 crc kubenswrapper[4755]: I1210 15:28:59.183105 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f9f6949-50c9-4d7e-b75f-b990a642d3a7" containerName="extract-utilities" Dec 10 15:28:59 crc kubenswrapper[4755]: E1210 15:28:59.183120 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f9f6949-50c9-4d7e-b75f-b990a642d3a7" containerName="registry-server" Dec 10 15:28:59 crc kubenswrapper[4755]: I1210 15:28:59.183133 4755 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="8f9f6949-50c9-4d7e-b75f-b990a642d3a7" containerName="registry-server" Dec 10 15:28:59 crc kubenswrapper[4755]: E1210 15:28:59.183146 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="202dcce7-fa8a-4991-bd7a-661eab1f3274" containerName="registry-server" Dec 10 15:28:59 crc kubenswrapper[4755]: I1210 15:28:59.183156 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="202dcce7-fa8a-4991-bd7a-661eab1f3274" containerName="registry-server" Dec 10 15:28:59 crc kubenswrapper[4755]: I1210 15:28:59.183300 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e98c44e-5a60-49a0-9186-2367509dda97" containerName="controller-manager" Dec 10 15:28:59 crc kubenswrapper[4755]: I1210 15:28:59.183313 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="a08c52a2-16c8-48eb-af20-472b5202eb85" containerName="registry-server" Dec 10 15:28:59 crc kubenswrapper[4755]: I1210 15:28:59.183329 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="202dcce7-fa8a-4991-bd7a-661eab1f3274" containerName="registry-server" Dec 10 15:28:59 crc kubenswrapper[4755]: I1210 15:28:59.183341 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c1052c9-fa99-4f24-8fff-923dc489c08d" containerName="route-controller-manager" Dec 10 15:28:59 crc kubenswrapper[4755]: I1210 15:28:59.183353 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ca3fb6e-045b-4025-9d18-eb0a13d9128a" containerName="registry-server" Dec 10 15:28:59 crc kubenswrapper[4755]: I1210 15:28:59.183366 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="64e1b92e-9035-4439-abdc-86205e68c591" containerName="marketplace-operator" Dec 10 15:28:59 crc kubenswrapper[4755]: I1210 15:28:59.183380 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f9f6949-50c9-4d7e-b75f-b990a642d3a7" containerName="registry-server" Dec 10 15:28:59 crc kubenswrapper[4755]: I1210 15:28:59.183845 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-578f67dc67-vl7mg" Dec 10 15:28:59 crc kubenswrapper[4755]: I1210 15:28:59.185360 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 10 15:28:59 crc kubenswrapper[4755]: I1210 15:28:59.186680 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 10 15:28:59 crc kubenswrapper[4755]: I1210 15:28:59.187110 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 10 15:28:59 crc kubenswrapper[4755]: I1210 15:28:59.187211 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 10 15:28:59 crc kubenswrapper[4755]: I1210 15:28:59.187119 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 10 15:28:59 crc kubenswrapper[4755]: I1210 15:28:59.189805 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 10 15:28:59 crc kubenswrapper[4755]: I1210 15:28:59.205569 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-865488587c-v2mj8"] Dec 10 15:28:59 crc kubenswrapper[4755]: I1210 15:28:59.207257 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-865488587c-v2mj8" Dec 10 15:28:59 crc kubenswrapper[4755]: I1210 15:28:59.212935 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-578f67dc67-vl7mg"] Dec 10 15:28:59 crc kubenswrapper[4755]: I1210 15:28:59.218279 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-865488587c-v2mj8"] Dec 10 15:28:59 crc kubenswrapper[4755]: I1210 15:28:59.331893 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f811320b-4864-4925-9528-6b9cb2606c20-proxy-ca-bundles\") pod \"controller-manager-865488587c-v2mj8\" (UID: \"f811320b-4864-4925-9528-6b9cb2606c20\") " pod="openshift-controller-manager/controller-manager-865488587c-v2mj8" Dec 10 15:28:59 crc kubenswrapper[4755]: I1210 15:28:59.331987 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1905d225-779d-4078-9acd-801cfc176622-serving-cert\") pod \"route-controller-manager-578f67dc67-vl7mg\" (UID: \"1905d225-779d-4078-9acd-801cfc176622\") " pod="openshift-route-controller-manager/route-controller-manager-578f67dc67-vl7mg" Dec 10 15:28:59 crc kubenswrapper[4755]: I1210 15:28:59.332042 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6z775\" (UniqueName: \"kubernetes.io/projected/f811320b-4864-4925-9528-6b9cb2606c20-kube-api-access-6z775\") pod \"controller-manager-865488587c-v2mj8\" (UID: \"f811320b-4864-4925-9528-6b9cb2606c20\") " pod="openshift-controller-manager/controller-manager-865488587c-v2mj8" Dec 10 15:28:59 crc kubenswrapper[4755]: I1210 15:28:59.332197 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/f811320b-4864-4925-9528-6b9cb2606c20-client-ca\") pod \"controller-manager-865488587c-v2mj8\" (UID: \"f811320b-4864-4925-9528-6b9cb2606c20\") " pod="openshift-controller-manager/controller-manager-865488587c-v2mj8" Dec 10 15:28:59 crc kubenswrapper[4755]: I1210 15:28:59.332248 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1905d225-779d-4078-9acd-801cfc176622-config\") pod \"route-controller-manager-578f67dc67-vl7mg\" (UID: \"1905d225-779d-4078-9acd-801cfc176622\") " pod="openshift-route-controller-manager/route-controller-manager-578f67dc67-vl7mg" Dec 10 15:28:59 crc kubenswrapper[4755]: I1210 15:28:59.332296 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f811320b-4864-4925-9528-6b9cb2606c20-serving-cert\") pod \"controller-manager-865488587c-v2mj8\" (UID: \"f811320b-4864-4925-9528-6b9cb2606c20\") " pod="openshift-controller-manager/controller-manager-865488587c-v2mj8" Dec 10 15:28:59 crc kubenswrapper[4755]: I1210 15:28:59.332509 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f811320b-4864-4925-9528-6b9cb2606c20-config\") pod \"controller-manager-865488587c-v2mj8\" (UID: \"f811320b-4864-4925-9528-6b9cb2606c20\") " pod="openshift-controller-manager/controller-manager-865488587c-v2mj8" Dec 10 15:28:59 crc kubenswrapper[4755]: I1210 15:28:59.332586 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrpbt\" (UniqueName: \"kubernetes.io/projected/1905d225-779d-4078-9acd-801cfc176622-kube-api-access-mrpbt\") pod \"route-controller-manager-578f67dc67-vl7mg\" (UID: \"1905d225-779d-4078-9acd-801cfc176622\") " pod="openshift-route-controller-manager/route-controller-manager-578f67dc67-vl7mg" Dec 10 15:28:59 crc kubenswrapper[4755]: I1210 15:28:59.332645 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1905d225-779d-4078-9acd-801cfc176622-client-ca\") pod \"route-controller-manager-578f67dc67-vl7mg\" (UID: \"1905d225-779d-4078-9acd-801cfc176622\") " pod="openshift-route-controller-manager/route-controller-manager-578f67dc67-vl7mg" Dec 10 15:28:59 crc kubenswrapper[4755]: I1210 15:28:59.359732 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-m9ntd" event={"ID":"4e98c44e-5a60-49a0-9186-2367509dda97","Type":"ContainerDied","Data":"98819c52c8c96639765927c04af504bf678b1db167d0df6f1f819b6216e3652f"} Dec 10 15:28:59 crc kubenswrapper[4755]: I1210 15:28:59.359805 4755 scope.go:117] "RemoveContainer" containerID="a5f87a58a8fe5f3e98addc6e46e77d39a101ec826cfb7531df98eb4a0ce6d9a4" Dec 10 15:28:59 crc kubenswrapper[4755]: I1210 15:28:59.359970 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-m9ntd" Dec 10 15:28:59 crc kubenswrapper[4755]: I1210 15:28:59.412088 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-m9ntd"] Dec 10 15:28:59 crc kubenswrapper[4755]: I1210 15:28:59.416560 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-m9ntd"] Dec 10 15:28:59 crc kubenswrapper[4755]: I1210 15:28:59.433968 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrpbt\" (UniqueName: \"kubernetes.io/projected/1905d225-779d-4078-9acd-801cfc176622-kube-api-access-mrpbt\") pod \"route-controller-manager-578f67dc67-vl7mg\" (UID: \"1905d225-779d-4078-9acd-801cfc176622\") " pod="openshift-route-controller-manager/route-controller-manager-578f67dc67-vl7mg" Dec 10 15:28:59 crc kubenswrapper[4755]: I1210 15:28:59.434364 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1905d225-779d-4078-9acd-801cfc176622-client-ca\") pod \"route-controller-manager-578f67dc67-vl7mg\" (UID: \"1905d225-779d-4078-9acd-801cfc176622\") " pod="openshift-route-controller-manager/route-controller-manager-578f67dc67-vl7mg" Dec 10 15:28:59 crc kubenswrapper[4755]: I1210 15:28:59.434545 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f811320b-4864-4925-9528-6b9cb2606c20-proxy-ca-bundles\") pod \"controller-manager-865488587c-v2mj8\" (UID: \"f811320b-4864-4925-9528-6b9cb2606c20\") " pod="openshift-controller-manager/controller-manager-865488587c-v2mj8" Dec 10 15:28:59 crc kubenswrapper[4755]: I1210 15:28:59.435587 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1905d225-779d-4078-9acd-801cfc176622-client-ca\") pod \"route-controller-manager-578f67dc67-vl7mg\" (UID: \"1905d225-779d-4078-9acd-801cfc176622\") " pod="openshift-route-controller-manager/route-controller-manager-578f67dc67-vl7mg" Dec 10 15:28:59 crc kubenswrapper[4755]: I1210 15:28:59.436096 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f811320b-4864-4925-9528-6b9cb2606c20-proxy-ca-bundles\") pod \"controller-manager-865488587c-v2mj8\" (UID: \"f811320b-4864-4925-9528-6b9cb2606c20\") " pod="openshift-controller-manager/controller-manager-865488587c-v2mj8" Dec 10 15:28:59 crc kubenswrapper[4755]: I1210 15:28:59.436166 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1905d225-779d-4078-9acd-801cfc176622-serving-cert\") pod \"route-controller-manager-578f67dc67-vl7mg\" (UID: \"1905d225-779d-4078-9acd-801cfc176622\") " pod="openshift-route-controller-manager/route-controller-manager-578f67dc67-vl7mg" Dec 10 15:28:59 crc kubenswrapper[4755]: I1210 15:28:59.436730 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6z775\" (UniqueName: \"kubernetes.io/projected/f811320b-4864-4925-9528-6b9cb2606c20-kube-api-access-6z775\") pod \"controller-manager-865488587c-v2mj8\" (UID: \"f811320b-4864-4925-9528-6b9cb2606c20\") " pod="openshift-controller-manager/controller-manager-865488587c-v2mj8" Dec 10 15:28:59 crc kubenswrapper[4755]: I1210 15:28:59.437265 
4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f811320b-4864-4925-9528-6b9cb2606c20-client-ca\") pod \"controller-manager-865488587c-v2mj8\" (UID: \"f811320b-4864-4925-9528-6b9cb2606c20\") " pod="openshift-controller-manager/controller-manager-865488587c-v2mj8" Dec 10 15:28:59 crc kubenswrapper[4755]: I1210 15:28:59.438227 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1905d225-779d-4078-9acd-801cfc176622-config\") pod \"route-controller-manager-578f67dc67-vl7mg\" (UID: \"1905d225-779d-4078-9acd-801cfc176622\") " pod="openshift-route-controller-manager/route-controller-manager-578f67dc67-vl7mg" Dec 10 15:28:59 crc kubenswrapper[4755]: I1210 15:28:59.438424 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f811320b-4864-4925-9528-6b9cb2606c20-serving-cert\") pod \"controller-manager-865488587c-v2mj8\" (UID: \"f811320b-4864-4925-9528-6b9cb2606c20\") " pod="openshift-controller-manager/controller-manager-865488587c-v2mj8" Dec 10 15:28:59 crc kubenswrapper[4755]: I1210 15:28:59.438163 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f811320b-4864-4925-9528-6b9cb2606c20-client-ca\") pod \"controller-manager-865488587c-v2mj8\" (UID: \"f811320b-4864-4925-9528-6b9cb2606c20\") " pod="openshift-controller-manager/controller-manager-865488587c-v2mj8" Dec 10 15:28:59 crc kubenswrapper[4755]: I1210 15:28:59.438795 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f811320b-4864-4925-9528-6b9cb2606c20-config\") pod \"controller-manager-865488587c-v2mj8\" (UID: \"f811320b-4864-4925-9528-6b9cb2606c20\") " pod="openshift-controller-manager/controller-manager-865488587c-v2mj8" Dec 10 15:28:59 crc kubenswrapper[4755]: I1210 15:28:59.442704 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f811320b-4864-4925-9528-6b9cb2606c20-serving-cert\") pod \"controller-manager-865488587c-v2mj8\" (UID: \"f811320b-4864-4925-9528-6b9cb2606c20\") " pod="openshift-controller-manager/controller-manager-865488587c-v2mj8" Dec 10 15:28:59 crc kubenswrapper[4755]: I1210 15:28:59.443935 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1905d225-779d-4078-9acd-801cfc176622-config\") pod \"route-controller-manager-578f67dc67-vl7mg\" (UID: \"1905d225-779d-4078-9acd-801cfc176622\") " pod="openshift-route-controller-manager/route-controller-manager-578f67dc67-vl7mg" Dec 10 15:28:59 crc kubenswrapper[4755]: I1210 15:28:59.444644 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f811320b-4864-4925-9528-6b9cb2606c20-config\") pod \"controller-manager-865488587c-v2mj8\" (UID: \"f811320b-4864-4925-9528-6b9cb2606c20\") " pod="openshift-controller-manager/controller-manager-865488587c-v2mj8" Dec 10 15:28:59 crc kubenswrapper[4755]: I1210 15:28:59.445100 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1905d225-779d-4078-9acd-801cfc176622-serving-cert\") pod \"route-controller-manager-578f67dc67-vl7mg\" (UID: 
\"1905d225-779d-4078-9acd-801cfc176622\") " pod="openshift-route-controller-manager/route-controller-manager-578f67dc67-vl7mg" Dec 10 15:28:59 crc kubenswrapper[4755]: I1210 15:28:59.452631 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrpbt\" (UniqueName: \"kubernetes.io/projected/1905d225-779d-4078-9acd-801cfc176622-kube-api-access-mrpbt\") pod \"route-controller-manager-578f67dc67-vl7mg\" (UID: \"1905d225-779d-4078-9acd-801cfc176622\") " pod="openshift-route-controller-manager/route-controller-manager-578f67dc67-vl7mg" Dec 10 15:28:59 crc kubenswrapper[4755]: I1210 15:28:59.463672 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6z775\" (UniqueName: \"kubernetes.io/projected/f811320b-4864-4925-9528-6b9cb2606c20-kube-api-access-6z775\") pod \"controller-manager-865488587c-v2mj8\" (UID: \"f811320b-4864-4925-9528-6b9cb2606c20\") " pod="openshift-controller-manager/controller-manager-865488587c-v2mj8" Dec 10 15:28:59 crc kubenswrapper[4755]: I1210 15:28:59.506142 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-578f67dc67-vl7mg" Dec 10 15:28:59 crc kubenswrapper[4755]: I1210 15:28:59.523428 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-865488587c-v2mj8" Dec 10 15:28:59 crc kubenswrapper[4755]: I1210 15:28:59.646159 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-865488587c-v2mj8"] Dec 10 15:28:59 crc kubenswrapper[4755]: I1210 15:28:59.655111 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-578f67dc67-vl7mg"] Dec 10 15:28:59 crc kubenswrapper[4755]: I1210 15:28:59.742661 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-578f67dc67-vl7mg"] Dec 10 15:28:59 crc kubenswrapper[4755]: I1210 15:28:59.772554 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e98c44e-5a60-49a0-9186-2367509dda97" path="/var/lib/kubelet/pods/4e98c44e-5a60-49a0-9186-2367509dda97/volumes" Dec 10 15:28:59 crc kubenswrapper[4755]: I1210 15:28:59.773343 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c1052c9-fa99-4f24-8fff-923dc489c08d" path="/var/lib/kubelet/pods/8c1052c9-fa99-4f24-8fff-923dc489c08d/volumes" Dec 10 15:28:59 crc kubenswrapper[4755]: I1210 15:28:59.783892 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-865488587c-v2mj8"] Dec 10 15:29:00 crc kubenswrapper[4755]: I1210 15:29:00.372598 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-578f67dc67-vl7mg" event={"ID":"1905d225-779d-4078-9acd-801cfc176622","Type":"ContainerStarted","Data":"91aaa44094a9429cd7ab87439e792b17f753c9697b847c45a1dfc050273d6b3a"} Dec 10 15:29:00 crc kubenswrapper[4755]: I1210 15:29:00.372921 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-578f67dc67-vl7mg" event={"ID":"1905d225-779d-4078-9acd-801cfc176622","Type":"ContainerStarted","Data":"1881c7125f8fa939efd096a2b54ec72b4448b793944ba014549cbdb06ee67dff"} Dec 10 15:29:00 crc kubenswrapper[4755]: I1210 15:29:00.372943 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-route-controller-manager/route-controller-manager-578f67dc67-vl7mg" Dec 10 15:29:00 crc kubenswrapper[4755]: I1210 15:29:00.372711 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-578f67dc67-vl7mg" podUID="1905d225-779d-4078-9acd-801cfc176622" containerName="route-controller-manager" containerID="cri-o://91aaa44094a9429cd7ab87439e792b17f753c9697b847c45a1dfc050273d6b3a" gracePeriod=30 Dec 10 15:29:00 crc kubenswrapper[4755]: I1210 15:29:00.376865 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-865488587c-v2mj8" event={"ID":"f811320b-4864-4925-9528-6b9cb2606c20","Type":"ContainerStarted","Data":"abb5ee0a2493506c5225752d2e7fc7916f800fd9be5009d4a2eef657af9e9bd9"} Dec 10 15:29:00 crc kubenswrapper[4755]: I1210 15:29:00.376898 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-865488587c-v2mj8" event={"ID":"f811320b-4864-4925-9528-6b9cb2606c20","Type":"ContainerStarted","Data":"71c065a94415ed10eb9f2e7f641009a148a019c903532310d6d87f45c2cb6811"} Dec 10 15:29:00 crc kubenswrapper[4755]: I1210 15:29:00.377144 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-865488587c-v2mj8" Dec 10 15:29:00 crc kubenswrapper[4755]: I1210 15:29:00.377100 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-865488587c-v2mj8" podUID="f811320b-4864-4925-9528-6b9cb2606c20" containerName="controller-manager" containerID="cri-o://abb5ee0a2493506c5225752d2e7fc7916f800fd9be5009d4a2eef657af9e9bd9" gracePeriod=30 Dec 10 15:29:00 crc kubenswrapper[4755]: I1210 15:29:00.383335 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-865488587c-v2mj8" Dec 10 15:29:00 crc kubenswrapper[4755]: I1210 15:29:00.392246 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-578f67dc67-vl7mg" podStartSLOduration=2.392222856 podStartE2EDuration="2.392222856s" podCreationTimestamp="2025-12-10 15:28:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:29:00.389725194 +0000 UTC m=+336.990608846" watchObservedRunningTime="2025-12-10 15:29:00.392222856 +0000 UTC m=+336.993106488" Dec 10 15:29:00 crc kubenswrapper[4755]: I1210 15:29:00.411214 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-865488587c-v2mj8" podStartSLOduration=2.411191408 podStartE2EDuration="2.411191408s" podCreationTimestamp="2025-12-10 15:28:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:29:00.410982672 +0000 UTC m=+337.011866314" watchObservedRunningTime="2025-12-10 15:29:00.411191408 +0000 UTC m=+337.012075040" Dec 10 15:29:00 crc kubenswrapper[4755]: I1210 15:29:00.652365 4755 patch_prober.go:28] interesting pod/route-controller-manager-578f67dc67-vl7mg container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.58:8443/healthz\": read tcp 10.217.0.2:60428->10.217.0.58:8443: read: connection reset 
by peer" start-of-body= Dec 10 15:29:00 crc kubenswrapper[4755]: I1210 15:29:00.652484 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-578f67dc67-vl7mg" podUID="1905d225-779d-4078-9acd-801cfc176622" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.58:8443/healthz\": read tcp 10.217.0.2:60428->10.217.0.58:8443: read: connection reset by peer" Dec 10 15:29:00 crc kubenswrapper[4755]: I1210 15:29:00.750631 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-865488587c-v2mj8" Dec 10 15:29:00 crc kubenswrapper[4755]: I1210 15:29:00.808818 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5b85888b7c-627qf"] Dec 10 15:29:00 crc kubenswrapper[4755]: E1210 15:29:00.809120 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f811320b-4864-4925-9528-6b9cb2606c20" containerName="controller-manager" Dec 10 15:29:00 crc kubenswrapper[4755]: I1210 15:29:00.809139 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="f811320b-4864-4925-9528-6b9cb2606c20" containerName="controller-manager" Dec 10 15:29:00 crc kubenswrapper[4755]: I1210 15:29:00.809306 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="f811320b-4864-4925-9528-6b9cb2606c20" containerName="controller-manager" Dec 10 15:29:00 crc kubenswrapper[4755]: I1210 15:29:00.809846 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5b85888b7c-627qf" Dec 10 15:29:00 crc kubenswrapper[4755]: I1210 15:29:00.816706 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5b85888b7c-627qf"] Dec 10 15:29:00 crc kubenswrapper[4755]: I1210 15:29:00.859146 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f811320b-4864-4925-9528-6b9cb2606c20-proxy-ca-bundles\") pod \"f811320b-4864-4925-9528-6b9cb2606c20\" (UID: \"f811320b-4864-4925-9528-6b9cb2606c20\") " Dec 10 15:29:00 crc kubenswrapper[4755]: I1210 15:29:00.859203 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6z775\" (UniqueName: \"kubernetes.io/projected/f811320b-4864-4925-9528-6b9cb2606c20-kube-api-access-6z775\") pod \"f811320b-4864-4925-9528-6b9cb2606c20\" (UID: \"f811320b-4864-4925-9528-6b9cb2606c20\") " Dec 10 15:29:00 crc kubenswrapper[4755]: I1210 15:29:00.859244 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f811320b-4864-4925-9528-6b9cb2606c20-serving-cert\") pod \"f811320b-4864-4925-9528-6b9cb2606c20\" (UID: \"f811320b-4864-4925-9528-6b9cb2606c20\") " Dec 10 15:29:00 crc kubenswrapper[4755]: I1210 15:29:00.859280 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f811320b-4864-4925-9528-6b9cb2606c20-client-ca\") pod \"f811320b-4864-4925-9528-6b9cb2606c20\" (UID: \"f811320b-4864-4925-9528-6b9cb2606c20\") " Dec 10 15:29:00 crc kubenswrapper[4755]: I1210 15:29:00.859340 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f811320b-4864-4925-9528-6b9cb2606c20-config\") pod 
\"f811320b-4864-4925-9528-6b9cb2606c20\" (UID: \"f811320b-4864-4925-9528-6b9cb2606c20\") " Dec 10 15:29:00 crc kubenswrapper[4755]: I1210 15:29:00.859971 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f811320b-4864-4925-9528-6b9cb2606c20-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "f811320b-4864-4925-9528-6b9cb2606c20" (UID: "f811320b-4864-4925-9528-6b9cb2606c20"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:29:00 crc kubenswrapper[4755]: I1210 15:29:00.860063 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f811320b-4864-4925-9528-6b9cb2606c20-client-ca" (OuterVolumeSpecName: "client-ca") pod "f811320b-4864-4925-9528-6b9cb2606c20" (UID: "f811320b-4864-4925-9528-6b9cb2606c20"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:29:00 crc kubenswrapper[4755]: I1210 15:29:00.860167 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f811320b-4864-4925-9528-6b9cb2606c20-config" (OuterVolumeSpecName: "config") pod "f811320b-4864-4925-9528-6b9cb2606c20" (UID: "f811320b-4864-4925-9528-6b9cb2606c20"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:29:00 crc kubenswrapper[4755]: I1210 15:29:00.864903 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f811320b-4864-4925-9528-6b9cb2606c20-kube-api-access-6z775" (OuterVolumeSpecName: "kube-api-access-6z775") pod "f811320b-4864-4925-9528-6b9cb2606c20" (UID: "f811320b-4864-4925-9528-6b9cb2606c20"). InnerVolumeSpecName "kube-api-access-6z775". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:29:00 crc kubenswrapper[4755]: I1210 15:29:00.864909 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f811320b-4864-4925-9528-6b9cb2606c20-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f811320b-4864-4925-9528-6b9cb2606c20" (UID: "f811320b-4864-4925-9528-6b9cb2606c20"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:29:00 crc kubenswrapper[4755]: I1210 15:29:00.897314 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-578f67dc67-vl7mg_1905d225-779d-4078-9acd-801cfc176622/route-controller-manager/0.log" Dec 10 15:29:00 crc kubenswrapper[4755]: I1210 15:29:00.897403 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-578f67dc67-vl7mg" Dec 10 15:29:00 crc kubenswrapper[4755]: I1210 15:29:00.961158 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pf8bj\" (UniqueName: \"kubernetes.io/projected/99862552-2d0b-4c68-8a7d-72c9c8993c96-kube-api-access-pf8bj\") pod \"controller-manager-5b85888b7c-627qf\" (UID: \"99862552-2d0b-4c68-8a7d-72c9c8993c96\") " pod="openshift-controller-manager/controller-manager-5b85888b7c-627qf" Dec 10 15:29:00 crc kubenswrapper[4755]: I1210 15:29:00.961214 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/99862552-2d0b-4c68-8a7d-72c9c8993c96-client-ca\") pod \"controller-manager-5b85888b7c-627qf\" (UID: \"99862552-2d0b-4c68-8a7d-72c9c8993c96\") " pod="openshift-controller-manager/controller-manager-5b85888b7c-627qf" Dec 10 15:29:00 crc kubenswrapper[4755]: I1210 15:29:00.961242 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/99862552-2d0b-4c68-8a7d-72c9c8993c96-serving-cert\") pod \"controller-manager-5b85888b7c-627qf\" (UID: \"99862552-2d0b-4c68-8a7d-72c9c8993c96\") " pod="openshift-controller-manager/controller-manager-5b85888b7c-627qf" Dec 10 15:29:00 crc kubenswrapper[4755]: I1210 15:29:00.961266 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99862552-2d0b-4c68-8a7d-72c9c8993c96-config\") pod \"controller-manager-5b85888b7c-627qf\" (UID: \"99862552-2d0b-4c68-8a7d-72c9c8993c96\") " pod="openshift-controller-manager/controller-manager-5b85888b7c-627qf" Dec 10 15:29:00 crc kubenswrapper[4755]: I1210 15:29:00.961297 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/99862552-2d0b-4c68-8a7d-72c9c8993c96-proxy-ca-bundles\") pod \"controller-manager-5b85888b7c-627qf\" (UID: \"99862552-2d0b-4c68-8a7d-72c9c8993c96\") " pod="openshift-controller-manager/controller-manager-5b85888b7c-627qf" Dec 10 15:29:00 crc kubenswrapper[4755]: I1210 15:29:00.961454 4755 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f811320b-4864-4925-9528-6b9cb2606c20-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 10 15:29:00 crc kubenswrapper[4755]: I1210 15:29:00.961502 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6z775\" (UniqueName: \"kubernetes.io/projected/f811320b-4864-4925-9528-6b9cb2606c20-kube-api-access-6z775\") on node \"crc\" DevicePath \"\"" Dec 10 15:29:00 crc kubenswrapper[4755]: I1210 15:29:00.961515 4755 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f811320b-4864-4925-9528-6b9cb2606c20-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:29:00 crc kubenswrapper[4755]: I1210 15:29:00.961523 4755 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f811320b-4864-4925-9528-6b9cb2606c20-client-ca\") on node \"crc\" DevicePath \"\"" Dec 10 15:29:00 crc kubenswrapper[4755]: I1210 15:29:00.961534 4755 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/f811320b-4864-4925-9528-6b9cb2606c20-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:29:01 crc kubenswrapper[4755]: I1210 15:29:01.062509 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1905d225-779d-4078-9acd-801cfc176622-config\") pod \"1905d225-779d-4078-9acd-801cfc176622\" (UID: \"1905d225-779d-4078-9acd-801cfc176622\") " Dec 10 15:29:01 crc kubenswrapper[4755]: I1210 15:29:01.062692 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1905d225-779d-4078-9acd-801cfc176622-client-ca\") pod \"1905d225-779d-4078-9acd-801cfc176622\" (UID: \"1905d225-779d-4078-9acd-801cfc176622\") " Dec 10 15:29:01 crc kubenswrapper[4755]: I1210 15:29:01.062771 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrpbt\" (UniqueName: \"kubernetes.io/projected/1905d225-779d-4078-9acd-801cfc176622-kube-api-access-mrpbt\") pod \"1905d225-779d-4078-9acd-801cfc176622\" (UID: \"1905d225-779d-4078-9acd-801cfc176622\") " Dec 10 15:29:01 crc kubenswrapper[4755]: I1210 15:29:01.062829 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1905d225-779d-4078-9acd-801cfc176622-serving-cert\") pod \"1905d225-779d-4078-9acd-801cfc176622\" (UID: \"1905d225-779d-4078-9acd-801cfc176622\") " Dec 10 15:29:01 crc kubenswrapper[4755]: I1210 15:29:01.063048 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99862552-2d0b-4c68-8a7d-72c9c8993c96-config\") pod \"controller-manager-5b85888b7c-627qf\" (UID: \"99862552-2d0b-4c68-8a7d-72c9c8993c96\") " pod="openshift-controller-manager/controller-manager-5b85888b7c-627qf" Dec 10 15:29:01 crc kubenswrapper[4755]: I1210 15:29:01.063102 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/99862552-2d0b-4c68-8a7d-72c9c8993c96-proxy-ca-bundles\") pod \"controller-manager-5b85888b7c-627qf\" (UID: \"99862552-2d0b-4c68-8a7d-72c9c8993c96\") " pod="openshift-controller-manager/controller-manager-5b85888b7c-627qf" Dec 10 15:29:01 crc kubenswrapper[4755]: I1210 15:29:01.063248 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pf8bj\" (UniqueName: \"kubernetes.io/projected/99862552-2d0b-4c68-8a7d-72c9c8993c96-kube-api-access-pf8bj\") pod \"controller-manager-5b85888b7c-627qf\" (UID: \"99862552-2d0b-4c68-8a7d-72c9c8993c96\") " pod="openshift-controller-manager/controller-manager-5b85888b7c-627qf" Dec 10 15:29:01 crc kubenswrapper[4755]: I1210 15:29:01.063333 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/99862552-2d0b-4c68-8a7d-72c9c8993c96-client-ca\") pod \"controller-manager-5b85888b7c-627qf\" (UID: \"99862552-2d0b-4c68-8a7d-72c9c8993c96\") " pod="openshift-controller-manager/controller-manager-5b85888b7c-627qf" Dec 10 15:29:01 crc kubenswrapper[4755]: I1210 15:29:01.063388 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/99862552-2d0b-4c68-8a7d-72c9c8993c96-serving-cert\") pod \"controller-manager-5b85888b7c-627qf\" (UID: \"99862552-2d0b-4c68-8a7d-72c9c8993c96\") " 
pod="openshift-controller-manager/controller-manager-5b85888b7c-627qf" Dec 10 15:29:01 crc kubenswrapper[4755]: I1210 15:29:01.063636 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1905d225-779d-4078-9acd-801cfc176622-client-ca" (OuterVolumeSpecName: "client-ca") pod "1905d225-779d-4078-9acd-801cfc176622" (UID: "1905d225-779d-4078-9acd-801cfc176622"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:29:01 crc kubenswrapper[4755]: I1210 15:29:01.063790 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1905d225-779d-4078-9acd-801cfc176622-config" (OuterVolumeSpecName: "config") pod "1905d225-779d-4078-9acd-801cfc176622" (UID: "1905d225-779d-4078-9acd-801cfc176622"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:29:01 crc kubenswrapper[4755]: I1210 15:29:01.065375 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/99862552-2d0b-4c68-8a7d-72c9c8993c96-proxy-ca-bundles\") pod \"controller-manager-5b85888b7c-627qf\" (UID: \"99862552-2d0b-4c68-8a7d-72c9c8993c96\") " pod="openshift-controller-manager/controller-manager-5b85888b7c-627qf" Dec 10 15:29:01 crc kubenswrapper[4755]: I1210 15:29:01.065651 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99862552-2d0b-4c68-8a7d-72c9c8993c96-config\") pod \"controller-manager-5b85888b7c-627qf\" (UID: \"99862552-2d0b-4c68-8a7d-72c9c8993c96\") " pod="openshift-controller-manager/controller-manager-5b85888b7c-627qf" Dec 10 15:29:01 crc kubenswrapper[4755]: I1210 15:29:01.066129 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1905d225-779d-4078-9acd-801cfc176622-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1905d225-779d-4078-9acd-801cfc176622" (UID: "1905d225-779d-4078-9acd-801cfc176622"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:29:01 crc kubenswrapper[4755]: I1210 15:29:01.067332 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/99862552-2d0b-4c68-8a7d-72c9c8993c96-client-ca\") pod \"controller-manager-5b85888b7c-627qf\" (UID: \"99862552-2d0b-4c68-8a7d-72c9c8993c96\") " pod="openshift-controller-manager/controller-manager-5b85888b7c-627qf" Dec 10 15:29:01 crc kubenswrapper[4755]: I1210 15:29:01.068202 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1905d225-779d-4078-9acd-801cfc176622-kube-api-access-mrpbt" (OuterVolumeSpecName: "kube-api-access-mrpbt") pod "1905d225-779d-4078-9acd-801cfc176622" (UID: "1905d225-779d-4078-9acd-801cfc176622"). InnerVolumeSpecName "kube-api-access-mrpbt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:29:01 crc kubenswrapper[4755]: I1210 15:29:01.068514 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/99862552-2d0b-4c68-8a7d-72c9c8993c96-serving-cert\") pod \"controller-manager-5b85888b7c-627qf\" (UID: \"99862552-2d0b-4c68-8a7d-72c9c8993c96\") " pod="openshift-controller-manager/controller-manager-5b85888b7c-627qf" Dec 10 15:29:01 crc kubenswrapper[4755]: I1210 15:29:01.091198 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pf8bj\" (UniqueName: \"kubernetes.io/projected/99862552-2d0b-4c68-8a7d-72c9c8993c96-kube-api-access-pf8bj\") pod \"controller-manager-5b85888b7c-627qf\" (UID: \"99862552-2d0b-4c68-8a7d-72c9c8993c96\") " pod="openshift-controller-manager/controller-manager-5b85888b7c-627qf" Dec 10 15:29:01 crc kubenswrapper[4755]: I1210 15:29:01.139780 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5b85888b7c-627qf" Dec 10 15:29:01 crc kubenswrapper[4755]: I1210 15:29:01.164041 4755 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1905d225-779d-4078-9acd-801cfc176622-client-ca\") on node \"crc\" DevicePath \"\"" Dec 10 15:29:01 crc kubenswrapper[4755]: I1210 15:29:01.164088 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrpbt\" (UniqueName: \"kubernetes.io/projected/1905d225-779d-4078-9acd-801cfc176622-kube-api-access-mrpbt\") on node \"crc\" DevicePath \"\"" Dec 10 15:29:01 crc kubenswrapper[4755]: I1210 15:29:01.164097 4755 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1905d225-779d-4078-9acd-801cfc176622-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:29:01 crc kubenswrapper[4755]: I1210 15:29:01.164108 4755 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1905d225-779d-4078-9acd-801cfc176622-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:29:01 crc kubenswrapper[4755]: I1210 15:29:01.332226 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5b85888b7c-627qf"] Dec 10 15:29:01 crc kubenswrapper[4755]: W1210 15:29:01.338676 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod99862552_2d0b_4c68_8a7d_72c9c8993c96.slice/crio-442a8eed1d1e5e84fc4d25242c2f26c3d1243f540ec3b757938d71065b6eb49c WatchSource:0}: Error finding container 442a8eed1d1e5e84fc4d25242c2f26c3d1243f540ec3b757938d71065b6eb49c: Status 404 returned error can't find the container with id 442a8eed1d1e5e84fc4d25242c2f26c3d1243f540ec3b757938d71065b6eb49c Dec 10 15:29:01 crc kubenswrapper[4755]: I1210 15:29:01.383039 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-578f67dc67-vl7mg_1905d225-779d-4078-9acd-801cfc176622/route-controller-manager/0.log" Dec 10 15:29:01 crc kubenswrapper[4755]: I1210 15:29:01.383092 4755 generic.go:334] "Generic (PLEG): container finished" podID="1905d225-779d-4078-9acd-801cfc176622" containerID="91aaa44094a9429cd7ab87439e792b17f753c9697b847c45a1dfc050273d6b3a" exitCode=255 Dec 10 15:29:01 crc kubenswrapper[4755]: I1210 15:29:01.383178 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-578f67dc67-vl7mg" Dec 10 15:29:01 crc kubenswrapper[4755]: I1210 15:29:01.383176 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-578f67dc67-vl7mg" event={"ID":"1905d225-779d-4078-9acd-801cfc176622","Type":"ContainerDied","Data":"91aaa44094a9429cd7ab87439e792b17f753c9697b847c45a1dfc050273d6b3a"} Dec 10 15:29:01 crc kubenswrapper[4755]: I1210 15:29:01.383246 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-578f67dc67-vl7mg" event={"ID":"1905d225-779d-4078-9acd-801cfc176622","Type":"ContainerDied","Data":"1881c7125f8fa939efd096a2b54ec72b4448b793944ba014549cbdb06ee67dff"} Dec 10 15:29:01 crc kubenswrapper[4755]: I1210 15:29:01.383274 4755 scope.go:117] "RemoveContainer" containerID="91aaa44094a9429cd7ab87439e792b17f753c9697b847c45a1dfc050273d6b3a" Dec 10 15:29:01 crc kubenswrapper[4755]: I1210 15:29:01.385216 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b85888b7c-627qf" event={"ID":"99862552-2d0b-4c68-8a7d-72c9c8993c96","Type":"ContainerStarted","Data":"442a8eed1d1e5e84fc4d25242c2f26c3d1243f540ec3b757938d71065b6eb49c"} Dec 10 15:29:01 crc kubenswrapper[4755]: I1210 15:29:01.388540 4755 generic.go:334] "Generic (PLEG): container finished" podID="f811320b-4864-4925-9528-6b9cb2606c20" containerID="abb5ee0a2493506c5225752d2e7fc7916f800fd9be5009d4a2eef657af9e9bd9" exitCode=0 Dec 10 15:29:01 crc kubenswrapper[4755]: I1210 15:29:01.388582 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-865488587c-v2mj8" event={"ID":"f811320b-4864-4925-9528-6b9cb2606c20","Type":"ContainerDied","Data":"abb5ee0a2493506c5225752d2e7fc7916f800fd9be5009d4a2eef657af9e9bd9"} Dec 10 15:29:01 crc kubenswrapper[4755]: I1210 15:29:01.388605 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-865488587c-v2mj8" event={"ID":"f811320b-4864-4925-9528-6b9cb2606c20","Type":"ContainerDied","Data":"71c065a94415ed10eb9f2e7f641009a148a019c903532310d6d87f45c2cb6811"} Dec 10 15:29:01 crc kubenswrapper[4755]: I1210 15:29:01.388666 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-865488587c-v2mj8" Dec 10 15:29:01 crc kubenswrapper[4755]: I1210 15:29:01.401403 4755 scope.go:117] "RemoveContainer" containerID="91aaa44094a9429cd7ab87439e792b17f753c9697b847c45a1dfc050273d6b3a" Dec 10 15:29:01 crc kubenswrapper[4755]: E1210 15:29:01.401886 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91aaa44094a9429cd7ab87439e792b17f753c9697b847c45a1dfc050273d6b3a\": container with ID starting with 91aaa44094a9429cd7ab87439e792b17f753c9697b847c45a1dfc050273d6b3a not found: ID does not exist" containerID="91aaa44094a9429cd7ab87439e792b17f753c9697b847c45a1dfc050273d6b3a" Dec 10 15:29:01 crc kubenswrapper[4755]: I1210 15:29:01.401932 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91aaa44094a9429cd7ab87439e792b17f753c9697b847c45a1dfc050273d6b3a"} err="failed to get container status \"91aaa44094a9429cd7ab87439e792b17f753c9697b847c45a1dfc050273d6b3a\": rpc error: code = NotFound desc = could not find container \"91aaa44094a9429cd7ab87439e792b17f753c9697b847c45a1dfc050273d6b3a\": container with ID starting with 91aaa44094a9429cd7ab87439e792b17f753c9697b847c45a1dfc050273d6b3a not found: ID does not exist" Dec 10 15:29:01 crc kubenswrapper[4755]: I1210 15:29:01.401965 4755 scope.go:117] "RemoveContainer" containerID="abb5ee0a2493506c5225752d2e7fc7916f800fd9be5009d4a2eef657af9e9bd9" Dec 10 15:29:01 crc kubenswrapper[4755]: I1210 15:29:01.429620 4755 scope.go:117] "RemoveContainer" containerID="abb5ee0a2493506c5225752d2e7fc7916f800fd9be5009d4a2eef657af9e9bd9" Dec 10 15:29:01 crc kubenswrapper[4755]: E1210 15:29:01.431344 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"abb5ee0a2493506c5225752d2e7fc7916f800fd9be5009d4a2eef657af9e9bd9\": container with ID starting with abb5ee0a2493506c5225752d2e7fc7916f800fd9be5009d4a2eef657af9e9bd9 not found: ID does not exist" containerID="abb5ee0a2493506c5225752d2e7fc7916f800fd9be5009d4a2eef657af9e9bd9" Dec 10 15:29:01 crc kubenswrapper[4755]: I1210 15:29:01.431385 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abb5ee0a2493506c5225752d2e7fc7916f800fd9be5009d4a2eef657af9e9bd9"} err="failed to get container status \"abb5ee0a2493506c5225752d2e7fc7916f800fd9be5009d4a2eef657af9e9bd9\": rpc error: code = NotFound desc = could not find container \"abb5ee0a2493506c5225752d2e7fc7916f800fd9be5009d4a2eef657af9e9bd9\": container with ID starting with abb5ee0a2493506c5225752d2e7fc7916f800fd9be5009d4a2eef657af9e9bd9 not found: ID does not exist" Dec 10 15:29:01 crc kubenswrapper[4755]: I1210 15:29:01.436187 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-578f67dc67-vl7mg"] Dec 10 15:29:01 crc kubenswrapper[4755]: I1210 15:29:01.441034 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-578f67dc67-vl7mg"] Dec 10 15:29:01 crc kubenswrapper[4755]: I1210 15:29:01.449587 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-865488587c-v2mj8"] Dec 10 15:29:01 crc kubenswrapper[4755]: I1210 15:29:01.453963 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-865488587c-v2mj8"] Dec 10 15:29:01 crc 
kubenswrapper[4755]: I1210 15:29:01.764996 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1905d225-779d-4078-9acd-801cfc176622" path="/var/lib/kubelet/pods/1905d225-779d-4078-9acd-801cfc176622/volumes" Dec 10 15:29:01 crc kubenswrapper[4755]: I1210 15:29:01.765651 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f811320b-4864-4925-9528-6b9cb2606c20" path="/var/lib/kubelet/pods/f811320b-4864-4925-9528-6b9cb2606c20/volumes" Dec 10 15:29:02 crc kubenswrapper[4755]: I1210 15:29:02.397734 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b85888b7c-627qf" event={"ID":"99862552-2d0b-4c68-8a7d-72c9c8993c96","Type":"ContainerStarted","Data":"f483ce397f352d6694da7abad770d149efbcbbee8645ee22477aaebf74c2a07e"} Dec 10 15:29:02 crc kubenswrapper[4755]: I1210 15:29:02.398001 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5b85888b7c-627qf" Dec 10 15:29:02 crc kubenswrapper[4755]: I1210 15:29:02.402766 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5b85888b7c-627qf" Dec 10 15:29:02 crc kubenswrapper[4755]: I1210 15:29:02.416421 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5b85888b7c-627qf" podStartSLOduration=3.416397146 podStartE2EDuration="3.416397146s" podCreationTimestamp="2025-12-10 15:28:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:29:02.41307156 +0000 UTC m=+339.013955252" watchObservedRunningTime="2025-12-10 15:29:02.416397146 +0000 UTC m=+339.017280778" Dec 10 15:29:03 crc kubenswrapper[4755]: I1210 15:29:03.186702 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7484d9ddcc-4czng"] Dec 10 15:29:03 crc kubenswrapper[4755]: E1210 15:29:03.187604 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1905d225-779d-4078-9acd-801cfc176622" containerName="route-controller-manager" Dec 10 15:29:03 crc kubenswrapper[4755]: I1210 15:29:03.187723 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="1905d225-779d-4078-9acd-801cfc176622" containerName="route-controller-manager" Dec 10 15:29:03 crc kubenswrapper[4755]: I1210 15:29:03.187928 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="1905d225-779d-4078-9acd-801cfc176622" containerName="route-controller-manager" Dec 10 15:29:03 crc kubenswrapper[4755]: I1210 15:29:03.188459 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-4czng" Dec 10 15:29:03 crc kubenswrapper[4755]: I1210 15:29:03.191296 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 10 15:29:03 crc kubenswrapper[4755]: I1210 15:29:03.191328 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 10 15:29:03 crc kubenswrapper[4755]: I1210 15:29:03.191360 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 10 15:29:03 crc kubenswrapper[4755]: I1210 15:29:03.191425 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 10 15:29:03 crc kubenswrapper[4755]: I1210 15:29:03.191782 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 10 15:29:03 crc kubenswrapper[4755]: I1210 15:29:03.192907 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 10 15:29:03 crc kubenswrapper[4755]: I1210 15:29:03.202589 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7484d9ddcc-4czng"] Dec 10 15:29:03 crc kubenswrapper[4755]: I1210 15:29:03.288657 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8ff565ca-4cd7-42a8-b161-e6dc43ab29b0-client-ca\") pod \"route-controller-manager-7484d9ddcc-4czng\" (UID: \"8ff565ca-4cd7-42a8-b161-e6dc43ab29b0\") " pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-4czng" Dec 10 15:29:03 crc kubenswrapper[4755]: I1210 15:29:03.288714 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ff565ca-4cd7-42a8-b161-e6dc43ab29b0-serving-cert\") pod \"route-controller-manager-7484d9ddcc-4czng\" (UID: \"8ff565ca-4cd7-42a8-b161-e6dc43ab29b0\") " pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-4czng" Dec 10 15:29:03 crc kubenswrapper[4755]: I1210 15:29:03.288740 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ff565ca-4cd7-42a8-b161-e6dc43ab29b0-config\") pod \"route-controller-manager-7484d9ddcc-4czng\" (UID: \"8ff565ca-4cd7-42a8-b161-e6dc43ab29b0\") " pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-4czng" Dec 10 15:29:03 crc kubenswrapper[4755]: I1210 15:29:03.288757 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thk6r\" (UniqueName: \"kubernetes.io/projected/8ff565ca-4cd7-42a8-b161-e6dc43ab29b0-kube-api-access-thk6r\") pod \"route-controller-manager-7484d9ddcc-4czng\" (UID: \"8ff565ca-4cd7-42a8-b161-e6dc43ab29b0\") " pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-4czng" Dec 10 15:29:03 crc kubenswrapper[4755]: I1210 15:29:03.390433 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8ff565ca-4cd7-42a8-b161-e6dc43ab29b0-client-ca\") pod 
\"route-controller-manager-7484d9ddcc-4czng\" (UID: \"8ff565ca-4cd7-42a8-b161-e6dc43ab29b0\") " pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-4czng" Dec 10 15:29:03 crc kubenswrapper[4755]: I1210 15:29:03.390931 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ff565ca-4cd7-42a8-b161-e6dc43ab29b0-serving-cert\") pod \"route-controller-manager-7484d9ddcc-4czng\" (UID: \"8ff565ca-4cd7-42a8-b161-e6dc43ab29b0\") " pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-4czng" Dec 10 15:29:03 crc kubenswrapper[4755]: I1210 15:29:03.391162 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ff565ca-4cd7-42a8-b161-e6dc43ab29b0-config\") pod \"route-controller-manager-7484d9ddcc-4czng\" (UID: \"8ff565ca-4cd7-42a8-b161-e6dc43ab29b0\") " pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-4czng" Dec 10 15:29:03 crc kubenswrapper[4755]: I1210 15:29:03.391348 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thk6r\" (UniqueName: \"kubernetes.io/projected/8ff565ca-4cd7-42a8-b161-e6dc43ab29b0-kube-api-access-thk6r\") pod \"route-controller-manager-7484d9ddcc-4czng\" (UID: \"8ff565ca-4cd7-42a8-b161-e6dc43ab29b0\") " pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-4czng" Dec 10 15:29:03 crc kubenswrapper[4755]: I1210 15:29:03.391895 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8ff565ca-4cd7-42a8-b161-e6dc43ab29b0-client-ca\") pod \"route-controller-manager-7484d9ddcc-4czng\" (UID: \"8ff565ca-4cd7-42a8-b161-e6dc43ab29b0\") " pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-4czng" Dec 10 15:29:03 crc kubenswrapper[4755]: I1210 15:29:03.392641 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ff565ca-4cd7-42a8-b161-e6dc43ab29b0-config\") pod \"route-controller-manager-7484d9ddcc-4czng\" (UID: \"8ff565ca-4cd7-42a8-b161-e6dc43ab29b0\") " pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-4czng" Dec 10 15:29:03 crc kubenswrapper[4755]: I1210 15:29:03.397803 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ff565ca-4cd7-42a8-b161-e6dc43ab29b0-serving-cert\") pod \"route-controller-manager-7484d9ddcc-4czng\" (UID: \"8ff565ca-4cd7-42a8-b161-e6dc43ab29b0\") " pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-4czng" Dec 10 15:29:03 crc kubenswrapper[4755]: I1210 15:29:03.412801 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thk6r\" (UniqueName: \"kubernetes.io/projected/8ff565ca-4cd7-42a8-b161-e6dc43ab29b0-kube-api-access-thk6r\") pod \"route-controller-manager-7484d9ddcc-4czng\" (UID: \"8ff565ca-4cd7-42a8-b161-e6dc43ab29b0\") " pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-4czng" Dec 10 15:29:03 crc kubenswrapper[4755]: I1210 15:29:03.515431 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-4czng" Dec 10 15:29:03 crc kubenswrapper[4755]: I1210 15:29:03.716102 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7484d9ddcc-4czng"] Dec 10 15:29:04 crc kubenswrapper[4755]: I1210 15:29:04.413893 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-4czng" event={"ID":"8ff565ca-4cd7-42a8-b161-e6dc43ab29b0","Type":"ContainerStarted","Data":"f8fd1d03ed83ec460adccf2132565e1f243765ef2763fe5e9fca8aeb9ff39a4e"} Dec 10 15:29:04 crc kubenswrapper[4755]: I1210 15:29:04.413934 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-4czng" event={"ID":"8ff565ca-4cd7-42a8-b161-e6dc43ab29b0","Type":"ContainerStarted","Data":"f044f6e8d99e5bde842d4f226275627e39a8a4f207c28e7d59ce5805f2c1c582"} Dec 10 15:29:04 crc kubenswrapper[4755]: I1210 15:29:04.431896 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-4czng" podStartSLOduration=5.431876593 podStartE2EDuration="5.431876593s" podCreationTimestamp="2025-12-10 15:28:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:29:04.428778743 +0000 UTC m=+341.029662375" watchObservedRunningTime="2025-12-10 15:29:04.431876593 +0000 UTC m=+341.032760225" Dec 10 15:29:05 crc kubenswrapper[4755]: I1210 15:29:05.418286 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-4czng" Dec 10 15:29:05 crc kubenswrapper[4755]: I1210 15:29:05.424002 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-4czng" Dec 10 15:29:31 crc kubenswrapper[4755]: I1210 15:29:31.027887 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pwfnv"] Dec 10 15:29:31 crc kubenswrapper[4755]: I1210 15:29:31.029933 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pwfnv" Dec 10 15:29:31 crc kubenswrapper[4755]: I1210 15:29:31.031810 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 10 15:29:31 crc kubenswrapper[4755]: I1210 15:29:31.037785 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pwfnv"] Dec 10 15:29:31 crc kubenswrapper[4755]: I1210 15:29:31.130955 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c87a9c35-e8c9-42d7-9715-a1467d4f134e-utilities\") pod \"redhat-marketplace-pwfnv\" (UID: \"c87a9c35-e8c9-42d7-9715-a1467d4f134e\") " pod="openshift-marketplace/redhat-marketplace-pwfnv" Dec 10 15:29:31 crc kubenswrapper[4755]: I1210 15:29:31.131029 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c87a9c35-e8c9-42d7-9715-a1467d4f134e-catalog-content\") pod \"redhat-marketplace-pwfnv\" (UID: \"c87a9c35-e8c9-42d7-9715-a1467d4f134e\") " pod="openshift-marketplace/redhat-marketplace-pwfnv" Dec 10 15:29:31 crc kubenswrapper[4755]: I1210 15:29:31.131068 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhtf6\" (UniqueName: \"kubernetes.io/projected/c87a9c35-e8c9-42d7-9715-a1467d4f134e-kube-api-access-zhtf6\") pod \"redhat-marketplace-pwfnv\" (UID: \"c87a9c35-e8c9-42d7-9715-a1467d4f134e\") " pod="openshift-marketplace/redhat-marketplace-pwfnv" Dec 10 15:29:31 crc kubenswrapper[4755]: I1210 15:29:31.232696 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhtf6\" (UniqueName: \"kubernetes.io/projected/c87a9c35-e8c9-42d7-9715-a1467d4f134e-kube-api-access-zhtf6\") pod \"redhat-marketplace-pwfnv\" (UID: \"c87a9c35-e8c9-42d7-9715-a1467d4f134e\") " pod="openshift-marketplace/redhat-marketplace-pwfnv" Dec 10 15:29:31 crc kubenswrapper[4755]: I1210 15:29:31.232763 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c87a9c35-e8c9-42d7-9715-a1467d4f134e-utilities\") pod \"redhat-marketplace-pwfnv\" (UID: \"c87a9c35-e8c9-42d7-9715-a1467d4f134e\") " pod="openshift-marketplace/redhat-marketplace-pwfnv" Dec 10 15:29:31 crc kubenswrapper[4755]: I1210 15:29:31.232801 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c87a9c35-e8c9-42d7-9715-a1467d4f134e-catalog-content\") pod \"redhat-marketplace-pwfnv\" (UID: \"c87a9c35-e8c9-42d7-9715-a1467d4f134e\") " pod="openshift-marketplace/redhat-marketplace-pwfnv" Dec 10 15:29:31 crc kubenswrapper[4755]: I1210 15:29:31.233245 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c87a9c35-e8c9-42d7-9715-a1467d4f134e-catalog-content\") pod \"redhat-marketplace-pwfnv\" (UID: \"c87a9c35-e8c9-42d7-9715-a1467d4f134e\") " pod="openshift-marketplace/redhat-marketplace-pwfnv" Dec 10 15:29:31 crc kubenswrapper[4755]: I1210 15:29:31.233338 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c87a9c35-e8c9-42d7-9715-a1467d4f134e-utilities\") pod \"redhat-marketplace-pwfnv\" (UID: 
\"c87a9c35-e8c9-42d7-9715-a1467d4f134e\") " pod="openshift-marketplace/redhat-marketplace-pwfnv" Dec 10 15:29:31 crc kubenswrapper[4755]: I1210 15:29:31.255845 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhtf6\" (UniqueName: \"kubernetes.io/projected/c87a9c35-e8c9-42d7-9715-a1467d4f134e-kube-api-access-zhtf6\") pod \"redhat-marketplace-pwfnv\" (UID: \"c87a9c35-e8c9-42d7-9715-a1467d4f134e\") " pod="openshift-marketplace/redhat-marketplace-pwfnv" Dec 10 15:29:31 crc kubenswrapper[4755]: I1210 15:29:31.346186 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pwfnv" Dec 10 15:29:31 crc kubenswrapper[4755]: I1210 15:29:31.623093 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-s8l8s"] Dec 10 15:29:31 crc kubenswrapper[4755]: I1210 15:29:31.624486 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s8l8s" Dec 10 15:29:31 crc kubenswrapper[4755]: I1210 15:29:31.635308 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s8l8s"] Dec 10 15:29:31 crc kubenswrapper[4755]: I1210 15:29:31.636797 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 10 15:29:31 crc kubenswrapper[4755]: I1210 15:29:31.723502 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pwfnv"] Dec 10 15:29:31 crc kubenswrapper[4755]: I1210 15:29:31.737755 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8qhg\" (UniqueName: \"kubernetes.io/projected/affd9511-69f4-4147-8df8-14faa94916ee-kube-api-access-m8qhg\") pod \"redhat-operators-s8l8s\" (UID: \"affd9511-69f4-4147-8df8-14faa94916ee\") " pod="openshift-marketplace/redhat-operators-s8l8s" Dec 10 15:29:31 crc kubenswrapper[4755]: I1210 15:29:31.737811 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/affd9511-69f4-4147-8df8-14faa94916ee-utilities\") pod \"redhat-operators-s8l8s\" (UID: \"affd9511-69f4-4147-8df8-14faa94916ee\") " pod="openshift-marketplace/redhat-operators-s8l8s" Dec 10 15:29:31 crc kubenswrapper[4755]: I1210 15:29:31.737834 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/affd9511-69f4-4147-8df8-14faa94916ee-catalog-content\") pod \"redhat-operators-s8l8s\" (UID: \"affd9511-69f4-4147-8df8-14faa94916ee\") " pod="openshift-marketplace/redhat-operators-s8l8s" Dec 10 15:29:31 crc kubenswrapper[4755]: I1210 15:29:31.839362 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/affd9511-69f4-4147-8df8-14faa94916ee-utilities\") pod \"redhat-operators-s8l8s\" (UID: \"affd9511-69f4-4147-8df8-14faa94916ee\") " pod="openshift-marketplace/redhat-operators-s8l8s" Dec 10 15:29:31 crc kubenswrapper[4755]: I1210 15:29:31.839421 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/affd9511-69f4-4147-8df8-14faa94916ee-catalog-content\") pod \"redhat-operators-s8l8s\" (UID: \"affd9511-69f4-4147-8df8-14faa94916ee\") " 
pod="openshift-marketplace/redhat-operators-s8l8s" Dec 10 15:29:31 crc kubenswrapper[4755]: I1210 15:29:31.839536 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8qhg\" (UniqueName: \"kubernetes.io/projected/affd9511-69f4-4147-8df8-14faa94916ee-kube-api-access-m8qhg\") pod \"redhat-operators-s8l8s\" (UID: \"affd9511-69f4-4147-8df8-14faa94916ee\") " pod="openshift-marketplace/redhat-operators-s8l8s" Dec 10 15:29:31 crc kubenswrapper[4755]: I1210 15:29:31.840018 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/affd9511-69f4-4147-8df8-14faa94916ee-utilities\") pod \"redhat-operators-s8l8s\" (UID: \"affd9511-69f4-4147-8df8-14faa94916ee\") " pod="openshift-marketplace/redhat-operators-s8l8s" Dec 10 15:29:31 crc kubenswrapper[4755]: I1210 15:29:31.840071 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/affd9511-69f4-4147-8df8-14faa94916ee-catalog-content\") pod \"redhat-operators-s8l8s\" (UID: \"affd9511-69f4-4147-8df8-14faa94916ee\") " pod="openshift-marketplace/redhat-operators-s8l8s" Dec 10 15:29:31 crc kubenswrapper[4755]: I1210 15:29:31.861718 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8qhg\" (UniqueName: \"kubernetes.io/projected/affd9511-69f4-4147-8df8-14faa94916ee-kube-api-access-m8qhg\") pod \"redhat-operators-s8l8s\" (UID: \"affd9511-69f4-4147-8df8-14faa94916ee\") " pod="openshift-marketplace/redhat-operators-s8l8s" Dec 10 15:29:31 crc kubenswrapper[4755]: I1210 15:29:31.951168 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s8l8s" Dec 10 15:29:32 crc kubenswrapper[4755]: I1210 15:29:32.426193 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s8l8s"] Dec 10 15:29:32 crc kubenswrapper[4755]: W1210 15:29:32.430393 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaffd9511_69f4_4147_8df8_14faa94916ee.slice/crio-39f74a75026631fc33d198c7eb874cbb26f94622c22cfe8de8c62b8ce2116bd4 WatchSource:0}: Error finding container 39f74a75026631fc33d198c7eb874cbb26f94622c22cfe8de8c62b8ce2116bd4: Status 404 returned error can't find the container with id 39f74a75026631fc33d198c7eb874cbb26f94622c22cfe8de8c62b8ce2116bd4 Dec 10 15:29:32 crc kubenswrapper[4755]: I1210 15:29:32.554732 4755 generic.go:334] "Generic (PLEG): container finished" podID="c87a9c35-e8c9-42d7-9715-a1467d4f134e" containerID="4ef6de38320efa9bee6efc78cdb12a897fb55344151a2746e4b8475fece032ee" exitCode=0 Dec 10 15:29:32 crc kubenswrapper[4755]: I1210 15:29:32.554797 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pwfnv" event={"ID":"c87a9c35-e8c9-42d7-9715-a1467d4f134e","Type":"ContainerDied","Data":"4ef6de38320efa9bee6efc78cdb12a897fb55344151a2746e4b8475fece032ee"} Dec 10 15:29:32 crc kubenswrapper[4755]: I1210 15:29:32.554821 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pwfnv" event={"ID":"c87a9c35-e8c9-42d7-9715-a1467d4f134e","Type":"ContainerStarted","Data":"1bb1632f702a9034be465821a685dfc0e98349383e5dd7f65d4f2ebb2aaddc29"} Dec 10 15:29:32 crc kubenswrapper[4755]: I1210 15:29:32.556054 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-s8l8s" event={"ID":"affd9511-69f4-4147-8df8-14faa94916ee","Type":"ContainerStarted","Data":"39f74a75026631fc33d198c7eb874cbb26f94622c22cfe8de8c62b8ce2116bd4"} Dec 10 15:29:33 crc kubenswrapper[4755]: I1210 15:29:33.424707 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ltxsm"] Dec 10 15:29:33 crc kubenswrapper[4755]: I1210 15:29:33.426099 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ltxsm" Dec 10 15:29:33 crc kubenswrapper[4755]: I1210 15:29:33.428798 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 10 15:29:33 crc kubenswrapper[4755]: I1210 15:29:33.441199 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ltxsm"] Dec 10 15:29:33 crc kubenswrapper[4755]: I1210 15:29:33.559872 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftdj2\" (UniqueName: \"kubernetes.io/projected/bf0eab0b-0b57-4c95-8edd-0b84d5f8e8f6-kube-api-access-ftdj2\") pod \"certified-operators-ltxsm\" (UID: \"bf0eab0b-0b57-4c95-8edd-0b84d5f8e8f6\") " pod="openshift-marketplace/certified-operators-ltxsm" Dec 10 15:29:33 crc kubenswrapper[4755]: I1210 15:29:33.559923 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf0eab0b-0b57-4c95-8edd-0b84d5f8e8f6-catalog-content\") pod \"certified-operators-ltxsm\" (UID: \"bf0eab0b-0b57-4c95-8edd-0b84d5f8e8f6\") " pod="openshift-marketplace/certified-operators-ltxsm" Dec 10 15:29:33 crc kubenswrapper[4755]: I1210 15:29:33.559954 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf0eab0b-0b57-4c95-8edd-0b84d5f8e8f6-utilities\") pod \"certified-operators-ltxsm\" (UID: \"bf0eab0b-0b57-4c95-8edd-0b84d5f8e8f6\") " pod="openshift-marketplace/certified-operators-ltxsm" Dec 10 15:29:33 crc kubenswrapper[4755]: I1210 15:29:33.561695 4755 generic.go:334] "Generic (PLEG): container finished" podID="affd9511-69f4-4147-8df8-14faa94916ee" containerID="0f20e8193c0089cd24dd315c2e57e1bfbada0bdae0170d74e24a9411770b42fb" exitCode=0 Dec 10 15:29:33 crc kubenswrapper[4755]: I1210 15:29:33.561791 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s8l8s" event={"ID":"affd9511-69f4-4147-8df8-14faa94916ee","Type":"ContainerDied","Data":"0f20e8193c0089cd24dd315c2e57e1bfbada0bdae0170d74e24a9411770b42fb"} Dec 10 15:29:33 crc kubenswrapper[4755]: I1210 15:29:33.564393 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pwfnv" event={"ID":"c87a9c35-e8c9-42d7-9715-a1467d4f134e","Type":"ContainerStarted","Data":"9479cbfc8a929c2b87f9e82b993bb8d214aa1f226d1f2f17241f5b8855265903"} Dec 10 15:29:33 crc kubenswrapper[4755]: I1210 15:29:33.660901 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf0eab0b-0b57-4c95-8edd-0b84d5f8e8f6-catalog-content\") pod \"certified-operators-ltxsm\" (UID: \"bf0eab0b-0b57-4c95-8edd-0b84d5f8e8f6\") " pod="openshift-marketplace/certified-operators-ltxsm" Dec 10 15:29:33 crc kubenswrapper[4755]: I1210 15:29:33.660953 4755 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf0eab0b-0b57-4c95-8edd-0b84d5f8e8f6-utilities\") pod \"certified-operators-ltxsm\" (UID: \"bf0eab0b-0b57-4c95-8edd-0b84d5f8e8f6\") " pod="openshift-marketplace/certified-operators-ltxsm" Dec 10 15:29:33 crc kubenswrapper[4755]: I1210 15:29:33.661071 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftdj2\" (UniqueName: \"kubernetes.io/projected/bf0eab0b-0b57-4c95-8edd-0b84d5f8e8f6-kube-api-access-ftdj2\") pod \"certified-operators-ltxsm\" (UID: \"bf0eab0b-0b57-4c95-8edd-0b84d5f8e8f6\") " pod="openshift-marketplace/certified-operators-ltxsm" Dec 10 15:29:33 crc kubenswrapper[4755]: I1210 15:29:33.661433 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf0eab0b-0b57-4c95-8edd-0b84d5f8e8f6-catalog-content\") pod \"certified-operators-ltxsm\" (UID: \"bf0eab0b-0b57-4c95-8edd-0b84d5f8e8f6\") " pod="openshift-marketplace/certified-operators-ltxsm" Dec 10 15:29:33 crc kubenswrapper[4755]: I1210 15:29:33.661718 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf0eab0b-0b57-4c95-8edd-0b84d5f8e8f6-utilities\") pod \"certified-operators-ltxsm\" (UID: \"bf0eab0b-0b57-4c95-8edd-0b84d5f8e8f6\") " pod="openshift-marketplace/certified-operators-ltxsm" Dec 10 15:29:33 crc kubenswrapper[4755]: I1210 15:29:33.682431 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftdj2\" (UniqueName: \"kubernetes.io/projected/bf0eab0b-0b57-4c95-8edd-0b84d5f8e8f6-kube-api-access-ftdj2\") pod \"certified-operators-ltxsm\" (UID: \"bf0eab0b-0b57-4c95-8edd-0b84d5f8e8f6\") " pod="openshift-marketplace/certified-operators-ltxsm" Dec 10 15:29:33 crc kubenswrapper[4755]: I1210 15:29:33.744549 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ltxsm" Dec 10 15:29:34 crc kubenswrapper[4755]: I1210 15:29:34.025318 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-sx998"] Dec 10 15:29:34 crc kubenswrapper[4755]: I1210 15:29:34.028868 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sx998" Dec 10 15:29:34 crc kubenswrapper[4755]: I1210 15:29:34.034078 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sx998"] Dec 10 15:29:34 crc kubenswrapper[4755]: I1210 15:29:34.036107 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 10 15:29:34 crc kubenswrapper[4755]: I1210 15:29:34.121817 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ltxsm"] Dec 10 15:29:34 crc kubenswrapper[4755]: I1210 15:29:34.167155 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgzdk\" (UniqueName: \"kubernetes.io/projected/4bc52194-661c-4b4c-9642-b6b1706e2fd0-kube-api-access-lgzdk\") pod \"community-operators-sx998\" (UID: \"4bc52194-661c-4b4c-9642-b6b1706e2fd0\") " pod="openshift-marketplace/community-operators-sx998" Dec 10 15:29:34 crc kubenswrapper[4755]: I1210 15:29:34.167207 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bc52194-661c-4b4c-9642-b6b1706e2fd0-catalog-content\") pod \"community-operators-sx998\" (UID: \"4bc52194-661c-4b4c-9642-b6b1706e2fd0\") " pod="openshift-marketplace/community-operators-sx998" Dec 10 15:29:34 crc kubenswrapper[4755]: I1210 15:29:34.167249 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bc52194-661c-4b4c-9642-b6b1706e2fd0-utilities\") pod \"community-operators-sx998\" (UID: \"4bc52194-661c-4b4c-9642-b6b1706e2fd0\") " pod="openshift-marketplace/community-operators-sx998" Dec 10 15:29:34 crc kubenswrapper[4755]: I1210 15:29:34.268167 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bc52194-661c-4b4c-9642-b6b1706e2fd0-utilities\") pod \"community-operators-sx998\" (UID: \"4bc52194-661c-4b4c-9642-b6b1706e2fd0\") " pod="openshift-marketplace/community-operators-sx998" Dec 10 15:29:34 crc kubenswrapper[4755]: I1210 15:29:34.268263 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgzdk\" (UniqueName: \"kubernetes.io/projected/4bc52194-661c-4b4c-9642-b6b1706e2fd0-kube-api-access-lgzdk\") pod \"community-operators-sx998\" (UID: \"4bc52194-661c-4b4c-9642-b6b1706e2fd0\") " pod="openshift-marketplace/community-operators-sx998" Dec 10 15:29:34 crc kubenswrapper[4755]: I1210 15:29:34.268308 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bc52194-661c-4b4c-9642-b6b1706e2fd0-catalog-content\") pod \"community-operators-sx998\" (UID: \"4bc52194-661c-4b4c-9642-b6b1706e2fd0\") " pod="openshift-marketplace/community-operators-sx998" Dec 10 15:29:34 crc kubenswrapper[4755]: I1210 15:29:34.268787 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bc52194-661c-4b4c-9642-b6b1706e2fd0-catalog-content\") pod \"community-operators-sx998\" (UID: \"4bc52194-661c-4b4c-9642-b6b1706e2fd0\") " pod="openshift-marketplace/community-operators-sx998" Dec 10 15:29:34 crc kubenswrapper[4755]: I1210 15:29:34.269065 4755 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bc52194-661c-4b4c-9642-b6b1706e2fd0-utilities\") pod \"community-operators-sx998\" (UID: \"4bc52194-661c-4b4c-9642-b6b1706e2fd0\") " pod="openshift-marketplace/community-operators-sx998" Dec 10 15:29:34 crc kubenswrapper[4755]: I1210 15:29:34.287220 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgzdk\" (UniqueName: \"kubernetes.io/projected/4bc52194-661c-4b4c-9642-b6b1706e2fd0-kube-api-access-lgzdk\") pod \"community-operators-sx998\" (UID: \"4bc52194-661c-4b4c-9642-b6b1706e2fd0\") " pod="openshift-marketplace/community-operators-sx998" Dec 10 15:29:34 crc kubenswrapper[4755]: I1210 15:29:34.346432 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sx998" Dec 10 15:29:34 crc kubenswrapper[4755]: I1210 15:29:34.576993 4755 generic.go:334] "Generic (PLEG): container finished" podID="bf0eab0b-0b57-4c95-8edd-0b84d5f8e8f6" containerID="2cc897a6bcb66a6959aae60c609b69aa863a21842922f58fc1c4abb0ec3bf9c4" exitCode=0 Dec 10 15:29:34 crc kubenswrapper[4755]: I1210 15:29:34.577137 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ltxsm" event={"ID":"bf0eab0b-0b57-4c95-8edd-0b84d5f8e8f6","Type":"ContainerDied","Data":"2cc897a6bcb66a6959aae60c609b69aa863a21842922f58fc1c4abb0ec3bf9c4"} Dec 10 15:29:34 crc kubenswrapper[4755]: I1210 15:29:34.577364 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ltxsm" event={"ID":"bf0eab0b-0b57-4c95-8edd-0b84d5f8e8f6","Type":"ContainerStarted","Data":"fef0315baf89530651a166234f99d4a3499eb1a59aa05e57b414b9240e4da3c0"} Dec 10 15:29:34 crc kubenswrapper[4755]: I1210 15:29:34.582723 4755 generic.go:334] "Generic (PLEG): container finished" podID="c87a9c35-e8c9-42d7-9715-a1467d4f134e" containerID="9479cbfc8a929c2b87f9e82b993bb8d214aa1f226d1f2f17241f5b8855265903" exitCode=0 Dec 10 15:29:34 crc kubenswrapper[4755]: I1210 15:29:34.582815 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pwfnv" event={"ID":"c87a9c35-e8c9-42d7-9715-a1467d4f134e","Type":"ContainerDied","Data":"9479cbfc8a929c2b87f9e82b993bb8d214aa1f226d1f2f17241f5b8855265903"} Dec 10 15:29:34 crc kubenswrapper[4755]: I1210 15:29:34.584585 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s8l8s" event={"ID":"affd9511-69f4-4147-8df8-14faa94916ee","Type":"ContainerStarted","Data":"b167cab8f7b86a5dacd8429cab1db8129e62416fd2cd6e722daf8395f1a1554b"} Dec 10 15:29:34 crc kubenswrapper[4755]: I1210 15:29:34.794242 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sx998"] Dec 10 15:29:34 crc kubenswrapper[4755]: W1210 15:29:34.805632 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4bc52194_661c_4b4c_9642_b6b1706e2fd0.slice/crio-14498d3b212868e3b8424aa560e0025f4a9bfb3363428c7437c0b8a9f0fe92a2 WatchSource:0}: Error finding container 14498d3b212868e3b8424aa560e0025f4a9bfb3363428c7437c0b8a9f0fe92a2: Status 404 returned error can't find the container with id 14498d3b212868e3b8424aa560e0025f4a9bfb3363428c7437c0b8a9f0fe92a2 Dec 10 15:29:35 crc kubenswrapper[4755]: I1210 15:29:35.590028 4755 generic.go:334] "Generic (PLEG): container finished" podID="4bc52194-661c-4b4c-9642-b6b1706e2fd0" 
containerID="c68bff513234b76af0783aba2e41498504b60e6d2399a09b1507a99ebbb88a7a" exitCode=0 Dec 10 15:29:35 crc kubenswrapper[4755]: I1210 15:29:35.590212 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sx998" event={"ID":"4bc52194-661c-4b4c-9642-b6b1706e2fd0","Type":"ContainerDied","Data":"c68bff513234b76af0783aba2e41498504b60e6d2399a09b1507a99ebbb88a7a"} Dec 10 15:29:35 crc kubenswrapper[4755]: I1210 15:29:35.590511 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sx998" event={"ID":"4bc52194-661c-4b4c-9642-b6b1706e2fd0","Type":"ContainerStarted","Data":"14498d3b212868e3b8424aa560e0025f4a9bfb3363428c7437c0b8a9f0fe92a2"} Dec 10 15:29:35 crc kubenswrapper[4755]: I1210 15:29:35.592765 4755 generic.go:334] "Generic (PLEG): container finished" podID="affd9511-69f4-4147-8df8-14faa94916ee" containerID="b167cab8f7b86a5dacd8429cab1db8129e62416fd2cd6e722daf8395f1a1554b" exitCode=0 Dec 10 15:29:35 crc kubenswrapper[4755]: I1210 15:29:35.592814 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s8l8s" event={"ID":"affd9511-69f4-4147-8df8-14faa94916ee","Type":"ContainerDied","Data":"b167cab8f7b86a5dacd8429cab1db8129e62416fd2cd6e722daf8395f1a1554b"} Dec 10 15:29:35 crc kubenswrapper[4755]: I1210 15:29:35.594720 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ltxsm" event={"ID":"bf0eab0b-0b57-4c95-8edd-0b84d5f8e8f6","Type":"ContainerStarted","Data":"cd315a56e0e1c959f15d0226b84750bb5b4be2766e8ab006eadf26dea2fec7cf"} Dec 10 15:29:35 crc kubenswrapper[4755]: I1210 15:29:35.603566 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pwfnv" event={"ID":"c87a9c35-e8c9-42d7-9715-a1467d4f134e","Type":"ContainerStarted","Data":"cede8919e278ab048e790a1b5311484a070721f3320b517ceb0a04762c274e10"} Dec 10 15:29:35 crc kubenswrapper[4755]: I1210 15:29:35.644563 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pwfnv" podStartSLOduration=2.156974132 podStartE2EDuration="4.644546301s" podCreationTimestamp="2025-12-10 15:29:31 +0000 UTC" firstStartedPulling="2025-12-10 15:29:32.556211799 +0000 UTC m=+369.157095431" lastFinishedPulling="2025-12-10 15:29:35.043783968 +0000 UTC m=+371.644667600" observedRunningTime="2025-12-10 15:29:35.639796002 +0000 UTC m=+372.240679644" watchObservedRunningTime="2025-12-10 15:29:35.644546301 +0000 UTC m=+372.245429923" Dec 10 15:29:36 crc kubenswrapper[4755]: I1210 15:29:36.610376 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s8l8s" event={"ID":"affd9511-69f4-4147-8df8-14faa94916ee","Type":"ContainerStarted","Data":"4e1b4bdef38242eab70220e1997e518887075ec424dcb6b7454db42f0d42b99e"} Dec 10 15:29:36 crc kubenswrapper[4755]: I1210 15:29:36.615230 4755 generic.go:334] "Generic (PLEG): container finished" podID="bf0eab0b-0b57-4c95-8edd-0b84d5f8e8f6" containerID="cd315a56e0e1c959f15d0226b84750bb5b4be2766e8ab006eadf26dea2fec7cf" exitCode=0 Dec 10 15:29:36 crc kubenswrapper[4755]: I1210 15:29:36.615328 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ltxsm" event={"ID":"bf0eab0b-0b57-4c95-8edd-0b84d5f8e8f6","Type":"ContainerDied","Data":"cd315a56e0e1c959f15d0226b84750bb5b4be2766e8ab006eadf26dea2fec7cf"} Dec 10 15:29:36 crc kubenswrapper[4755]: I1210 15:29:36.628049 4755 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-s8l8s" podStartSLOduration=3.112239379 podStartE2EDuration="5.628036059s" podCreationTimestamp="2025-12-10 15:29:31 +0000 UTC" firstStartedPulling="2025-12-10 15:29:33.562743266 +0000 UTC m=+370.163626898" lastFinishedPulling="2025-12-10 15:29:36.078539946 +0000 UTC m=+372.679423578" observedRunningTime="2025-12-10 15:29:36.627135023 +0000 UTC m=+373.228018675" watchObservedRunningTime="2025-12-10 15:29:36.628036059 +0000 UTC m=+373.228919701" Dec 10 15:29:37 crc kubenswrapper[4755]: I1210 15:29:37.622886 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ltxsm" event={"ID":"bf0eab0b-0b57-4c95-8edd-0b84d5f8e8f6","Type":"ContainerStarted","Data":"1a73cc52e61de0c71b99c23aae726a30d648fb22087e94c22f9a80b36eaa4b88"} Dec 10 15:29:37 crc kubenswrapper[4755]: I1210 15:29:37.625646 4755 generic.go:334] "Generic (PLEG): container finished" podID="4bc52194-661c-4b4c-9642-b6b1706e2fd0" containerID="a0cc1b9487f997adb4fa3802e365c4c370d320b12a4446e921d8f1811a72c8a5" exitCode=0 Dec 10 15:29:37 crc kubenswrapper[4755]: I1210 15:29:37.626301 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sx998" event={"ID":"4bc52194-661c-4b4c-9642-b6b1706e2fd0","Type":"ContainerDied","Data":"a0cc1b9487f997adb4fa3802e365c4c370d320b12a4446e921d8f1811a72c8a5"} Dec 10 15:29:37 crc kubenswrapper[4755]: I1210 15:29:37.646011 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5b85888b7c-627qf"] Dec 10 15:29:37 crc kubenswrapper[4755]: I1210 15:29:37.646315 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5b85888b7c-627qf" podUID="99862552-2d0b-4c68-8a7d-72c9c8993c96" containerName="controller-manager" containerID="cri-o://f483ce397f352d6694da7abad770d149efbcbbee8645ee22477aaebf74c2a07e" gracePeriod=30 Dec 10 15:29:37 crc kubenswrapper[4755]: I1210 15:29:37.647692 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ltxsm" podStartSLOduration=2.185875299 podStartE2EDuration="4.647678849s" podCreationTimestamp="2025-12-10 15:29:33 +0000 UTC" firstStartedPulling="2025-12-10 15:29:34.57874809 +0000 UTC m=+371.179631722" lastFinishedPulling="2025-12-10 15:29:37.04055164 +0000 UTC m=+373.641435272" observedRunningTime="2025-12-10 15:29:37.643970411 +0000 UTC m=+374.244854043" watchObservedRunningTime="2025-12-10 15:29:37.647678849 +0000 UTC m=+374.248562481" Dec 10 15:29:37 crc kubenswrapper[4755]: I1210 15:29:37.664602 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7484d9ddcc-4czng"] Dec 10 15:29:37 crc kubenswrapper[4755]: I1210 15:29:37.664847 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-4czng" podUID="8ff565ca-4cd7-42a8-b161-e6dc43ab29b0" containerName="route-controller-manager" containerID="cri-o://f8fd1d03ed83ec460adccf2132565e1f243765ef2763fe5e9fca8aeb9ff39a4e" gracePeriod=30 Dec 10 15:29:38 crc kubenswrapper[4755]: I1210 15:29:38.066072 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-4czng" Dec 10 15:29:38 crc kubenswrapper[4755]: I1210 15:29:38.118940 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ff565ca-4cd7-42a8-b161-e6dc43ab29b0-serving-cert\") pod \"8ff565ca-4cd7-42a8-b161-e6dc43ab29b0\" (UID: \"8ff565ca-4cd7-42a8-b161-e6dc43ab29b0\") " Dec 10 15:29:38 crc kubenswrapper[4755]: I1210 15:29:38.119070 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ff565ca-4cd7-42a8-b161-e6dc43ab29b0-config\") pod \"8ff565ca-4cd7-42a8-b161-e6dc43ab29b0\" (UID: \"8ff565ca-4cd7-42a8-b161-e6dc43ab29b0\") " Dec 10 15:29:38 crc kubenswrapper[4755]: I1210 15:29:38.119088 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8ff565ca-4cd7-42a8-b161-e6dc43ab29b0-client-ca\") pod \"8ff565ca-4cd7-42a8-b161-e6dc43ab29b0\" (UID: \"8ff565ca-4cd7-42a8-b161-e6dc43ab29b0\") " Dec 10 15:29:38 crc kubenswrapper[4755]: I1210 15:29:38.119148 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thk6r\" (UniqueName: \"kubernetes.io/projected/8ff565ca-4cd7-42a8-b161-e6dc43ab29b0-kube-api-access-thk6r\") pod \"8ff565ca-4cd7-42a8-b161-e6dc43ab29b0\" (UID: \"8ff565ca-4cd7-42a8-b161-e6dc43ab29b0\") " Dec 10 15:29:38 crc kubenswrapper[4755]: I1210 15:29:38.120164 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ff565ca-4cd7-42a8-b161-e6dc43ab29b0-client-ca" (OuterVolumeSpecName: "client-ca") pod "8ff565ca-4cd7-42a8-b161-e6dc43ab29b0" (UID: "8ff565ca-4cd7-42a8-b161-e6dc43ab29b0"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:29:38 crc kubenswrapper[4755]: I1210 15:29:38.120211 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ff565ca-4cd7-42a8-b161-e6dc43ab29b0-config" (OuterVolumeSpecName: "config") pod "8ff565ca-4cd7-42a8-b161-e6dc43ab29b0" (UID: "8ff565ca-4cd7-42a8-b161-e6dc43ab29b0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:29:38 crc kubenswrapper[4755]: I1210 15:29:38.125172 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ff565ca-4cd7-42a8-b161-e6dc43ab29b0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8ff565ca-4cd7-42a8-b161-e6dc43ab29b0" (UID: "8ff565ca-4cd7-42a8-b161-e6dc43ab29b0"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:29:38 crc kubenswrapper[4755]: I1210 15:29:38.125650 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ff565ca-4cd7-42a8-b161-e6dc43ab29b0-kube-api-access-thk6r" (OuterVolumeSpecName: "kube-api-access-thk6r") pod "8ff565ca-4cd7-42a8-b161-e6dc43ab29b0" (UID: "8ff565ca-4cd7-42a8-b161-e6dc43ab29b0"). InnerVolumeSpecName "kube-api-access-thk6r". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:29:38 crc kubenswrapper[4755]: I1210 15:29:38.192169 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5b85888b7c-627qf" Dec 10 15:29:38 crc kubenswrapper[4755]: I1210 15:29:38.220539 4755 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ff565ca-4cd7-42a8-b161-e6dc43ab29b0-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:29:38 crc kubenswrapper[4755]: I1210 15:29:38.220570 4755 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8ff565ca-4cd7-42a8-b161-e6dc43ab29b0-client-ca\") on node \"crc\" DevicePath \"\"" Dec 10 15:29:38 crc kubenswrapper[4755]: I1210 15:29:38.220582 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-thk6r\" (UniqueName: \"kubernetes.io/projected/8ff565ca-4cd7-42a8-b161-e6dc43ab29b0-kube-api-access-thk6r\") on node \"crc\" DevicePath \"\"" Dec 10 15:29:38 crc kubenswrapper[4755]: I1210 15:29:38.220591 4755 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ff565ca-4cd7-42a8-b161-e6dc43ab29b0-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:29:38 crc kubenswrapper[4755]: I1210 15:29:38.321777 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/99862552-2d0b-4c68-8a7d-72c9c8993c96-client-ca\") pod \"99862552-2d0b-4c68-8a7d-72c9c8993c96\" (UID: \"99862552-2d0b-4c68-8a7d-72c9c8993c96\") " Dec 10 15:29:38 crc kubenswrapper[4755]: I1210 15:29:38.321871 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/99862552-2d0b-4c68-8a7d-72c9c8993c96-serving-cert\") pod \"99862552-2d0b-4c68-8a7d-72c9c8993c96\" (UID: \"99862552-2d0b-4c68-8a7d-72c9c8993c96\") " Dec 10 15:29:38 crc kubenswrapper[4755]: I1210 15:29:38.321941 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pf8bj\" (UniqueName: \"kubernetes.io/projected/99862552-2d0b-4c68-8a7d-72c9c8993c96-kube-api-access-pf8bj\") pod \"99862552-2d0b-4c68-8a7d-72c9c8993c96\" (UID: \"99862552-2d0b-4c68-8a7d-72c9c8993c96\") " Dec 10 15:29:38 crc kubenswrapper[4755]: I1210 15:29:38.321981 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99862552-2d0b-4c68-8a7d-72c9c8993c96-config\") pod \"99862552-2d0b-4c68-8a7d-72c9c8993c96\" (UID: \"99862552-2d0b-4c68-8a7d-72c9c8993c96\") " Dec 10 15:29:38 crc kubenswrapper[4755]: I1210 15:29:38.322004 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/99862552-2d0b-4c68-8a7d-72c9c8993c96-proxy-ca-bundles\") pod \"99862552-2d0b-4c68-8a7d-72c9c8993c96\" (UID: \"99862552-2d0b-4c68-8a7d-72c9c8993c96\") " Dec 10 15:29:38 crc kubenswrapper[4755]: I1210 15:29:38.322571 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99862552-2d0b-4c68-8a7d-72c9c8993c96-client-ca" (OuterVolumeSpecName: "client-ca") pod "99862552-2d0b-4c68-8a7d-72c9c8993c96" (UID: "99862552-2d0b-4c68-8a7d-72c9c8993c96"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:29:38 crc kubenswrapper[4755]: I1210 15:29:38.322615 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99862552-2d0b-4c68-8a7d-72c9c8993c96-config" (OuterVolumeSpecName: "config") pod "99862552-2d0b-4c68-8a7d-72c9c8993c96" (UID: "99862552-2d0b-4c68-8a7d-72c9c8993c96"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:29:38 crc kubenswrapper[4755]: I1210 15:29:38.322692 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99862552-2d0b-4c68-8a7d-72c9c8993c96-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "99862552-2d0b-4c68-8a7d-72c9c8993c96" (UID: "99862552-2d0b-4c68-8a7d-72c9c8993c96"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:29:38 crc kubenswrapper[4755]: I1210 15:29:38.325728 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99862552-2d0b-4c68-8a7d-72c9c8993c96-kube-api-access-pf8bj" (OuterVolumeSpecName: "kube-api-access-pf8bj") pod "99862552-2d0b-4c68-8a7d-72c9c8993c96" (UID: "99862552-2d0b-4c68-8a7d-72c9c8993c96"). InnerVolumeSpecName "kube-api-access-pf8bj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:29:38 crc kubenswrapper[4755]: I1210 15:29:38.326570 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99862552-2d0b-4c68-8a7d-72c9c8993c96-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "99862552-2d0b-4c68-8a7d-72c9c8993c96" (UID: "99862552-2d0b-4c68-8a7d-72c9c8993c96"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:29:38 crc kubenswrapper[4755]: I1210 15:29:38.423160 4755 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99862552-2d0b-4c68-8a7d-72c9c8993c96-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:29:38 crc kubenswrapper[4755]: I1210 15:29:38.423208 4755 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/99862552-2d0b-4c68-8a7d-72c9c8993c96-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 10 15:29:38 crc kubenswrapper[4755]: I1210 15:29:38.423223 4755 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/99862552-2d0b-4c68-8a7d-72c9c8993c96-client-ca\") on node \"crc\" DevicePath \"\"" Dec 10 15:29:38 crc kubenswrapper[4755]: I1210 15:29:38.423233 4755 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/99862552-2d0b-4c68-8a7d-72c9c8993c96-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:29:38 crc kubenswrapper[4755]: I1210 15:29:38.423245 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pf8bj\" (UniqueName: \"kubernetes.io/projected/99862552-2d0b-4c68-8a7d-72c9c8993c96-kube-api-access-pf8bj\") on node \"crc\" DevicePath \"\"" Dec 10 15:29:38 crc kubenswrapper[4755]: I1210 15:29:38.631293 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sx998" event={"ID":"4bc52194-661c-4b4c-9642-b6b1706e2fd0","Type":"ContainerStarted","Data":"410ea9d0c74a0c997db6c60b563a444b32e7f7e2c0d64188b0c464babfa19f9d"} Dec 10 15:29:38 crc kubenswrapper[4755]: I1210 15:29:38.642900 4755 generic.go:334] 
"Generic (PLEG): container finished" podID="8ff565ca-4cd7-42a8-b161-e6dc43ab29b0" containerID="f8fd1d03ed83ec460adccf2132565e1f243765ef2763fe5e9fca8aeb9ff39a4e" exitCode=0 Dec 10 15:29:38 crc kubenswrapper[4755]: I1210 15:29:38.642989 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-4czng" event={"ID":"8ff565ca-4cd7-42a8-b161-e6dc43ab29b0","Type":"ContainerDied","Data":"f8fd1d03ed83ec460adccf2132565e1f243765ef2763fe5e9fca8aeb9ff39a4e"} Dec 10 15:29:38 crc kubenswrapper[4755]: I1210 15:29:38.643016 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-4czng" event={"ID":"8ff565ca-4cd7-42a8-b161-e6dc43ab29b0","Type":"ContainerDied","Data":"f044f6e8d99e5bde842d4f226275627e39a8a4f207c28e7d59ce5805f2c1c582"} Dec 10 15:29:38 crc kubenswrapper[4755]: I1210 15:29:38.643032 4755 scope.go:117] "RemoveContainer" containerID="f8fd1d03ed83ec460adccf2132565e1f243765ef2763fe5e9fca8aeb9ff39a4e" Dec 10 15:29:38 crc kubenswrapper[4755]: I1210 15:29:38.643025 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-4czng" Dec 10 15:29:38 crc kubenswrapper[4755]: I1210 15:29:38.644782 4755 generic.go:334] "Generic (PLEG): container finished" podID="99862552-2d0b-4c68-8a7d-72c9c8993c96" containerID="f483ce397f352d6694da7abad770d149efbcbbee8645ee22477aaebf74c2a07e" exitCode=0 Dec 10 15:29:38 crc kubenswrapper[4755]: I1210 15:29:38.644847 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b85888b7c-627qf" event={"ID":"99862552-2d0b-4c68-8a7d-72c9c8993c96","Type":"ContainerDied","Data":"f483ce397f352d6694da7abad770d149efbcbbee8645ee22477aaebf74c2a07e"} Dec 10 15:29:38 crc kubenswrapper[4755]: I1210 15:29:38.644883 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b85888b7c-627qf" event={"ID":"99862552-2d0b-4c68-8a7d-72c9c8993c96","Type":"ContainerDied","Data":"442a8eed1d1e5e84fc4d25242c2f26c3d1243f540ec3b757938d71065b6eb49c"} Dec 10 15:29:38 crc kubenswrapper[4755]: I1210 15:29:38.644897 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5b85888b7c-627qf" Dec 10 15:29:38 crc kubenswrapper[4755]: I1210 15:29:38.658704 4755 scope.go:117] "RemoveContainer" containerID="f8fd1d03ed83ec460adccf2132565e1f243765ef2763fe5e9fca8aeb9ff39a4e" Dec 10 15:29:38 crc kubenswrapper[4755]: E1210 15:29:38.659087 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8fd1d03ed83ec460adccf2132565e1f243765ef2763fe5e9fca8aeb9ff39a4e\": container with ID starting with f8fd1d03ed83ec460adccf2132565e1f243765ef2763fe5e9fca8aeb9ff39a4e not found: ID does not exist" containerID="f8fd1d03ed83ec460adccf2132565e1f243765ef2763fe5e9fca8aeb9ff39a4e" Dec 10 15:29:38 crc kubenswrapper[4755]: I1210 15:29:38.659117 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8fd1d03ed83ec460adccf2132565e1f243765ef2763fe5e9fca8aeb9ff39a4e"} err="failed to get container status \"f8fd1d03ed83ec460adccf2132565e1f243765ef2763fe5e9fca8aeb9ff39a4e\": rpc error: code = NotFound desc = could not find container \"f8fd1d03ed83ec460adccf2132565e1f243765ef2763fe5e9fca8aeb9ff39a4e\": container with ID starting with f8fd1d03ed83ec460adccf2132565e1f243765ef2763fe5e9fca8aeb9ff39a4e not found: ID does not exist" Dec 10 15:29:38 crc kubenswrapper[4755]: I1210 15:29:38.659138 4755 scope.go:117] "RemoveContainer" containerID="f483ce397f352d6694da7abad770d149efbcbbee8645ee22477aaebf74c2a07e" Dec 10 15:29:38 crc kubenswrapper[4755]: I1210 15:29:38.673153 4755 scope.go:117] "RemoveContainer" containerID="f483ce397f352d6694da7abad770d149efbcbbee8645ee22477aaebf74c2a07e" Dec 10 15:29:38 crc kubenswrapper[4755]: E1210 15:29:38.673785 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f483ce397f352d6694da7abad770d149efbcbbee8645ee22477aaebf74c2a07e\": container with ID starting with f483ce397f352d6694da7abad770d149efbcbbee8645ee22477aaebf74c2a07e not found: ID does not exist" containerID="f483ce397f352d6694da7abad770d149efbcbbee8645ee22477aaebf74c2a07e" Dec 10 15:29:38 crc kubenswrapper[4755]: I1210 15:29:38.673836 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f483ce397f352d6694da7abad770d149efbcbbee8645ee22477aaebf74c2a07e"} err="failed to get container status \"f483ce397f352d6694da7abad770d149efbcbbee8645ee22477aaebf74c2a07e\": rpc error: code = NotFound desc = could not find container \"f483ce397f352d6694da7abad770d149efbcbbee8645ee22477aaebf74c2a07e\": container with ID starting with f483ce397f352d6694da7abad770d149efbcbbee8645ee22477aaebf74c2a07e not found: ID does not exist" Dec 10 15:29:38 crc kubenswrapper[4755]: I1210 15:29:38.677433 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-sx998" podStartSLOduration=2.164200698 podStartE2EDuration="4.677411202s" podCreationTimestamp="2025-12-10 15:29:34 +0000 UTC" firstStartedPulling="2025-12-10 15:29:35.591285573 +0000 UTC m=+372.192169205" lastFinishedPulling="2025-12-10 15:29:38.104496047 +0000 UTC m=+374.705379709" observedRunningTime="2025-12-10 15:29:38.651825148 +0000 UTC m=+375.252708810" watchObservedRunningTime="2025-12-10 15:29:38.677411202 +0000 UTC m=+375.278294844" Dec 10 15:29:38 crc kubenswrapper[4755]: I1210 15:29:38.684576 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-7484d9ddcc-4czng"] Dec 10 15:29:38 crc kubenswrapper[4755]: I1210 15:29:38.689015 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7484d9ddcc-4czng"] Dec 10 15:29:38 crc kubenswrapper[4755]: I1210 15:29:38.692594 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5b85888b7c-627qf"] Dec 10 15:29:38 crc kubenswrapper[4755]: I1210 15:29:38.695335 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5b85888b7c-627qf"] Dec 10 15:29:39 crc kubenswrapper[4755]: I1210 15:29:39.208446 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-578f67dc67-mxvsj"] Dec 10 15:29:39 crc kubenswrapper[4755]: E1210 15:29:39.209059 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ff565ca-4cd7-42a8-b161-e6dc43ab29b0" containerName="route-controller-manager" Dec 10 15:29:39 crc kubenswrapper[4755]: I1210 15:29:39.209075 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ff565ca-4cd7-42a8-b161-e6dc43ab29b0" containerName="route-controller-manager" Dec 10 15:29:39 crc kubenswrapper[4755]: E1210 15:29:39.209099 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99862552-2d0b-4c68-8a7d-72c9c8993c96" containerName="controller-manager" Dec 10 15:29:39 crc kubenswrapper[4755]: I1210 15:29:39.209106 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="99862552-2d0b-4c68-8a7d-72c9c8993c96" containerName="controller-manager" Dec 10 15:29:39 crc kubenswrapper[4755]: I1210 15:29:39.209205 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ff565ca-4cd7-42a8-b161-e6dc43ab29b0" containerName="route-controller-manager" Dec 10 15:29:39 crc kubenswrapper[4755]: I1210 15:29:39.209216 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="99862552-2d0b-4c68-8a7d-72c9c8993c96" containerName="controller-manager" Dec 10 15:29:39 crc kubenswrapper[4755]: I1210 15:29:39.209625 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-578f67dc67-mxvsj" Dec 10 15:29:39 crc kubenswrapper[4755]: I1210 15:29:39.211516 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 10 15:29:39 crc kubenswrapper[4755]: I1210 15:29:39.211818 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 10 15:29:39 crc kubenswrapper[4755]: I1210 15:29:39.212138 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 10 15:29:39 crc kubenswrapper[4755]: I1210 15:29:39.212390 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 10 15:29:39 crc kubenswrapper[4755]: I1210 15:29:39.212722 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-865488587c-ls86h"] Dec 10 15:29:39 crc kubenswrapper[4755]: I1210 15:29:39.212995 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 10 15:29:39 crc kubenswrapper[4755]: I1210 15:29:39.213515 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-865488587c-ls86h" Dec 10 15:29:39 crc kubenswrapper[4755]: I1210 15:29:39.215984 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 10 15:29:39 crc kubenswrapper[4755]: I1210 15:29:39.216266 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 10 15:29:39 crc kubenswrapper[4755]: I1210 15:29:39.216523 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 10 15:29:39 crc kubenswrapper[4755]: I1210 15:29:39.217016 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 10 15:29:39 crc kubenswrapper[4755]: I1210 15:29:39.217029 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 10 15:29:39 crc kubenswrapper[4755]: I1210 15:29:39.217161 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 10 15:29:39 crc kubenswrapper[4755]: I1210 15:29:39.217286 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 10 15:29:39 crc kubenswrapper[4755]: I1210 15:29:39.224193 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-578f67dc67-mxvsj"] Dec 10 15:29:39 crc kubenswrapper[4755]: I1210 15:29:39.236612 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 10 15:29:39 crc kubenswrapper[4755]: I1210 15:29:39.240687 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-865488587c-ls86h"] Dec 10 15:29:39 crc kubenswrapper[4755]: I1210 15:29:39.334445 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/74ae1538-009a-4594-8b75-e3400c38ce6f-config\") pod \"controller-manager-865488587c-ls86h\" (UID: \"74ae1538-009a-4594-8b75-e3400c38ce6f\") " pod="openshift-controller-manager/controller-manager-865488587c-ls86h" Dec 10 15:29:39 crc kubenswrapper[4755]: I1210 15:29:39.334508 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/74ae1538-009a-4594-8b75-e3400c38ce6f-client-ca\") pod \"controller-manager-865488587c-ls86h\" (UID: \"74ae1538-009a-4594-8b75-e3400c38ce6f\") " pod="openshift-controller-manager/controller-manager-865488587c-ls86h" Dec 10 15:29:39 crc kubenswrapper[4755]: I1210 15:29:39.334527 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bgkp\" (UniqueName: \"kubernetes.io/projected/74ae1538-009a-4594-8b75-e3400c38ce6f-kube-api-access-7bgkp\") pod \"controller-manager-865488587c-ls86h\" (UID: \"74ae1538-009a-4594-8b75-e3400c38ce6f\") " pod="openshift-controller-manager/controller-manager-865488587c-ls86h" Dec 10 15:29:39 crc kubenswrapper[4755]: I1210 15:29:39.334549 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0feb08f-5d53-4c06-bd0e-be43f4af01e7-config\") pod \"route-controller-manager-578f67dc67-mxvsj\" (UID: \"e0feb08f-5d53-4c06-bd0e-be43f4af01e7\") " pod="openshift-route-controller-manager/route-controller-manager-578f67dc67-mxvsj" Dec 10 15:29:39 crc kubenswrapper[4755]: I1210 15:29:39.334571 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e0feb08f-5d53-4c06-bd0e-be43f4af01e7-client-ca\") pod \"route-controller-manager-578f67dc67-mxvsj\" (UID: \"e0feb08f-5d53-4c06-bd0e-be43f4af01e7\") " pod="openshift-route-controller-manager/route-controller-manager-578f67dc67-mxvsj" Dec 10 15:29:39 crc kubenswrapper[4755]: I1210 15:29:39.334680 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k65k2\" (UniqueName: \"kubernetes.io/projected/e0feb08f-5d53-4c06-bd0e-be43f4af01e7-kube-api-access-k65k2\") pod \"route-controller-manager-578f67dc67-mxvsj\" (UID: \"e0feb08f-5d53-4c06-bd0e-be43f4af01e7\") " pod="openshift-route-controller-manager/route-controller-manager-578f67dc67-mxvsj" Dec 10 15:29:39 crc kubenswrapper[4755]: I1210 15:29:39.334746 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0feb08f-5d53-4c06-bd0e-be43f4af01e7-serving-cert\") pod \"route-controller-manager-578f67dc67-mxvsj\" (UID: \"e0feb08f-5d53-4c06-bd0e-be43f4af01e7\") " pod="openshift-route-controller-manager/route-controller-manager-578f67dc67-mxvsj" Dec 10 15:29:39 crc kubenswrapper[4755]: I1210 15:29:39.334779 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/74ae1538-009a-4594-8b75-e3400c38ce6f-proxy-ca-bundles\") pod \"controller-manager-865488587c-ls86h\" (UID: \"74ae1538-009a-4594-8b75-e3400c38ce6f\") " pod="openshift-controller-manager/controller-manager-865488587c-ls86h" Dec 10 15:29:39 crc kubenswrapper[4755]: I1210 15:29:39.334836 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/74ae1538-009a-4594-8b75-e3400c38ce6f-serving-cert\") pod \"controller-manager-865488587c-ls86h\" (UID: \"74ae1538-009a-4594-8b75-e3400c38ce6f\") " pod="openshift-controller-manager/controller-manager-865488587c-ls86h" Dec 10 15:29:39 crc kubenswrapper[4755]: I1210 15:29:39.436423 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/74ae1538-009a-4594-8b75-e3400c38ce6f-client-ca\") pod \"controller-manager-865488587c-ls86h\" (UID: \"74ae1538-009a-4594-8b75-e3400c38ce6f\") " pod="openshift-controller-manager/controller-manager-865488587c-ls86h" Dec 10 15:29:39 crc kubenswrapper[4755]: I1210 15:29:39.436508 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bgkp\" (UniqueName: \"kubernetes.io/projected/74ae1538-009a-4594-8b75-e3400c38ce6f-kube-api-access-7bgkp\") pod \"controller-manager-865488587c-ls86h\" (UID: \"74ae1538-009a-4594-8b75-e3400c38ce6f\") " pod="openshift-controller-manager/controller-manager-865488587c-ls86h" Dec 10 15:29:39 crc kubenswrapper[4755]: I1210 15:29:39.436643 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0feb08f-5d53-4c06-bd0e-be43f4af01e7-config\") pod \"route-controller-manager-578f67dc67-mxvsj\" (UID: \"e0feb08f-5d53-4c06-bd0e-be43f4af01e7\") " pod="openshift-route-controller-manager/route-controller-manager-578f67dc67-mxvsj" Dec 10 15:29:39 crc kubenswrapper[4755]: I1210 15:29:39.436689 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e0feb08f-5d53-4c06-bd0e-be43f4af01e7-client-ca\") pod \"route-controller-manager-578f67dc67-mxvsj\" (UID: \"e0feb08f-5d53-4c06-bd0e-be43f4af01e7\") " pod="openshift-route-controller-manager/route-controller-manager-578f67dc67-mxvsj" Dec 10 15:29:39 crc kubenswrapper[4755]: I1210 15:29:39.436714 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k65k2\" (UniqueName: \"kubernetes.io/projected/e0feb08f-5d53-4c06-bd0e-be43f4af01e7-kube-api-access-k65k2\") pod \"route-controller-manager-578f67dc67-mxvsj\" (UID: \"e0feb08f-5d53-4c06-bd0e-be43f4af01e7\") " pod="openshift-route-controller-manager/route-controller-manager-578f67dc67-mxvsj" Dec 10 15:29:39 crc kubenswrapper[4755]: I1210 15:29:39.436737 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0feb08f-5d53-4c06-bd0e-be43f4af01e7-serving-cert\") pod \"route-controller-manager-578f67dc67-mxvsj\" (UID: \"e0feb08f-5d53-4c06-bd0e-be43f4af01e7\") " pod="openshift-route-controller-manager/route-controller-manager-578f67dc67-mxvsj" Dec 10 15:29:39 crc kubenswrapper[4755]: I1210 15:29:39.436759 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/74ae1538-009a-4594-8b75-e3400c38ce6f-proxy-ca-bundles\") pod \"controller-manager-865488587c-ls86h\" (UID: \"74ae1538-009a-4594-8b75-e3400c38ce6f\") " pod="openshift-controller-manager/controller-manager-865488587c-ls86h" Dec 10 15:29:39 crc kubenswrapper[4755]: I1210 15:29:39.436785 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/74ae1538-009a-4594-8b75-e3400c38ce6f-serving-cert\") pod 
\"controller-manager-865488587c-ls86h\" (UID: \"74ae1538-009a-4594-8b75-e3400c38ce6f\") " pod="openshift-controller-manager/controller-manager-865488587c-ls86h" Dec 10 15:29:39 crc kubenswrapper[4755]: I1210 15:29:39.436875 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74ae1538-009a-4594-8b75-e3400c38ce6f-config\") pod \"controller-manager-865488587c-ls86h\" (UID: \"74ae1538-009a-4594-8b75-e3400c38ce6f\") " pod="openshift-controller-manager/controller-manager-865488587c-ls86h" Dec 10 15:29:39 crc kubenswrapper[4755]: I1210 15:29:39.437571 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/74ae1538-009a-4594-8b75-e3400c38ce6f-client-ca\") pod \"controller-manager-865488587c-ls86h\" (UID: \"74ae1538-009a-4594-8b75-e3400c38ce6f\") " pod="openshift-controller-manager/controller-manager-865488587c-ls86h" Dec 10 15:29:39 crc kubenswrapper[4755]: I1210 15:29:39.438410 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74ae1538-009a-4594-8b75-e3400c38ce6f-config\") pod \"controller-manager-865488587c-ls86h\" (UID: \"74ae1538-009a-4594-8b75-e3400c38ce6f\") " pod="openshift-controller-manager/controller-manager-865488587c-ls86h" Dec 10 15:29:39 crc kubenswrapper[4755]: I1210 15:29:39.439094 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0feb08f-5d53-4c06-bd0e-be43f4af01e7-config\") pod \"route-controller-manager-578f67dc67-mxvsj\" (UID: \"e0feb08f-5d53-4c06-bd0e-be43f4af01e7\") " pod="openshift-route-controller-manager/route-controller-manager-578f67dc67-mxvsj" Dec 10 15:29:39 crc kubenswrapper[4755]: I1210 15:29:39.439793 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e0feb08f-5d53-4c06-bd0e-be43f4af01e7-client-ca\") pod \"route-controller-manager-578f67dc67-mxvsj\" (UID: \"e0feb08f-5d53-4c06-bd0e-be43f4af01e7\") " pod="openshift-route-controller-manager/route-controller-manager-578f67dc67-mxvsj" Dec 10 15:29:39 crc kubenswrapper[4755]: I1210 15:29:39.440179 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/74ae1538-009a-4594-8b75-e3400c38ce6f-proxy-ca-bundles\") pod \"controller-manager-865488587c-ls86h\" (UID: \"74ae1538-009a-4594-8b75-e3400c38ce6f\") " pod="openshift-controller-manager/controller-manager-865488587c-ls86h" Dec 10 15:29:39 crc kubenswrapper[4755]: I1210 15:29:39.442120 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0feb08f-5d53-4c06-bd0e-be43f4af01e7-serving-cert\") pod \"route-controller-manager-578f67dc67-mxvsj\" (UID: \"e0feb08f-5d53-4c06-bd0e-be43f4af01e7\") " pod="openshift-route-controller-manager/route-controller-manager-578f67dc67-mxvsj" Dec 10 15:29:39 crc kubenswrapper[4755]: I1210 15:29:39.454261 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/74ae1538-009a-4594-8b75-e3400c38ce6f-serving-cert\") pod \"controller-manager-865488587c-ls86h\" (UID: \"74ae1538-009a-4594-8b75-e3400c38ce6f\") " pod="openshift-controller-manager/controller-manager-865488587c-ls86h" Dec 10 15:29:39 crc kubenswrapper[4755]: I1210 15:29:39.456501 4755 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-k65k2\" (UniqueName: \"kubernetes.io/projected/e0feb08f-5d53-4c06-bd0e-be43f4af01e7-kube-api-access-k65k2\") pod \"route-controller-manager-578f67dc67-mxvsj\" (UID: \"e0feb08f-5d53-4c06-bd0e-be43f4af01e7\") " pod="openshift-route-controller-manager/route-controller-manager-578f67dc67-mxvsj" Dec 10 15:29:39 crc kubenswrapper[4755]: I1210 15:29:39.457798 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bgkp\" (UniqueName: \"kubernetes.io/projected/74ae1538-009a-4594-8b75-e3400c38ce6f-kube-api-access-7bgkp\") pod \"controller-manager-865488587c-ls86h\" (UID: \"74ae1538-009a-4594-8b75-e3400c38ce6f\") " pod="openshift-controller-manager/controller-manager-865488587c-ls86h" Dec 10 15:29:39 crc kubenswrapper[4755]: I1210 15:29:39.536938 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-578f67dc67-mxvsj" Dec 10 15:29:39 crc kubenswrapper[4755]: I1210 15:29:39.560436 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-865488587c-ls86h" Dec 10 15:29:39 crc kubenswrapper[4755]: I1210 15:29:39.763003 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ff565ca-4cd7-42a8-b161-e6dc43ab29b0" path="/var/lib/kubelet/pods/8ff565ca-4cd7-42a8-b161-e6dc43ab29b0/volumes" Dec 10 15:29:39 crc kubenswrapper[4755]: I1210 15:29:39.763759 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99862552-2d0b-4c68-8a7d-72c9c8993c96" path="/var/lib/kubelet/pods/99862552-2d0b-4c68-8a7d-72c9c8993c96/volumes" Dec 10 15:29:39 crc kubenswrapper[4755]: I1210 15:29:39.971876 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-578f67dc67-mxvsj"] Dec 10 15:29:40 crc kubenswrapper[4755]: I1210 15:29:40.057691 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-865488587c-ls86h"] Dec 10 15:29:40 crc kubenswrapper[4755]: W1210 15:29:40.069973 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod74ae1538_009a_4594_8b75_e3400c38ce6f.slice/crio-539dded956593abbf00de39314e4827e5f7901740d0f54fb99c686b4575b2c1b WatchSource:0}: Error finding container 539dded956593abbf00de39314e4827e5f7901740d0f54fb99c686b4575b2c1b: Status 404 returned error can't find the container with id 539dded956593abbf00de39314e4827e5f7901740d0f54fb99c686b4575b2c1b Dec 10 15:29:40 crc kubenswrapper[4755]: I1210 15:29:40.359025 4755 patch_prober.go:28] interesting pod/machine-config-daemon-ggt8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:29:40 crc kubenswrapper[4755]: I1210 15:29:40.359088 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:29:40 crc kubenswrapper[4755]: I1210 15:29:40.664294 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-578f67dc67-mxvsj" event={"ID":"e0feb08f-5d53-4c06-bd0e-be43f4af01e7","Type":"ContainerStarted","Data":"2b243c2433aa6a371ecd6d17cbf8b588bcf57d2b766046f8bdeeec182fb866b4"} Dec 10 15:29:40 crc kubenswrapper[4755]: I1210 15:29:40.665375 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-865488587c-ls86h" event={"ID":"74ae1538-009a-4594-8b75-e3400c38ce6f","Type":"ContainerStarted","Data":"539dded956593abbf00de39314e4827e5f7901740d0f54fb99c686b4575b2c1b"} Dec 10 15:29:41 crc kubenswrapper[4755]: I1210 15:29:41.346779 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pwfnv" Dec 10 15:29:41 crc kubenswrapper[4755]: I1210 15:29:41.346828 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pwfnv" Dec 10 15:29:41 crc kubenswrapper[4755]: I1210 15:29:41.394254 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pwfnv" Dec 10 15:29:41 crc kubenswrapper[4755]: I1210 15:29:41.690275 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-578f67dc67-mxvsj" event={"ID":"e0feb08f-5d53-4c06-bd0e-be43f4af01e7","Type":"ContainerStarted","Data":"96b358499846f22b3ac4687d504451daa6465cb09274872063e159943f4a8531"} Dec 10 15:29:41 crc kubenswrapper[4755]: I1210 15:29:41.739946 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pwfnv" Dec 10 15:29:41 crc kubenswrapper[4755]: I1210 15:29:41.952237 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-s8l8s" Dec 10 15:29:41 crc kubenswrapper[4755]: I1210 15:29:41.953441 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-s8l8s" Dec 10 15:29:42 crc kubenswrapper[4755]: I1210 15:29:42.000246 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-s8l8s" Dec 10 15:29:42 crc kubenswrapper[4755]: I1210 15:29:42.046141 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-6mm2l"] Dec 10 15:29:42 crc kubenswrapper[4755]: I1210 15:29:42.046834 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-6mm2l" Dec 10 15:29:42 crc kubenswrapper[4755]: I1210 15:29:42.066412 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-6mm2l"] Dec 10 15:29:42 crc kubenswrapper[4755]: I1210 15:29:42.171216 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/437e8ac9-b322-4428-8faf-70ed78a0f981-bound-sa-token\") pod \"image-registry-66df7c8f76-6mm2l\" (UID: \"437e8ac9-b322-4428-8faf-70ed78a0f981\") " pod="openshift-image-registry/image-registry-66df7c8f76-6mm2l" Dec 10 15:29:42 crc kubenswrapper[4755]: I1210 15:29:42.171278 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/437e8ac9-b322-4428-8faf-70ed78a0f981-trusted-ca\") pod \"image-registry-66df7c8f76-6mm2l\" (UID: \"437e8ac9-b322-4428-8faf-70ed78a0f981\") " pod="openshift-image-registry/image-registry-66df7c8f76-6mm2l" Dec 10 15:29:42 crc kubenswrapper[4755]: I1210 15:29:42.171308 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/437e8ac9-b322-4428-8faf-70ed78a0f981-registry-tls\") pod \"image-registry-66df7c8f76-6mm2l\" (UID: \"437e8ac9-b322-4428-8faf-70ed78a0f981\") " pod="openshift-image-registry/image-registry-66df7c8f76-6mm2l" Dec 10 15:29:42 crc kubenswrapper[4755]: I1210 15:29:42.171340 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/437e8ac9-b322-4428-8faf-70ed78a0f981-installation-pull-secrets\") pod \"image-registry-66df7c8f76-6mm2l\" (UID: \"437e8ac9-b322-4428-8faf-70ed78a0f981\") " pod="openshift-image-registry/image-registry-66df7c8f76-6mm2l" Dec 10 15:29:42 crc kubenswrapper[4755]: I1210 15:29:42.171411 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/437e8ac9-b322-4428-8faf-70ed78a0f981-registry-certificates\") pod \"image-registry-66df7c8f76-6mm2l\" (UID: \"437e8ac9-b322-4428-8faf-70ed78a0f981\") " pod="openshift-image-registry/image-registry-66df7c8f76-6mm2l" Dec 10 15:29:42 crc kubenswrapper[4755]: I1210 15:29:42.171434 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/437e8ac9-b322-4428-8faf-70ed78a0f981-ca-trust-extracted\") pod \"image-registry-66df7c8f76-6mm2l\" (UID: \"437e8ac9-b322-4428-8faf-70ed78a0f981\") " pod="openshift-image-registry/image-registry-66df7c8f76-6mm2l" Dec 10 15:29:42 crc kubenswrapper[4755]: I1210 15:29:42.171462 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-6mm2l\" (UID: \"437e8ac9-b322-4428-8faf-70ed78a0f981\") " pod="openshift-image-registry/image-registry-66df7c8f76-6mm2l" Dec 10 15:29:42 crc kubenswrapper[4755]: I1210 15:29:42.171503 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5ckt\" (UniqueName: 
\"kubernetes.io/projected/437e8ac9-b322-4428-8faf-70ed78a0f981-kube-api-access-x5ckt\") pod \"image-registry-66df7c8f76-6mm2l\" (UID: \"437e8ac9-b322-4428-8faf-70ed78a0f981\") " pod="openshift-image-registry/image-registry-66df7c8f76-6mm2l" Dec 10 15:29:42 crc kubenswrapper[4755]: I1210 15:29:42.191885 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-6mm2l\" (UID: \"437e8ac9-b322-4428-8faf-70ed78a0f981\") " pod="openshift-image-registry/image-registry-66df7c8f76-6mm2l" Dec 10 15:29:42 crc kubenswrapper[4755]: I1210 15:29:42.272764 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5ckt\" (UniqueName: \"kubernetes.io/projected/437e8ac9-b322-4428-8faf-70ed78a0f981-kube-api-access-x5ckt\") pod \"image-registry-66df7c8f76-6mm2l\" (UID: \"437e8ac9-b322-4428-8faf-70ed78a0f981\") " pod="openshift-image-registry/image-registry-66df7c8f76-6mm2l" Dec 10 15:29:42 crc kubenswrapper[4755]: I1210 15:29:42.272821 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/437e8ac9-b322-4428-8faf-70ed78a0f981-bound-sa-token\") pod \"image-registry-66df7c8f76-6mm2l\" (UID: \"437e8ac9-b322-4428-8faf-70ed78a0f981\") " pod="openshift-image-registry/image-registry-66df7c8f76-6mm2l" Dec 10 15:29:42 crc kubenswrapper[4755]: I1210 15:29:42.272845 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/437e8ac9-b322-4428-8faf-70ed78a0f981-trusted-ca\") pod \"image-registry-66df7c8f76-6mm2l\" (UID: \"437e8ac9-b322-4428-8faf-70ed78a0f981\") " pod="openshift-image-registry/image-registry-66df7c8f76-6mm2l" Dec 10 15:29:42 crc kubenswrapper[4755]: I1210 15:29:42.272869 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/437e8ac9-b322-4428-8faf-70ed78a0f981-registry-tls\") pod \"image-registry-66df7c8f76-6mm2l\" (UID: \"437e8ac9-b322-4428-8faf-70ed78a0f981\") " pod="openshift-image-registry/image-registry-66df7c8f76-6mm2l" Dec 10 15:29:42 crc kubenswrapper[4755]: I1210 15:29:42.272895 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/437e8ac9-b322-4428-8faf-70ed78a0f981-installation-pull-secrets\") pod \"image-registry-66df7c8f76-6mm2l\" (UID: \"437e8ac9-b322-4428-8faf-70ed78a0f981\") " pod="openshift-image-registry/image-registry-66df7c8f76-6mm2l" Dec 10 15:29:42 crc kubenswrapper[4755]: I1210 15:29:42.272915 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/437e8ac9-b322-4428-8faf-70ed78a0f981-registry-certificates\") pod \"image-registry-66df7c8f76-6mm2l\" (UID: \"437e8ac9-b322-4428-8faf-70ed78a0f981\") " pod="openshift-image-registry/image-registry-66df7c8f76-6mm2l" Dec 10 15:29:42 crc kubenswrapper[4755]: I1210 15:29:42.272941 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/437e8ac9-b322-4428-8faf-70ed78a0f981-ca-trust-extracted\") pod \"image-registry-66df7c8f76-6mm2l\" (UID: \"437e8ac9-b322-4428-8faf-70ed78a0f981\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-6mm2l" Dec 10 15:29:42 crc kubenswrapper[4755]: I1210 15:29:42.273407 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/437e8ac9-b322-4428-8faf-70ed78a0f981-ca-trust-extracted\") pod \"image-registry-66df7c8f76-6mm2l\" (UID: \"437e8ac9-b322-4428-8faf-70ed78a0f981\") " pod="openshift-image-registry/image-registry-66df7c8f76-6mm2l" Dec 10 15:29:42 crc kubenswrapper[4755]: I1210 15:29:42.274253 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/437e8ac9-b322-4428-8faf-70ed78a0f981-trusted-ca\") pod \"image-registry-66df7c8f76-6mm2l\" (UID: \"437e8ac9-b322-4428-8faf-70ed78a0f981\") " pod="openshift-image-registry/image-registry-66df7c8f76-6mm2l" Dec 10 15:29:42 crc kubenswrapper[4755]: I1210 15:29:42.274276 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/437e8ac9-b322-4428-8faf-70ed78a0f981-registry-certificates\") pod \"image-registry-66df7c8f76-6mm2l\" (UID: \"437e8ac9-b322-4428-8faf-70ed78a0f981\") " pod="openshift-image-registry/image-registry-66df7c8f76-6mm2l" Dec 10 15:29:42 crc kubenswrapper[4755]: I1210 15:29:42.277808 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/437e8ac9-b322-4428-8faf-70ed78a0f981-registry-tls\") pod \"image-registry-66df7c8f76-6mm2l\" (UID: \"437e8ac9-b322-4428-8faf-70ed78a0f981\") " pod="openshift-image-registry/image-registry-66df7c8f76-6mm2l" Dec 10 15:29:42 crc kubenswrapper[4755]: I1210 15:29:42.278404 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/437e8ac9-b322-4428-8faf-70ed78a0f981-installation-pull-secrets\") pod \"image-registry-66df7c8f76-6mm2l\" (UID: \"437e8ac9-b322-4428-8faf-70ed78a0f981\") " pod="openshift-image-registry/image-registry-66df7c8f76-6mm2l" Dec 10 15:29:42 crc kubenswrapper[4755]: I1210 15:29:42.304479 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/437e8ac9-b322-4428-8faf-70ed78a0f981-bound-sa-token\") pod \"image-registry-66df7c8f76-6mm2l\" (UID: \"437e8ac9-b322-4428-8faf-70ed78a0f981\") " pod="openshift-image-registry/image-registry-66df7c8f76-6mm2l" Dec 10 15:29:42 crc kubenswrapper[4755]: I1210 15:29:42.305379 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5ckt\" (UniqueName: \"kubernetes.io/projected/437e8ac9-b322-4428-8faf-70ed78a0f981-kube-api-access-x5ckt\") pod \"image-registry-66df7c8f76-6mm2l\" (UID: \"437e8ac9-b322-4428-8faf-70ed78a0f981\") " pod="openshift-image-registry/image-registry-66df7c8f76-6mm2l" Dec 10 15:29:42 crc kubenswrapper[4755]: I1210 15:29:42.363404 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-6mm2l" Dec 10 15:29:42 crc kubenswrapper[4755]: I1210 15:29:42.557291 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-6mm2l"] Dec 10 15:29:42 crc kubenswrapper[4755]: W1210 15:29:42.563008 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod437e8ac9_b322_4428_8faf_70ed78a0f981.slice/crio-3642fc19278645f157924735abfa5c99947742817ba26cfd5adb054e70849dd6 WatchSource:0}: Error finding container 3642fc19278645f157924735abfa5c99947742817ba26cfd5adb054e70849dd6: Status 404 returned error can't find the container with id 3642fc19278645f157924735abfa5c99947742817ba26cfd5adb054e70849dd6 Dec 10 15:29:42 crc kubenswrapper[4755]: I1210 15:29:42.701880 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-6mm2l" event={"ID":"437e8ac9-b322-4428-8faf-70ed78a0f981","Type":"ContainerStarted","Data":"feceea58a6478049ba4735ab58b70a28254cfff7f9414afa325dadb74c3b4da3"} Dec 10 15:29:42 crc kubenswrapper[4755]: I1210 15:29:42.701933 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-6mm2l" event={"ID":"437e8ac9-b322-4428-8faf-70ed78a0f981","Type":"ContainerStarted","Data":"3642fc19278645f157924735abfa5c99947742817ba26cfd5adb054e70849dd6"} Dec 10 15:29:42 crc kubenswrapper[4755]: I1210 15:29:42.701976 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-6mm2l" Dec 10 15:29:42 crc kubenswrapper[4755]: I1210 15:29:42.703909 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-865488587c-ls86h" event={"ID":"74ae1538-009a-4594-8b75-e3400c38ce6f","Type":"ContainerStarted","Data":"1abcdb9493b9a926e3fa5bf1eb7295b9913f8deb4a5817fa370cace8a33a2d95"} Dec 10 15:29:42 crc kubenswrapper[4755]: I1210 15:29:42.733347 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-6mm2l" podStartSLOduration=0.73332499 podStartE2EDuration="733.32499ms" podCreationTimestamp="2025-12-10 15:29:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:29:42.726994526 +0000 UTC m=+379.327878158" watchObservedRunningTime="2025-12-10 15:29:42.73332499 +0000 UTC m=+379.334208622" Dec 10 15:29:42 crc kubenswrapper[4755]: I1210 15:29:42.751763 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-s8l8s" Dec 10 15:29:42 crc kubenswrapper[4755]: I1210 15:29:42.765243 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-865488587c-ls86h" podStartSLOduration=5.765225267 podStartE2EDuration="5.765225267s" podCreationTimestamp="2025-12-10 15:29:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:29:42.743974069 +0000 UTC m=+379.344857711" watchObservedRunningTime="2025-12-10 15:29:42.765225267 +0000 UTC m=+379.366108899" Dec 10 15:29:42 crc kubenswrapper[4755]: I1210 15:29:42.765888 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-route-controller-manager/route-controller-manager-578f67dc67-mxvsj" podStartSLOduration=5.765880316 podStartE2EDuration="5.765880316s" podCreationTimestamp="2025-12-10 15:29:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:29:42.760777918 +0000 UTC m=+379.361661550" watchObservedRunningTime="2025-12-10 15:29:42.765880316 +0000 UTC m=+379.366763948" Dec 10 15:29:43 crc kubenswrapper[4755]: I1210 15:29:43.712219 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-578f67dc67-mxvsj" Dec 10 15:29:43 crc kubenswrapper[4755]: I1210 15:29:43.712607 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-865488587c-ls86h" Dec 10 15:29:43 crc kubenswrapper[4755]: I1210 15:29:43.717206 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-865488587c-ls86h" Dec 10 15:29:43 crc kubenswrapper[4755]: I1210 15:29:43.718098 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-578f67dc67-mxvsj" Dec 10 15:29:43 crc kubenswrapper[4755]: I1210 15:29:43.745682 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ltxsm" Dec 10 15:29:43 crc kubenswrapper[4755]: I1210 15:29:43.745772 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ltxsm" Dec 10 15:29:43 crc kubenswrapper[4755]: I1210 15:29:43.803841 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ltxsm" Dec 10 15:29:44 crc kubenswrapper[4755]: I1210 15:29:44.346942 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-sx998" Dec 10 15:29:44 crc kubenswrapper[4755]: I1210 15:29:44.347005 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-sx998" Dec 10 15:29:44 crc kubenswrapper[4755]: I1210 15:29:44.391508 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-sx998" Dec 10 15:29:44 crc kubenswrapper[4755]: I1210 15:29:44.758891 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ltxsm" Dec 10 15:29:44 crc kubenswrapper[4755]: I1210 15:29:44.763850 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-sx998" Dec 10 15:30:00 crc kubenswrapper[4755]: I1210 15:30:00.172539 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29423010-xw2jd"] Dec 10 15:30:00 crc kubenswrapper[4755]: I1210 15:30:00.174020 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29423010-xw2jd" Dec 10 15:30:00 crc kubenswrapper[4755]: I1210 15:30:00.176052 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 10 15:30:00 crc kubenswrapper[4755]: I1210 15:30:00.176333 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29423010-xw2jd"] Dec 10 15:30:00 crc kubenswrapper[4755]: I1210 15:30:00.176418 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 10 15:30:00 crc kubenswrapper[4755]: I1210 15:30:00.346223 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/17322d90-1142-418c-81fc-13cc5e7396a9-config-volume\") pod \"collect-profiles-29423010-xw2jd\" (UID: \"17322d90-1142-418c-81fc-13cc5e7396a9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423010-xw2jd" Dec 10 15:30:00 crc kubenswrapper[4755]: I1210 15:30:00.346302 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4jnx\" (UniqueName: \"kubernetes.io/projected/17322d90-1142-418c-81fc-13cc5e7396a9-kube-api-access-t4jnx\") pod \"collect-profiles-29423010-xw2jd\" (UID: \"17322d90-1142-418c-81fc-13cc5e7396a9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423010-xw2jd" Dec 10 15:30:00 crc kubenswrapper[4755]: I1210 15:30:00.346340 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/17322d90-1142-418c-81fc-13cc5e7396a9-secret-volume\") pod \"collect-profiles-29423010-xw2jd\" (UID: \"17322d90-1142-418c-81fc-13cc5e7396a9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423010-xw2jd" Dec 10 15:30:00 crc kubenswrapper[4755]: I1210 15:30:00.447754 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/17322d90-1142-418c-81fc-13cc5e7396a9-config-volume\") pod \"collect-profiles-29423010-xw2jd\" (UID: \"17322d90-1142-418c-81fc-13cc5e7396a9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423010-xw2jd" Dec 10 15:30:00 crc kubenswrapper[4755]: I1210 15:30:00.447830 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4jnx\" (UniqueName: \"kubernetes.io/projected/17322d90-1142-418c-81fc-13cc5e7396a9-kube-api-access-t4jnx\") pod \"collect-profiles-29423010-xw2jd\" (UID: \"17322d90-1142-418c-81fc-13cc5e7396a9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423010-xw2jd" Dec 10 15:30:00 crc kubenswrapper[4755]: I1210 15:30:00.447862 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/17322d90-1142-418c-81fc-13cc5e7396a9-secret-volume\") pod \"collect-profiles-29423010-xw2jd\" (UID: \"17322d90-1142-418c-81fc-13cc5e7396a9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423010-xw2jd" Dec 10 15:30:00 crc kubenswrapper[4755]: I1210 15:30:00.448921 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/17322d90-1142-418c-81fc-13cc5e7396a9-config-volume\") pod 
\"collect-profiles-29423010-xw2jd\" (UID: \"17322d90-1142-418c-81fc-13cc5e7396a9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423010-xw2jd" Dec 10 15:30:00 crc kubenswrapper[4755]: I1210 15:30:00.454932 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/17322d90-1142-418c-81fc-13cc5e7396a9-secret-volume\") pod \"collect-profiles-29423010-xw2jd\" (UID: \"17322d90-1142-418c-81fc-13cc5e7396a9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423010-xw2jd" Dec 10 15:30:00 crc kubenswrapper[4755]: I1210 15:30:00.473353 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4jnx\" (UniqueName: \"kubernetes.io/projected/17322d90-1142-418c-81fc-13cc5e7396a9-kube-api-access-t4jnx\") pod \"collect-profiles-29423010-xw2jd\" (UID: \"17322d90-1142-418c-81fc-13cc5e7396a9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423010-xw2jd" Dec 10 15:30:00 crc kubenswrapper[4755]: I1210 15:30:00.492109 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29423010-xw2jd" Dec 10 15:30:00 crc kubenswrapper[4755]: I1210 15:30:00.909688 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29423010-xw2jd"] Dec 10 15:30:01 crc kubenswrapper[4755]: I1210 15:30:01.811210 4755 generic.go:334] "Generic (PLEG): container finished" podID="17322d90-1142-418c-81fc-13cc5e7396a9" containerID="0082b52cec7eab4ea15c0fa209c2eaed0fa17f49da5d3abc46764effa2ba9b7a" exitCode=0 Dec 10 15:30:01 crc kubenswrapper[4755]: I1210 15:30:01.811449 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29423010-xw2jd" event={"ID":"17322d90-1142-418c-81fc-13cc5e7396a9","Type":"ContainerDied","Data":"0082b52cec7eab4ea15c0fa209c2eaed0fa17f49da5d3abc46764effa2ba9b7a"} Dec 10 15:30:01 crc kubenswrapper[4755]: I1210 15:30:01.811844 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29423010-xw2jd" event={"ID":"17322d90-1142-418c-81fc-13cc5e7396a9","Type":"ContainerStarted","Data":"e7741b1a5cc528b456056de76b3c8e1d3e249466670546866459b4b57430cad5"} Dec 10 15:30:02 crc kubenswrapper[4755]: I1210 15:30:02.368899 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-6mm2l" Dec 10 15:30:02 crc kubenswrapper[4755]: I1210 15:30:02.458462 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-mqq47"] Dec 10 15:30:03 crc kubenswrapper[4755]: I1210 15:30:03.155445 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29423010-xw2jd" Dec 10 15:30:03 crc kubenswrapper[4755]: I1210 15:30:03.284255 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/17322d90-1142-418c-81fc-13cc5e7396a9-config-volume\") pod \"17322d90-1142-418c-81fc-13cc5e7396a9\" (UID: \"17322d90-1142-418c-81fc-13cc5e7396a9\") " Dec 10 15:30:03 crc kubenswrapper[4755]: I1210 15:30:03.284315 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/17322d90-1142-418c-81fc-13cc5e7396a9-secret-volume\") pod \"17322d90-1142-418c-81fc-13cc5e7396a9\" (UID: \"17322d90-1142-418c-81fc-13cc5e7396a9\") " Dec 10 15:30:03 crc kubenswrapper[4755]: I1210 15:30:03.284364 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4jnx\" (UniqueName: \"kubernetes.io/projected/17322d90-1142-418c-81fc-13cc5e7396a9-kube-api-access-t4jnx\") pod \"17322d90-1142-418c-81fc-13cc5e7396a9\" (UID: \"17322d90-1142-418c-81fc-13cc5e7396a9\") " Dec 10 15:30:03 crc kubenswrapper[4755]: I1210 15:30:03.285812 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17322d90-1142-418c-81fc-13cc5e7396a9-config-volume" (OuterVolumeSpecName: "config-volume") pod "17322d90-1142-418c-81fc-13cc5e7396a9" (UID: "17322d90-1142-418c-81fc-13cc5e7396a9"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:30:03 crc kubenswrapper[4755]: I1210 15:30:03.291018 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17322d90-1142-418c-81fc-13cc5e7396a9-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "17322d90-1142-418c-81fc-13cc5e7396a9" (UID: "17322d90-1142-418c-81fc-13cc5e7396a9"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:30:03 crc kubenswrapper[4755]: I1210 15:30:03.291266 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17322d90-1142-418c-81fc-13cc5e7396a9-kube-api-access-t4jnx" (OuterVolumeSpecName: "kube-api-access-t4jnx") pod "17322d90-1142-418c-81fc-13cc5e7396a9" (UID: "17322d90-1142-418c-81fc-13cc5e7396a9"). InnerVolumeSpecName "kube-api-access-t4jnx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:30:03 crc kubenswrapper[4755]: I1210 15:30:03.386073 4755 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/17322d90-1142-418c-81fc-13cc5e7396a9-config-volume\") on node \"crc\" DevicePath \"\"" Dec 10 15:30:03 crc kubenswrapper[4755]: I1210 15:30:03.386114 4755 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/17322d90-1142-418c-81fc-13cc5e7396a9-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 10 15:30:03 crc kubenswrapper[4755]: I1210 15:30:03.386127 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4jnx\" (UniqueName: \"kubernetes.io/projected/17322d90-1142-418c-81fc-13cc5e7396a9-kube-api-access-t4jnx\") on node \"crc\" DevicePath \"\"" Dec 10 15:30:03 crc kubenswrapper[4755]: I1210 15:30:03.858461 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29423010-xw2jd" event={"ID":"17322d90-1142-418c-81fc-13cc5e7396a9","Type":"ContainerDied","Data":"e7741b1a5cc528b456056de76b3c8e1d3e249466670546866459b4b57430cad5"} Dec 10 15:30:03 crc kubenswrapper[4755]: I1210 15:30:03.858706 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7741b1a5cc528b456056de76b3c8e1d3e249466670546866459b4b57430cad5" Dec 10 15:30:03 crc kubenswrapper[4755]: I1210 15:30:03.858552 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29423010-xw2jd" Dec 10 15:30:10 crc kubenswrapper[4755]: I1210 15:30:10.359365 4755 patch_prober.go:28] interesting pod/machine-config-daemon-ggt8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:30:10 crc kubenswrapper[4755]: I1210 15:30:10.359844 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:30:27 crc kubenswrapper[4755]: I1210 15:30:27.511756 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-mqq47" podUID="1189a5c2-6e43-4e4b-8181-d2bd78031673" containerName="registry" containerID="cri-o://0f4e5839a9d0857d1dbeb93bf6c7272db758ce854ae2d68e86fe9feb32381b28" gracePeriod=30 Dec 10 15:30:27 crc kubenswrapper[4755]: I1210 15:30:27.939872 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-mqq47" Dec 10 15:30:27 crc kubenswrapper[4755]: I1210 15:30:27.986988 4755 generic.go:334] "Generic (PLEG): container finished" podID="1189a5c2-6e43-4e4b-8181-d2bd78031673" containerID="0f4e5839a9d0857d1dbeb93bf6c7272db758ce854ae2d68e86fe9feb32381b28" exitCode=0 Dec 10 15:30:27 crc kubenswrapper[4755]: I1210 15:30:27.987036 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-mqq47" event={"ID":"1189a5c2-6e43-4e4b-8181-d2bd78031673","Type":"ContainerDied","Data":"0f4e5839a9d0857d1dbeb93bf6c7272db758ce854ae2d68e86fe9feb32381b28"} Dec 10 15:30:27 crc kubenswrapper[4755]: I1210 15:30:27.987068 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-mqq47" event={"ID":"1189a5c2-6e43-4e4b-8181-d2bd78031673","Type":"ContainerDied","Data":"fb834714129853b62b5ac115923507eecd20ae33bd8c07b59e3ee2c77832d659"} Dec 10 15:30:27 crc kubenswrapper[4755]: I1210 15:30:27.987087 4755 scope.go:117] "RemoveContainer" containerID="0f4e5839a9d0857d1dbeb93bf6c7272db758ce854ae2d68e86fe9feb32381b28" Dec 10 15:30:27 crc kubenswrapper[4755]: I1210 15:30:27.987220 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-mqq47" Dec 10 15:30:28 crc kubenswrapper[4755]: I1210 15:30:28.003336 4755 scope.go:117] "RemoveContainer" containerID="0f4e5839a9d0857d1dbeb93bf6c7272db758ce854ae2d68e86fe9feb32381b28" Dec 10 15:30:28 crc kubenswrapper[4755]: E1210 15:30:28.004007 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f4e5839a9d0857d1dbeb93bf6c7272db758ce854ae2d68e86fe9feb32381b28\": container with ID starting with 0f4e5839a9d0857d1dbeb93bf6c7272db758ce854ae2d68e86fe9feb32381b28 not found: ID does not exist" containerID="0f4e5839a9d0857d1dbeb93bf6c7272db758ce854ae2d68e86fe9feb32381b28" Dec 10 15:30:28 crc kubenswrapper[4755]: I1210 15:30:28.004048 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f4e5839a9d0857d1dbeb93bf6c7272db758ce854ae2d68e86fe9feb32381b28"} err="failed to get container status \"0f4e5839a9d0857d1dbeb93bf6c7272db758ce854ae2d68e86fe9feb32381b28\": rpc error: code = NotFound desc = could not find container \"0f4e5839a9d0857d1dbeb93bf6c7272db758ce854ae2d68e86fe9feb32381b28\": container with ID starting with 0f4e5839a9d0857d1dbeb93bf6c7272db758ce854ae2d68e86fe9feb32381b28 not found: ID does not exist" Dec 10 15:30:28 crc kubenswrapper[4755]: I1210 15:30:28.105900 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1189a5c2-6e43-4e4b-8181-d2bd78031673-installation-pull-secrets\") pod \"1189a5c2-6e43-4e4b-8181-d2bd78031673\" (UID: \"1189a5c2-6e43-4e4b-8181-d2bd78031673\") " Dec 10 15:30:28 crc kubenswrapper[4755]: I1210 15:30:28.105992 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1189a5c2-6e43-4e4b-8181-d2bd78031673-trusted-ca\") pod \"1189a5c2-6e43-4e4b-8181-d2bd78031673\" (UID: \"1189a5c2-6e43-4e4b-8181-d2bd78031673\") " Dec 10 15:30:28 crc kubenswrapper[4755]: I1210 15:30:28.106024 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/1189a5c2-6e43-4e4b-8181-d2bd78031673-ca-trust-extracted\") pod \"1189a5c2-6e43-4e4b-8181-d2bd78031673\" (UID: \"1189a5c2-6e43-4e4b-8181-d2bd78031673\") " Dec 10 15:30:28 crc kubenswrapper[4755]: I1210 15:30:28.106076 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1189a5c2-6e43-4e4b-8181-d2bd78031673-registry-tls\") pod \"1189a5c2-6e43-4e4b-8181-d2bd78031673\" (UID: \"1189a5c2-6e43-4e4b-8181-d2bd78031673\") " Dec 10 15:30:28 crc kubenswrapper[4755]: I1210 15:30:28.106994 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"1189a5c2-6e43-4e4b-8181-d2bd78031673\" (UID: \"1189a5c2-6e43-4e4b-8181-d2bd78031673\") " Dec 10 15:30:28 crc kubenswrapper[4755]: I1210 15:30:28.107074 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1189a5c2-6e43-4e4b-8181-d2bd78031673-bound-sa-token\") pod \"1189a5c2-6e43-4e4b-8181-d2bd78031673\" (UID: \"1189a5c2-6e43-4e4b-8181-d2bd78031673\") " Dec 10 15:30:28 crc kubenswrapper[4755]: I1210 15:30:28.107113 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ggbv\" (UniqueName: \"kubernetes.io/projected/1189a5c2-6e43-4e4b-8181-d2bd78031673-kube-api-access-4ggbv\") pod \"1189a5c2-6e43-4e4b-8181-d2bd78031673\" (UID: \"1189a5c2-6e43-4e4b-8181-d2bd78031673\") " Dec 10 15:30:28 crc kubenswrapper[4755]: I1210 15:30:28.107184 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1189a5c2-6e43-4e4b-8181-d2bd78031673-registry-certificates\") pod \"1189a5c2-6e43-4e4b-8181-d2bd78031673\" (UID: \"1189a5c2-6e43-4e4b-8181-d2bd78031673\") " Dec 10 15:30:28 crc kubenswrapper[4755]: I1210 15:30:28.107439 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1189a5c2-6e43-4e4b-8181-d2bd78031673-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "1189a5c2-6e43-4e4b-8181-d2bd78031673" (UID: "1189a5c2-6e43-4e4b-8181-d2bd78031673"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:30:28 crc kubenswrapper[4755]: I1210 15:30:28.107574 4755 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1189a5c2-6e43-4e4b-8181-d2bd78031673-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 10 15:30:28 crc kubenswrapper[4755]: I1210 15:30:28.108201 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1189a5c2-6e43-4e4b-8181-d2bd78031673-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "1189a5c2-6e43-4e4b-8181-d2bd78031673" (UID: "1189a5c2-6e43-4e4b-8181-d2bd78031673"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:30:28 crc kubenswrapper[4755]: I1210 15:30:28.112887 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1189a5c2-6e43-4e4b-8181-d2bd78031673-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "1189a5c2-6e43-4e4b-8181-d2bd78031673" (UID: "1189a5c2-6e43-4e4b-8181-d2bd78031673"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:30:28 crc kubenswrapper[4755]: I1210 15:30:28.113343 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1189a5c2-6e43-4e4b-8181-d2bd78031673-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "1189a5c2-6e43-4e4b-8181-d2bd78031673" (UID: "1189a5c2-6e43-4e4b-8181-d2bd78031673"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:30:28 crc kubenswrapper[4755]: I1210 15:30:28.113628 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1189a5c2-6e43-4e4b-8181-d2bd78031673-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "1189a5c2-6e43-4e4b-8181-d2bd78031673" (UID: "1189a5c2-6e43-4e4b-8181-d2bd78031673"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:30:28 crc kubenswrapper[4755]: I1210 15:30:28.114142 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1189a5c2-6e43-4e4b-8181-d2bd78031673-kube-api-access-4ggbv" (OuterVolumeSpecName: "kube-api-access-4ggbv") pod "1189a5c2-6e43-4e4b-8181-d2bd78031673" (UID: "1189a5c2-6e43-4e4b-8181-d2bd78031673"). InnerVolumeSpecName "kube-api-access-4ggbv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:30:28 crc kubenswrapper[4755]: I1210 15:30:28.120857 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "1189a5c2-6e43-4e4b-8181-d2bd78031673" (UID: "1189a5c2-6e43-4e4b-8181-d2bd78031673"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 10 15:30:28 crc kubenswrapper[4755]: I1210 15:30:28.127323 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1189a5c2-6e43-4e4b-8181-d2bd78031673-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "1189a5c2-6e43-4e4b-8181-d2bd78031673" (UID: "1189a5c2-6e43-4e4b-8181-d2bd78031673"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:30:28 crc kubenswrapper[4755]: I1210 15:30:28.209433 4755 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1189a5c2-6e43-4e4b-8181-d2bd78031673-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 10 15:30:28 crc kubenswrapper[4755]: I1210 15:30:28.209526 4755 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1189a5c2-6e43-4e4b-8181-d2bd78031673-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 10 15:30:28 crc kubenswrapper[4755]: I1210 15:30:28.209547 4755 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1189a5c2-6e43-4e4b-8181-d2bd78031673-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 10 15:30:28 crc kubenswrapper[4755]: I1210 15:30:28.209565 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ggbv\" (UniqueName: \"kubernetes.io/projected/1189a5c2-6e43-4e4b-8181-d2bd78031673-kube-api-access-4ggbv\") on node \"crc\" DevicePath \"\"" Dec 10 15:30:28 crc kubenswrapper[4755]: I1210 15:30:28.209586 4755 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1189a5c2-6e43-4e4b-8181-d2bd78031673-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 10 15:30:28 crc kubenswrapper[4755]: I1210 15:30:28.209604 4755 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1189a5c2-6e43-4e4b-8181-d2bd78031673-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 10 15:30:28 crc kubenswrapper[4755]: I1210 15:30:28.334502 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-mqq47"] Dec 10 15:30:28 crc kubenswrapper[4755]: I1210 15:30:28.340417 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-mqq47"] Dec 10 15:30:29 crc kubenswrapper[4755]: I1210 15:30:29.766582 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1189a5c2-6e43-4e4b-8181-d2bd78031673" path="/var/lib/kubelet/pods/1189a5c2-6e43-4e4b-8181-d2bd78031673/volumes" Dec 10 15:30:40 crc kubenswrapper[4755]: I1210 15:30:40.359932 4755 patch_prober.go:28] interesting pod/machine-config-daemon-ggt8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:30:40 crc kubenswrapper[4755]: I1210 15:30:40.360725 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:30:40 crc kubenswrapper[4755]: I1210 15:30:40.360803 4755 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" Dec 10 15:30:40 crc kubenswrapper[4755]: I1210 15:30:40.361767 4755 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"f86ac3eae537ebb7a44f728c6faf4f748c2bb88ff37965117af600f929730d8f"} pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 10 15:30:40 crc kubenswrapper[4755]: I1210 15:30:40.361924 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" containerName="machine-config-daemon" containerID="cri-o://f86ac3eae537ebb7a44f728c6faf4f748c2bb88ff37965117af600f929730d8f" gracePeriod=600 Dec 10 15:30:41 crc kubenswrapper[4755]: I1210 15:30:41.060992 4755 generic.go:334] "Generic (PLEG): container finished" podID="b132a8b9-1c99-414d-8773-229bf36b305d" containerID="f86ac3eae537ebb7a44f728c6faf4f748c2bb88ff37965117af600f929730d8f" exitCode=0 Dec 10 15:30:41 crc kubenswrapper[4755]: I1210 15:30:41.061052 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" event={"ID":"b132a8b9-1c99-414d-8773-229bf36b305d","Type":"ContainerDied","Data":"f86ac3eae537ebb7a44f728c6faf4f748c2bb88ff37965117af600f929730d8f"} Dec 10 15:30:41 crc kubenswrapper[4755]: I1210 15:30:41.061318 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" event={"ID":"b132a8b9-1c99-414d-8773-229bf36b305d","Type":"ContainerStarted","Data":"e0512fff55aaaeeb22a338a748ccafc0fe3e36f21ae6e952762dc39e4ce559fe"} Dec 10 15:30:41 crc kubenswrapper[4755]: I1210 15:30:41.061342 4755 scope.go:117] "RemoveContainer" containerID="a40c4bdaa23a60a665b8f565720d79b68cac62d40246be94fc6cd314b1bb3656" Dec 10 15:32:40 crc kubenswrapper[4755]: I1210 15:32:40.358962 4755 patch_prober.go:28] interesting pod/machine-config-daemon-ggt8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:32:40 crc kubenswrapper[4755]: I1210 15:32:40.359568 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:33:10 crc kubenswrapper[4755]: I1210 15:33:10.359252 4755 patch_prober.go:28] interesting pod/machine-config-daemon-ggt8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:33:10 crc kubenswrapper[4755]: I1210 15:33:10.359759 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:33:36 crc kubenswrapper[4755]: I1210 15:33:36.497635 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dn67g"] Dec 10 15:33:36 crc kubenswrapper[4755]: E1210 15:33:36.498316 4755 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17322d90-1142-418c-81fc-13cc5e7396a9" containerName="collect-profiles" Dec 10 15:33:36 crc kubenswrapper[4755]: I1210 15:33:36.498329 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="17322d90-1142-418c-81fc-13cc5e7396a9" containerName="collect-profiles" Dec 10 15:33:36 crc kubenswrapper[4755]: E1210 15:33:36.498341 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1189a5c2-6e43-4e4b-8181-d2bd78031673" containerName="registry" Dec 10 15:33:36 crc kubenswrapper[4755]: I1210 15:33:36.498346 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="1189a5c2-6e43-4e4b-8181-d2bd78031673" containerName="registry" Dec 10 15:33:36 crc kubenswrapper[4755]: I1210 15:33:36.498437 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="1189a5c2-6e43-4e4b-8181-d2bd78031673" containerName="registry" Dec 10 15:33:36 crc kubenswrapper[4755]: I1210 15:33:36.498450 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="17322d90-1142-418c-81fc-13cc5e7396a9" containerName="collect-profiles" Dec 10 15:33:36 crc kubenswrapper[4755]: I1210 15:33:36.499163 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dn67g" Dec 10 15:33:36 crc kubenswrapper[4755]: I1210 15:33:36.502172 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 10 15:33:36 crc kubenswrapper[4755]: I1210 15:33:36.510362 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dn67g"] Dec 10 15:33:36 crc kubenswrapper[4755]: I1210 15:33:36.587116 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nn28l\" (UniqueName: \"kubernetes.io/projected/995e5079-efb3-40a6-b2bd-4fa4e6f040c1-kube-api-access-nn28l\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dn67g\" (UID: \"995e5079-efb3-40a6-b2bd-4fa4e6f040c1\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dn67g" Dec 10 15:33:36 crc kubenswrapper[4755]: I1210 15:33:36.587181 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/995e5079-efb3-40a6-b2bd-4fa4e6f040c1-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dn67g\" (UID: \"995e5079-efb3-40a6-b2bd-4fa4e6f040c1\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dn67g" Dec 10 15:33:36 crc kubenswrapper[4755]: I1210 15:33:36.587201 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/995e5079-efb3-40a6-b2bd-4fa4e6f040c1-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dn67g\" (UID: \"995e5079-efb3-40a6-b2bd-4fa4e6f040c1\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dn67g" Dec 10 15:33:36 crc kubenswrapper[4755]: I1210 15:33:36.687829 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/995e5079-efb3-40a6-b2bd-4fa4e6f040c1-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dn67g\" (UID: \"995e5079-efb3-40a6-b2bd-4fa4e6f040c1\") " 
pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dn67g" Dec 10 15:33:36 crc kubenswrapper[4755]: I1210 15:33:36.687883 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/995e5079-efb3-40a6-b2bd-4fa4e6f040c1-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dn67g\" (UID: \"995e5079-efb3-40a6-b2bd-4fa4e6f040c1\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dn67g" Dec 10 15:33:36 crc kubenswrapper[4755]: I1210 15:33:36.687945 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nn28l\" (UniqueName: \"kubernetes.io/projected/995e5079-efb3-40a6-b2bd-4fa4e6f040c1-kube-api-access-nn28l\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dn67g\" (UID: \"995e5079-efb3-40a6-b2bd-4fa4e6f040c1\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dn67g" Dec 10 15:33:36 crc kubenswrapper[4755]: I1210 15:33:36.688402 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/995e5079-efb3-40a6-b2bd-4fa4e6f040c1-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dn67g\" (UID: \"995e5079-efb3-40a6-b2bd-4fa4e6f040c1\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dn67g" Dec 10 15:33:36 crc kubenswrapper[4755]: I1210 15:33:36.688428 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/995e5079-efb3-40a6-b2bd-4fa4e6f040c1-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dn67g\" (UID: \"995e5079-efb3-40a6-b2bd-4fa4e6f040c1\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dn67g" Dec 10 15:33:36 crc kubenswrapper[4755]: I1210 15:33:36.704389 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nn28l\" (UniqueName: \"kubernetes.io/projected/995e5079-efb3-40a6-b2bd-4fa4e6f040c1-kube-api-access-nn28l\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dn67g\" (UID: \"995e5079-efb3-40a6-b2bd-4fa4e6f040c1\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dn67g" Dec 10 15:33:36 crc kubenswrapper[4755]: I1210 15:33:36.817324 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dn67g" Dec 10 15:33:37 crc kubenswrapper[4755]: I1210 15:33:37.013257 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dn67g"] Dec 10 15:33:37 crc kubenswrapper[4755]: I1210 15:33:37.051941 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dn67g" event={"ID":"995e5079-efb3-40a6-b2bd-4fa4e6f040c1","Type":"ContainerStarted","Data":"e8f90712af10923ebfd1f1c15feffc677a36db823bce2c5c69829d327456896f"} Dec 10 15:33:38 crc kubenswrapper[4755]: I1210 15:33:38.060047 4755 generic.go:334] "Generic (PLEG): container finished" podID="995e5079-efb3-40a6-b2bd-4fa4e6f040c1" containerID="7636d922dddee35ed9b389aa6c3fbdf03e3fcb10034643bc80f47a52fa1e319e" exitCode=0 Dec 10 15:33:38 crc kubenswrapper[4755]: I1210 15:33:38.060108 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dn67g" event={"ID":"995e5079-efb3-40a6-b2bd-4fa4e6f040c1","Type":"ContainerDied","Data":"7636d922dddee35ed9b389aa6c3fbdf03e3fcb10034643bc80f47a52fa1e319e"} Dec 10 15:33:38 crc kubenswrapper[4755]: I1210 15:33:38.063371 4755 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 10 15:33:40 crc kubenswrapper[4755]: I1210 15:33:40.070789 4755 generic.go:334] "Generic (PLEG): container finished" podID="995e5079-efb3-40a6-b2bd-4fa4e6f040c1" containerID="9abf1610f150fd66c116ce35a96a3e98239f419c57a3ea89b56e03ea508eb53f" exitCode=0 Dec 10 15:33:40 crc kubenswrapper[4755]: I1210 15:33:40.070861 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dn67g" event={"ID":"995e5079-efb3-40a6-b2bd-4fa4e6f040c1","Type":"ContainerDied","Data":"9abf1610f150fd66c116ce35a96a3e98239f419c57a3ea89b56e03ea508eb53f"} Dec 10 15:33:40 crc kubenswrapper[4755]: I1210 15:33:40.359655 4755 patch_prober.go:28] interesting pod/machine-config-daemon-ggt8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:33:40 crc kubenswrapper[4755]: I1210 15:33:40.359719 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:33:40 crc kubenswrapper[4755]: I1210 15:33:40.359765 4755 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" Dec 10 15:33:40 crc kubenswrapper[4755]: I1210 15:33:40.360390 4755 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e0512fff55aaaeeb22a338a748ccafc0fe3e36f21ae6e952762dc39e4ce559fe"} pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 10 15:33:40 crc kubenswrapper[4755]: I1210 15:33:40.360453 4755 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" containerName="machine-config-daemon" containerID="cri-o://e0512fff55aaaeeb22a338a748ccafc0fe3e36f21ae6e952762dc39e4ce559fe" gracePeriod=600 Dec 10 15:33:41 crc kubenswrapper[4755]: I1210 15:33:41.084538 4755 generic.go:334] "Generic (PLEG): container finished" podID="995e5079-efb3-40a6-b2bd-4fa4e6f040c1" containerID="9b573573b95d7069d33ea590a0b603fe36ce4a246ced8e6a3a606e1b32e91c85" exitCode=0 Dec 10 15:33:41 crc kubenswrapper[4755]: I1210 15:33:41.084534 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dn67g" event={"ID":"995e5079-efb3-40a6-b2bd-4fa4e6f040c1","Type":"ContainerDied","Data":"9b573573b95d7069d33ea590a0b603fe36ce4a246ced8e6a3a606e1b32e91c85"} Dec 10 15:33:41 crc kubenswrapper[4755]: I1210 15:33:41.089757 4755 generic.go:334] "Generic (PLEG): container finished" podID="b132a8b9-1c99-414d-8773-229bf36b305d" containerID="e0512fff55aaaeeb22a338a748ccafc0fe3e36f21ae6e952762dc39e4ce559fe" exitCode=0 Dec 10 15:33:41 crc kubenswrapper[4755]: I1210 15:33:41.090053 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" event={"ID":"b132a8b9-1c99-414d-8773-229bf36b305d","Type":"ContainerDied","Data":"e0512fff55aaaeeb22a338a748ccafc0fe3e36f21ae6e952762dc39e4ce559fe"} Dec 10 15:33:41 crc kubenswrapper[4755]: I1210 15:33:41.090149 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" event={"ID":"b132a8b9-1c99-414d-8773-229bf36b305d","Type":"ContainerStarted","Data":"a3bec46d814cc9fbc9935f1242adb126dce3912edb10a563b43df294190d9363"} Dec 10 15:33:41 crc kubenswrapper[4755]: I1210 15:33:41.090195 4755 scope.go:117] "RemoveContainer" containerID="f86ac3eae537ebb7a44f728c6faf4f748c2bb88ff37965117af600f929730d8f" Dec 10 15:33:42 crc kubenswrapper[4755]: I1210 15:33:42.370894 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dn67g" Dec 10 15:33:42 crc kubenswrapper[4755]: I1210 15:33:42.561819 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/995e5079-efb3-40a6-b2bd-4fa4e6f040c1-bundle\") pod \"995e5079-efb3-40a6-b2bd-4fa4e6f040c1\" (UID: \"995e5079-efb3-40a6-b2bd-4fa4e6f040c1\") " Dec 10 15:33:42 crc kubenswrapper[4755]: I1210 15:33:42.561928 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nn28l\" (UniqueName: \"kubernetes.io/projected/995e5079-efb3-40a6-b2bd-4fa4e6f040c1-kube-api-access-nn28l\") pod \"995e5079-efb3-40a6-b2bd-4fa4e6f040c1\" (UID: \"995e5079-efb3-40a6-b2bd-4fa4e6f040c1\") " Dec 10 15:33:42 crc kubenswrapper[4755]: I1210 15:33:42.561966 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/995e5079-efb3-40a6-b2bd-4fa4e6f040c1-util\") pod \"995e5079-efb3-40a6-b2bd-4fa4e6f040c1\" (UID: \"995e5079-efb3-40a6-b2bd-4fa4e6f040c1\") " Dec 10 15:33:42 crc kubenswrapper[4755]: I1210 15:33:42.564993 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/995e5079-efb3-40a6-b2bd-4fa4e6f040c1-bundle" (OuterVolumeSpecName: "bundle") pod "995e5079-efb3-40a6-b2bd-4fa4e6f040c1" (UID: "995e5079-efb3-40a6-b2bd-4fa4e6f040c1"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:33:42 crc kubenswrapper[4755]: I1210 15:33:42.568302 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/995e5079-efb3-40a6-b2bd-4fa4e6f040c1-kube-api-access-nn28l" (OuterVolumeSpecName: "kube-api-access-nn28l") pod "995e5079-efb3-40a6-b2bd-4fa4e6f040c1" (UID: "995e5079-efb3-40a6-b2bd-4fa4e6f040c1"). InnerVolumeSpecName "kube-api-access-nn28l". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:33:42 crc kubenswrapper[4755]: I1210 15:33:42.663785 4755 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/995e5079-efb3-40a6-b2bd-4fa4e6f040c1-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:33:42 crc kubenswrapper[4755]: I1210 15:33:42.663860 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nn28l\" (UniqueName: \"kubernetes.io/projected/995e5079-efb3-40a6-b2bd-4fa4e6f040c1-kube-api-access-nn28l\") on node \"crc\" DevicePath \"\"" Dec 10 15:33:42 crc kubenswrapper[4755]: I1210 15:33:42.848020 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/995e5079-efb3-40a6-b2bd-4fa4e6f040c1-util" (OuterVolumeSpecName: "util") pod "995e5079-efb3-40a6-b2bd-4fa4e6f040c1" (UID: "995e5079-efb3-40a6-b2bd-4fa4e6f040c1"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:33:42 crc kubenswrapper[4755]: I1210 15:33:42.866063 4755 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/995e5079-efb3-40a6-b2bd-4fa4e6f040c1-util\") on node \"crc\" DevicePath \"\"" Dec 10 15:33:43 crc kubenswrapper[4755]: I1210 15:33:43.105766 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dn67g" event={"ID":"995e5079-efb3-40a6-b2bd-4fa4e6f040c1","Type":"ContainerDied","Data":"e8f90712af10923ebfd1f1c15feffc677a36db823bce2c5c69829d327456896f"} Dec 10 15:33:43 crc kubenswrapper[4755]: I1210 15:33:43.105814 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8f90712af10923ebfd1f1c15feffc677a36db823bce2c5c69829d327456896f" Dec 10 15:33:43 crc kubenswrapper[4755]: I1210 15:33:43.105868 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dn67g" Dec 10 15:33:47 crc kubenswrapper[4755]: I1210 15:33:47.599322 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6lfvk"] Dec 10 15:33:47 crc kubenswrapper[4755]: I1210 15:33:47.600116 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" podUID="4b1da51a-99c9-4f8e-920d-ce0973af6370" containerName="ovn-controller" containerID="cri-o://335bcab3a79f09796e97560365e1211fb30ddf288f4773c05ab353197add4365" gracePeriod=30 Dec 10 15:33:47 crc kubenswrapper[4755]: I1210 15:33:47.600249 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" podUID="4b1da51a-99c9-4f8e-920d-ce0973af6370" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://0e547993b9f2fa37bf924f909c47b62eb0cc02b659596b1cad9bbc42fdde8f9d" gracePeriod=30 Dec 10 15:33:47 crc kubenswrapper[4755]: I1210 15:33:47.600249 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" podUID="4b1da51a-99c9-4f8e-920d-ce0973af6370" containerName="sbdb" containerID="cri-o://59bf59d1b7fbc365a916fbbceca7ae30b7ebc754b34f2f7a34c2e21e1e1d2166" gracePeriod=30 Dec 10 15:33:47 crc kubenswrapper[4755]: I1210 15:33:47.600292 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" podUID="4b1da51a-99c9-4f8e-920d-ce0973af6370" containerName="kube-rbac-proxy-node" containerID="cri-o://6eb065dc6c0cc8914cb95553eb2683d894fb9a4e78ce7fac73bcce8d7f6cced9" gracePeriod=30 Dec 10 15:33:47 crc kubenswrapper[4755]: I1210 15:33:47.600335 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" podUID="4b1da51a-99c9-4f8e-920d-ce0973af6370" containerName="ovn-acl-logging" containerID="cri-o://602b4e49987fa2cc6b54b822110aececbdddaf2bce8f27cce4ed906768d45791" gracePeriod=30 Dec 10 15:33:47 crc kubenswrapper[4755]: I1210 15:33:47.600243 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" podUID="4b1da51a-99c9-4f8e-920d-ce0973af6370" containerName="nbdb" containerID="cri-o://e9ba47683cc23d5b531a45f0658b6a9378650400b35b5372642b0430a5ac503f" gracePeriod=30 Dec 10 15:33:47 crc kubenswrapper[4755]: I1210 15:33:47.600375 
4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" podUID="4b1da51a-99c9-4f8e-920d-ce0973af6370" containerName="northd" containerID="cri-o://3a75407e83508af9adebb09c6466a966dd791d29f690c539656f9bd3396d7031" gracePeriod=30 Dec 10 15:33:47 crc kubenswrapper[4755]: I1210 15:33:47.635346 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" podUID="4b1da51a-99c9-4f8e-920d-ce0973af6370" containerName="ovnkube-controller" containerID="cri-o://decd94009593b6ceb6559ae2b8598a9f4fdd922a3c94226d5086a7a25cc40280" gracePeriod=30 Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.132654 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6lfvk_4b1da51a-99c9-4f8e-920d-ce0973af6370/ovnkube-controller/3.log" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.136914 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6lfvk_4b1da51a-99c9-4f8e-920d-ce0973af6370/ovn-acl-logging/0.log" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.137514 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6lfvk_4b1da51a-99c9-4f8e-920d-ce0973af6370/ovn-controller/0.log" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.137892 4755 generic.go:334] "Generic (PLEG): container finished" podID="4b1da51a-99c9-4f8e-920d-ce0973af6370" containerID="decd94009593b6ceb6559ae2b8598a9f4fdd922a3c94226d5086a7a25cc40280" exitCode=0 Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.137916 4755 generic.go:334] "Generic (PLEG): container finished" podID="4b1da51a-99c9-4f8e-920d-ce0973af6370" containerID="59bf59d1b7fbc365a916fbbceca7ae30b7ebc754b34f2f7a34c2e21e1e1d2166" exitCode=0 Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.137924 4755 generic.go:334] "Generic (PLEG): container finished" podID="4b1da51a-99c9-4f8e-920d-ce0973af6370" containerID="e9ba47683cc23d5b531a45f0658b6a9378650400b35b5372642b0430a5ac503f" exitCode=0 Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.137931 4755 generic.go:334] "Generic (PLEG): container finished" podID="4b1da51a-99c9-4f8e-920d-ce0973af6370" containerID="3a75407e83508af9adebb09c6466a966dd791d29f690c539656f9bd3396d7031" exitCode=0 Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.137939 4755 generic.go:334] "Generic (PLEG): container finished" podID="4b1da51a-99c9-4f8e-920d-ce0973af6370" containerID="0e547993b9f2fa37bf924f909c47b62eb0cc02b659596b1cad9bbc42fdde8f9d" exitCode=0 Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.137948 4755 generic.go:334] "Generic (PLEG): container finished" podID="4b1da51a-99c9-4f8e-920d-ce0973af6370" containerID="6eb065dc6c0cc8914cb95553eb2683d894fb9a4e78ce7fac73bcce8d7f6cced9" exitCode=0 Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.137954 4755 generic.go:334] "Generic (PLEG): container finished" podID="4b1da51a-99c9-4f8e-920d-ce0973af6370" containerID="602b4e49987fa2cc6b54b822110aececbdddaf2bce8f27cce4ed906768d45791" exitCode=143 Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.137964 4755 generic.go:334] "Generic (PLEG): container finished" podID="4b1da51a-99c9-4f8e-920d-ce0973af6370" containerID="335bcab3a79f09796e97560365e1211fb30ddf288f4773c05ab353197add4365" exitCode=143 Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.137987 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" event={"ID":"4b1da51a-99c9-4f8e-920d-ce0973af6370","Type":"ContainerDied","Data":"decd94009593b6ceb6559ae2b8598a9f4fdd922a3c94226d5086a7a25cc40280"} Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.138075 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" event={"ID":"4b1da51a-99c9-4f8e-920d-ce0973af6370","Type":"ContainerDied","Data":"59bf59d1b7fbc365a916fbbceca7ae30b7ebc754b34f2f7a34c2e21e1e1d2166"} Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.138093 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" event={"ID":"4b1da51a-99c9-4f8e-920d-ce0973af6370","Type":"ContainerDied","Data":"e9ba47683cc23d5b531a45f0658b6a9378650400b35b5372642b0430a5ac503f"} Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.138110 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" event={"ID":"4b1da51a-99c9-4f8e-920d-ce0973af6370","Type":"ContainerDied","Data":"3a75407e83508af9adebb09c6466a966dd791d29f690c539656f9bd3396d7031"} Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.138122 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" event={"ID":"4b1da51a-99c9-4f8e-920d-ce0973af6370","Type":"ContainerDied","Data":"0e547993b9f2fa37bf924f909c47b62eb0cc02b659596b1cad9bbc42fdde8f9d"} Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.138133 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" event={"ID":"4b1da51a-99c9-4f8e-920d-ce0973af6370","Type":"ContainerDied","Data":"6eb065dc6c0cc8914cb95553eb2683d894fb9a4e78ce7fac73bcce8d7f6cced9"} Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.138145 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" event={"ID":"4b1da51a-99c9-4f8e-920d-ce0973af6370","Type":"ContainerDied","Data":"602b4e49987fa2cc6b54b822110aececbdddaf2bce8f27cce4ed906768d45791"} Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.138159 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" event={"ID":"4b1da51a-99c9-4f8e-920d-ce0973af6370","Type":"ContainerDied","Data":"335bcab3a79f09796e97560365e1211fb30ddf288f4773c05ab353197add4365"} Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.138164 4755 scope.go:117] "RemoveContainer" containerID="0386a60f9d2d9c0cec943720b300e0cd71348b81b74234f19f1c51d34142b089" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.145927 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zl2tx_796da6d5-6ccd-4786-a03e-9a8e47a55031/kube-multus/2.log" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.146356 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zl2tx_796da6d5-6ccd-4786-a03e-9a8e47a55031/kube-multus/1.log" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.146410 4755 generic.go:334] "Generic (PLEG): container finished" podID="796da6d5-6ccd-4786-a03e-9a8e47a55031" containerID="ddcd6ca2f982a307a418e96d428250bc2a8ea077d211a8856f484cd779d4fa36" exitCode=2 Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.146450 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zl2tx" 
event={"ID":"796da6d5-6ccd-4786-a03e-9a8e47a55031","Type":"ContainerDied","Data":"ddcd6ca2f982a307a418e96d428250bc2a8ea077d211a8856f484cd779d4fa36"} Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.147068 4755 scope.go:117] "RemoveContainer" containerID="ddcd6ca2f982a307a418e96d428250bc2a8ea077d211a8856f484cd779d4fa36" Dec 10 15:33:48 crc kubenswrapper[4755]: E1210 15:33:48.147238 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-zl2tx_openshift-multus(796da6d5-6ccd-4786-a03e-9a8e47a55031)\"" pod="openshift-multus/multus-zl2tx" podUID="796da6d5-6ccd-4786-a03e-9a8e47a55031" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.188322 4755 scope.go:117] "RemoveContainer" containerID="2e0f974f9ba614dcaef08cf7168b77eeee007dfe65cc4e32df9b8e45005ff4ed" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.396326 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6lfvk_4b1da51a-99c9-4f8e-920d-ce0973af6370/ovn-acl-logging/0.log" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.396746 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6lfvk_4b1da51a-99c9-4f8e-920d-ce0973af6370/ovn-controller/0.log" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.397088 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.464278 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-9q4ng"] Dec 10 15:33:48 crc kubenswrapper[4755]: E1210 15:33:48.464765 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b1da51a-99c9-4f8e-920d-ce0973af6370" containerName="ovnkube-controller" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.464778 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b1da51a-99c9-4f8e-920d-ce0973af6370" containerName="ovnkube-controller" Dec 10 15:33:48 crc kubenswrapper[4755]: E1210 15:33:48.464787 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b1da51a-99c9-4f8e-920d-ce0973af6370" containerName="ovnkube-controller" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.464794 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b1da51a-99c9-4f8e-920d-ce0973af6370" containerName="ovnkube-controller" Dec 10 15:33:48 crc kubenswrapper[4755]: E1210 15:33:48.464808 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b1da51a-99c9-4f8e-920d-ce0973af6370" containerName="ovn-controller" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.464817 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b1da51a-99c9-4f8e-920d-ce0973af6370" containerName="ovn-controller" Dec 10 15:33:48 crc kubenswrapper[4755]: E1210 15:33:48.464826 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b1da51a-99c9-4f8e-920d-ce0973af6370" containerName="ovnkube-controller" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.464835 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b1da51a-99c9-4f8e-920d-ce0973af6370" containerName="ovnkube-controller" Dec 10 15:33:48 crc kubenswrapper[4755]: E1210 15:33:48.464843 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b1da51a-99c9-4f8e-920d-ce0973af6370" 
containerName="kube-rbac-proxy-ovn-metrics" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.464849 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b1da51a-99c9-4f8e-920d-ce0973af6370" containerName="kube-rbac-proxy-ovn-metrics" Dec 10 15:33:48 crc kubenswrapper[4755]: E1210 15:33:48.464857 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="995e5079-efb3-40a6-b2bd-4fa4e6f040c1" containerName="extract" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.464862 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="995e5079-efb3-40a6-b2bd-4fa4e6f040c1" containerName="extract" Dec 10 15:33:48 crc kubenswrapper[4755]: E1210 15:33:48.464870 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b1da51a-99c9-4f8e-920d-ce0973af6370" containerName="ovnkube-controller" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.464876 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b1da51a-99c9-4f8e-920d-ce0973af6370" containerName="ovnkube-controller" Dec 10 15:33:48 crc kubenswrapper[4755]: E1210 15:33:48.464886 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b1da51a-99c9-4f8e-920d-ce0973af6370" containerName="ovn-acl-logging" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.464893 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b1da51a-99c9-4f8e-920d-ce0973af6370" containerName="ovn-acl-logging" Dec 10 15:33:48 crc kubenswrapper[4755]: E1210 15:33:48.464902 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="995e5079-efb3-40a6-b2bd-4fa4e6f040c1" containerName="util" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.464910 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="995e5079-efb3-40a6-b2bd-4fa4e6f040c1" containerName="util" Dec 10 15:33:48 crc kubenswrapper[4755]: E1210 15:33:48.464920 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b1da51a-99c9-4f8e-920d-ce0973af6370" containerName="kubecfg-setup" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.464927 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b1da51a-99c9-4f8e-920d-ce0973af6370" containerName="kubecfg-setup" Dec 10 15:33:48 crc kubenswrapper[4755]: E1210 15:33:48.464938 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="995e5079-efb3-40a6-b2bd-4fa4e6f040c1" containerName="pull" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.464945 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="995e5079-efb3-40a6-b2bd-4fa4e6f040c1" containerName="pull" Dec 10 15:33:48 crc kubenswrapper[4755]: E1210 15:33:48.464955 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b1da51a-99c9-4f8e-920d-ce0973af6370" containerName="sbdb" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.464961 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b1da51a-99c9-4f8e-920d-ce0973af6370" containerName="sbdb" Dec 10 15:33:48 crc kubenswrapper[4755]: E1210 15:33:48.464970 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b1da51a-99c9-4f8e-920d-ce0973af6370" containerName="kube-rbac-proxy-node" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.464976 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b1da51a-99c9-4f8e-920d-ce0973af6370" containerName="kube-rbac-proxy-node" Dec 10 15:33:48 crc kubenswrapper[4755]: E1210 15:33:48.464983 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b1da51a-99c9-4f8e-920d-ce0973af6370" containerName="northd" Dec 
10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.464990 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b1da51a-99c9-4f8e-920d-ce0973af6370" containerName="northd" Dec 10 15:33:48 crc kubenswrapper[4755]: E1210 15:33:48.464996 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b1da51a-99c9-4f8e-920d-ce0973af6370" containerName="nbdb" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.465002 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b1da51a-99c9-4f8e-920d-ce0973af6370" containerName="nbdb" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.465090 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b1da51a-99c9-4f8e-920d-ce0973af6370" containerName="ovnkube-controller" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.465099 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b1da51a-99c9-4f8e-920d-ce0973af6370" containerName="ovn-acl-logging" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.465107 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b1da51a-99c9-4f8e-920d-ce0973af6370" containerName="ovnkube-controller" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.465115 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b1da51a-99c9-4f8e-920d-ce0973af6370" containerName="ovnkube-controller" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.465121 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b1da51a-99c9-4f8e-920d-ce0973af6370" containerName="kube-rbac-proxy-ovn-metrics" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.465128 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b1da51a-99c9-4f8e-920d-ce0973af6370" containerName="ovn-controller" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.465135 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b1da51a-99c9-4f8e-920d-ce0973af6370" containerName="ovnkube-controller" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.465142 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b1da51a-99c9-4f8e-920d-ce0973af6370" containerName="sbdb" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.465151 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b1da51a-99c9-4f8e-920d-ce0973af6370" containerName="nbdb" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.465158 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b1da51a-99c9-4f8e-920d-ce0973af6370" containerName="kube-rbac-proxy-node" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.465167 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b1da51a-99c9-4f8e-920d-ce0973af6370" containerName="northd" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.465174 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="995e5079-efb3-40a6-b2bd-4fa4e6f040c1" containerName="extract" Dec 10 15:33:48 crc kubenswrapper[4755]: E1210 15:33:48.465274 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b1da51a-99c9-4f8e-920d-ce0973af6370" containerName="ovnkube-controller" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.465281 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b1da51a-99c9-4f8e-920d-ce0973af6370" containerName="ovnkube-controller" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.465365 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b1da51a-99c9-4f8e-920d-ce0973af6370" 
containerName="ovnkube-controller" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.466899 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9q4ng" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.481300 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4b1da51a-99c9-4f8e-920d-ce0973af6370-ovnkube-script-lib\") pod \"4b1da51a-99c9-4f8e-920d-ce0973af6370\" (UID: \"4b1da51a-99c9-4f8e-920d-ce0973af6370\") " Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.481340 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-run-openvswitch\") pod \"4b1da51a-99c9-4f8e-920d-ce0973af6370\" (UID: \"4b1da51a-99c9-4f8e-920d-ce0973af6370\") " Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.481364 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-host-var-lib-cni-networks-ovn-kubernetes\") pod \"4b1da51a-99c9-4f8e-920d-ce0973af6370\" (UID: \"4b1da51a-99c9-4f8e-920d-ce0973af6370\") " Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.481380 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-host-slash\") pod \"4b1da51a-99c9-4f8e-920d-ce0973af6370\" (UID: \"4b1da51a-99c9-4f8e-920d-ce0973af6370\") " Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.481402 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-systemd-units\") pod \"4b1da51a-99c9-4f8e-920d-ce0973af6370\" (UID: \"4b1da51a-99c9-4f8e-920d-ce0973af6370\") " Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.481420 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4b1da51a-99c9-4f8e-920d-ce0973af6370-ovn-node-metrics-cert\") pod \"4b1da51a-99c9-4f8e-920d-ce0973af6370\" (UID: \"4b1da51a-99c9-4f8e-920d-ce0973af6370\") " Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.481457 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-log-socket\") pod \"4b1da51a-99c9-4f8e-920d-ce0973af6370\" (UID: \"4b1da51a-99c9-4f8e-920d-ce0973af6370\") " Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.481501 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-host-kubelet\") pod \"4b1da51a-99c9-4f8e-920d-ce0973af6370\" (UID: \"4b1da51a-99c9-4f8e-920d-ce0973af6370\") " Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.481443 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "4b1da51a-99c9-4f8e-920d-ce0973af6370" (UID: "4b1da51a-99c9-4f8e-920d-ce0973af6370"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.481545 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "4b1da51a-99c9-4f8e-920d-ce0973af6370" (UID: "4b1da51a-99c9-4f8e-920d-ce0973af6370"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.481555 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "4b1da51a-99c9-4f8e-920d-ce0973af6370" (UID: "4b1da51a-99c9-4f8e-920d-ce0973af6370"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.481526 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-host-cni-bin\") pod \"4b1da51a-99c9-4f8e-920d-ce0973af6370\" (UID: \"4b1da51a-99c9-4f8e-920d-ce0973af6370\") " Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.481457 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "4b1da51a-99c9-4f8e-920d-ce0973af6370" (UID: "4b1da51a-99c9-4f8e-920d-ce0973af6370"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.481563 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "4b1da51a-99c9-4f8e-920d-ce0973af6370" (UID: "4b1da51a-99c9-4f8e-920d-ce0973af6370"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.481600 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-node-log\") pod \"4b1da51a-99c9-4f8e-920d-ce0973af6370\" (UID: \"4b1da51a-99c9-4f8e-920d-ce0973af6370\") " Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.481500 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-host-slash" (OuterVolumeSpecName: "host-slash") pod "4b1da51a-99c9-4f8e-920d-ce0973af6370" (UID: "4b1da51a-99c9-4f8e-920d-ce0973af6370"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.481515 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-log-socket" (OuterVolumeSpecName: "log-socket") pod "4b1da51a-99c9-4f8e-920d-ce0973af6370" (UID: "4b1da51a-99c9-4f8e-920d-ce0973af6370"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.481625 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-node-log" (OuterVolumeSpecName: "node-log") pod "4b1da51a-99c9-4f8e-920d-ce0973af6370" (UID: "4b1da51a-99c9-4f8e-920d-ce0973af6370"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.481638 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmtm2\" (UniqueName: \"kubernetes.io/projected/4b1da51a-99c9-4f8e-920d-ce0973af6370-kube-api-access-zmtm2\") pod \"4b1da51a-99c9-4f8e-920d-ce0973af6370\" (UID: \"4b1da51a-99c9-4f8e-920d-ce0973af6370\") " Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.481711 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4b1da51a-99c9-4f8e-920d-ce0973af6370-ovnkube-config\") pod \"4b1da51a-99c9-4f8e-920d-ce0973af6370\" (UID: \"4b1da51a-99c9-4f8e-920d-ce0973af6370\") " Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.481744 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-host-cni-netd\") pod \"4b1da51a-99c9-4f8e-920d-ce0973af6370\" (UID: \"4b1da51a-99c9-4f8e-920d-ce0973af6370\") " Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.481761 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-run-ovn\") pod \"4b1da51a-99c9-4f8e-920d-ce0973af6370\" (UID: \"4b1da51a-99c9-4f8e-920d-ce0973af6370\") " Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.481785 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-var-lib-openvswitch\") pod \"4b1da51a-99c9-4f8e-920d-ce0973af6370\" (UID: \"4b1da51a-99c9-4f8e-920d-ce0973af6370\") " Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.481790 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b1da51a-99c9-4f8e-920d-ce0973af6370-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "4b1da51a-99c9-4f8e-920d-ce0973af6370" (UID: "4b1da51a-99c9-4f8e-920d-ce0973af6370"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.481804 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4b1da51a-99c9-4f8e-920d-ce0973af6370-env-overrides\") pod \"4b1da51a-99c9-4f8e-920d-ce0973af6370\" (UID: \"4b1da51a-99c9-4f8e-920d-ce0973af6370\") " Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.481830 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "4b1da51a-99c9-4f8e-920d-ce0973af6370" (UID: "4b1da51a-99c9-4f8e-920d-ce0973af6370"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.481850 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-etc-openvswitch\") pod \"4b1da51a-99c9-4f8e-920d-ce0973af6370\" (UID: \"4b1da51a-99c9-4f8e-920d-ce0973af6370\") " Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.481882 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-host-run-ovn-kubernetes\") pod \"4b1da51a-99c9-4f8e-920d-ce0973af6370\" (UID: \"4b1da51a-99c9-4f8e-920d-ce0973af6370\") " Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.481896 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-host-run-netns\") pod \"4b1da51a-99c9-4f8e-920d-ce0973af6370\" (UID: \"4b1da51a-99c9-4f8e-920d-ce0973af6370\") " Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.481910 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-run-systemd\") pod \"4b1da51a-99c9-4f8e-920d-ce0973af6370\" (UID: \"4b1da51a-99c9-4f8e-920d-ce0973af6370\") " Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.482032 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6-etc-openvswitch\") pod \"ovnkube-node-9q4ng\" (UID: \"2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-9q4ng" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.482082 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6-host-cni-bin\") pod \"ovnkube-node-9q4ng\" (UID: \"2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-9q4ng" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.482105 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6-host-slash\") pod \"ovnkube-node-9q4ng\" (UID: \"2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-9q4ng" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.482114 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b1da51a-99c9-4f8e-920d-ce0973af6370-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "4b1da51a-99c9-4f8e-920d-ce0973af6370" (UID: "4b1da51a-99c9-4f8e-920d-ce0973af6370"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.482146 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6-run-ovn\") pod \"ovnkube-node-9q4ng\" (UID: \"2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-9q4ng" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.482183 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6-env-overrides\") pod \"ovnkube-node-9q4ng\" (UID: \"2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-9q4ng" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.482201 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6-host-kubelet\") pod \"ovnkube-node-9q4ng\" (UID: \"2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-9q4ng" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.482219 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6-ovn-node-metrics-cert\") pod \"ovnkube-node-9q4ng\" (UID: \"2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-9q4ng" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.482241 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6-ovnkube-config\") pod \"ovnkube-node-9q4ng\" (UID: \"2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-9q4ng" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.482292 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9q4ng\" (UID: \"2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-9q4ng" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.482317 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldn77\" (UniqueName: \"kubernetes.io/projected/2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6-kube-api-access-ldn77\") pod \"ovnkube-node-9q4ng\" (UID: \"2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-9q4ng" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.482342 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6-var-lib-openvswitch\") pod \"ovnkube-node-9q4ng\" (UID: \"2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-9q4ng" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.482369 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6-host-cni-netd\") pod \"ovnkube-node-9q4ng\" (UID: \"2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-9q4ng" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.482384 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6-log-socket\") pod \"ovnkube-node-9q4ng\" (UID: \"2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-9q4ng" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.482404 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6-run-openvswitch\") pod \"ovnkube-node-9q4ng\" (UID: \"2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-9q4ng" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.482424 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6-host-run-netns\") pod \"ovnkube-node-9q4ng\" (UID: \"2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-9q4ng" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.482146 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "4b1da51a-99c9-4f8e-920d-ce0973af6370" (UID: "4b1da51a-99c9-4f8e-920d-ce0973af6370"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.482164 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "4b1da51a-99c9-4f8e-920d-ce0973af6370" (UID: "4b1da51a-99c9-4f8e-920d-ce0973af6370"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.482391 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b1da51a-99c9-4f8e-920d-ce0973af6370-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "4b1da51a-99c9-4f8e-920d-ce0973af6370" (UID: "4b1da51a-99c9-4f8e-920d-ce0973af6370"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.482413 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "4b1da51a-99c9-4f8e-920d-ce0973af6370" (UID: "4b1da51a-99c9-4f8e-920d-ce0973af6370"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.482457 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "4b1da51a-99c9-4f8e-920d-ce0973af6370" (UID: "4b1da51a-99c9-4f8e-920d-ce0973af6370"). 
InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.482496 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "4b1da51a-99c9-4f8e-920d-ce0973af6370" (UID: "4b1da51a-99c9-4f8e-920d-ce0973af6370"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.482594 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6-systemd-units\") pod \"ovnkube-node-9q4ng\" (UID: \"2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-9q4ng" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.482692 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6-ovnkube-script-lib\") pod \"ovnkube-node-9q4ng\" (UID: \"2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-9q4ng" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.482738 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6-run-systemd\") pod \"ovnkube-node-9q4ng\" (UID: \"2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-9q4ng" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.482774 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6-node-log\") pod \"ovnkube-node-9q4ng\" (UID: \"2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-9q4ng" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.482871 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6-host-run-ovn-kubernetes\") pod \"ovnkube-node-9q4ng\" (UID: \"2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-9q4ng" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.482943 4755 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.482961 4755 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-host-run-netns\") on node \"crc\" DevicePath \"\"" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.482971 4755 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4b1da51a-99c9-4f8e-920d-ce0973af6370-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.482981 4755 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-run-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.482992 4755 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.483002 4755 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-host-slash\") on node \"crc\" DevicePath \"\"" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.483011 4755 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-systemd-units\") on node \"crc\" DevicePath \"\"" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.483021 4755 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-log-socket\") on node \"crc\" DevicePath \"\"" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.483029 4755 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-host-kubelet\") on node \"crc\" DevicePath \"\"" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.483038 4755 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-host-cni-bin\") on node \"crc\" DevicePath \"\"" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.483047 4755 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-node-log\") on node \"crc\" DevicePath \"\"" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.483056 4755 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4b1da51a-99c9-4f8e-920d-ce0973af6370-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.483064 4755 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-host-cni-netd\") on node \"crc\" DevicePath \"\"" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.483073 4755 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.483080 4755 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.483089 4755 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4b1da51a-99c9-4f8e-920d-ce0973af6370-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.483097 4755 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.500219 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b1da51a-99c9-4f8e-920d-ce0973af6370-kube-api-access-zmtm2" (OuterVolumeSpecName: "kube-api-access-zmtm2") pod "4b1da51a-99c9-4f8e-920d-ce0973af6370" (UID: "4b1da51a-99c9-4f8e-920d-ce0973af6370"). InnerVolumeSpecName "kube-api-access-zmtm2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.500698 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b1da51a-99c9-4f8e-920d-ce0973af6370-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "4b1da51a-99c9-4f8e-920d-ce0973af6370" (UID: "4b1da51a-99c9-4f8e-920d-ce0973af6370"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.509935 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "4b1da51a-99c9-4f8e-920d-ce0973af6370" (UID: "4b1da51a-99c9-4f8e-920d-ce0973af6370"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.584612 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6-ovnkube-script-lib\") pod \"ovnkube-node-9q4ng\" (UID: \"2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-9q4ng" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.584658 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6-run-systemd\") pod \"ovnkube-node-9q4ng\" (UID: \"2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-9q4ng" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.584683 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6-node-log\") pod \"ovnkube-node-9q4ng\" (UID: \"2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-9q4ng" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.584703 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6-host-run-ovn-kubernetes\") pod \"ovnkube-node-9q4ng\" (UID: \"2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-9q4ng" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.584721 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6-etc-openvswitch\") pod \"ovnkube-node-9q4ng\" (UID: \"2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-9q4ng" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.584739 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6-host-cni-bin\") pod \"ovnkube-node-9q4ng\" (UID: \"2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-9q4ng" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.584756 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6-host-slash\") pod \"ovnkube-node-9q4ng\" (UID: \"2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-9q4ng" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.584779 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6-run-ovn\") pod \"ovnkube-node-9q4ng\" (UID: \"2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-9q4ng" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.584803 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6-env-overrides\") pod \"ovnkube-node-9q4ng\" (UID: \"2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-9q4ng" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.584819 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6-host-kubelet\") pod \"ovnkube-node-9q4ng\" (UID: \"2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-9q4ng" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.584834 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6-ovn-node-metrics-cert\") pod \"ovnkube-node-9q4ng\" (UID: \"2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-9q4ng" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.584851 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6-ovnkube-config\") pod \"ovnkube-node-9q4ng\" (UID: \"2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-9q4ng" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.584881 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9q4ng\" (UID: \"2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-9q4ng" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.584901 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldn77\" (UniqueName: \"kubernetes.io/projected/2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6-kube-api-access-ldn77\") pod \"ovnkube-node-9q4ng\" (UID: \"2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-9q4ng" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.584919 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6-var-lib-openvswitch\") pod \"ovnkube-node-9q4ng\" (UID: \"2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-9q4ng" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.584937 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6-host-cni-netd\") pod \"ovnkube-node-9q4ng\" (UID: \"2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-9q4ng" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.584952 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6-log-socket\") pod \"ovnkube-node-9q4ng\" (UID: \"2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-9q4ng" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.584968 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6-run-openvswitch\") pod \"ovnkube-node-9q4ng\" (UID: \"2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-9q4ng" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.584984 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6-host-run-netns\") pod \"ovnkube-node-9q4ng\" (UID: \"2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-9q4ng" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.585005 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6-systemd-units\") pod \"ovnkube-node-9q4ng\" (UID: \"2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-9q4ng" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.585049 4755 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4b1da51a-99c9-4f8e-920d-ce0973af6370-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.585064 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmtm2\" (UniqueName: \"kubernetes.io/projected/4b1da51a-99c9-4f8e-920d-ce0973af6370-kube-api-access-zmtm2\") on node \"crc\" DevicePath \"\"" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.585075 4755 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4b1da51a-99c9-4f8e-920d-ce0973af6370-run-systemd\") on node \"crc\" DevicePath \"\"" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.585118 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6-systemd-units\") pod \"ovnkube-node-9q4ng\" (UID: \"2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-9q4ng" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.585923 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6-ovnkube-script-lib\") pod \"ovnkube-node-9q4ng\" (UID: \"2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-9q4ng" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.585975 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6-run-systemd\") pod \"ovnkube-node-9q4ng\" (UID: \"2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-9q4ng" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.586005 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6-node-log\") pod \"ovnkube-node-9q4ng\" (UID: \"2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-9q4ng" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.586035 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6-host-run-ovn-kubernetes\") pod \"ovnkube-node-9q4ng\" (UID: \"2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-9q4ng" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.586069 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6-etc-openvswitch\") pod \"ovnkube-node-9q4ng\" (UID: \"2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-9q4ng" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.586099 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6-host-cni-bin\") pod \"ovnkube-node-9q4ng\" (UID: \"2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-9q4ng" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.586128 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6-host-slash\") pod \"ovnkube-node-9q4ng\" (UID: \"2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-9q4ng" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.586155 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6-run-ovn\") pod \"ovnkube-node-9q4ng\" (UID: \"2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-9q4ng" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.586460 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6-env-overrides\") pod \"ovnkube-node-9q4ng\" (UID: \"2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-9q4ng" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.586524 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6-host-kubelet\") pod \"ovnkube-node-9q4ng\" (UID: \"2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-9q4ng" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.586950 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6-var-lib-openvswitch\") pod \"ovnkube-node-9q4ng\" (UID: \"2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-9q4ng" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.587033 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9q4ng\" (UID: \"2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-9q4ng" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.587053 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6-run-openvswitch\") pod \"ovnkube-node-9q4ng\" (UID: \"2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-9q4ng" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.587083 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6-log-socket\") pod \"ovnkube-node-9q4ng\" (UID: \"2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-9q4ng" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.587145 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6-host-run-netns\") pod \"ovnkube-node-9q4ng\" (UID: \"2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-9q4ng" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.587146 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6-host-cni-netd\") pod \"ovnkube-node-9q4ng\" (UID: \"2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-9q4ng" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.587638 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6-ovnkube-config\") pod \"ovnkube-node-9q4ng\" (UID: \"2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-9q4ng" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.589547 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6-ovn-node-metrics-cert\") pod \"ovnkube-node-9q4ng\" (UID: \"2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-9q4ng" Dec 10 15:33:48 crc kubenswrapper[4755]: I1210 15:33:48.608927 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldn77\" (UniqueName: \"kubernetes.io/projected/2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6-kube-api-access-ldn77\") pod \"ovnkube-node-9q4ng\" (UID: \"2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-9q4ng" Dec 10 15:33:48 crc kubenswrapper[4755]: 
I1210 15:33:48.780113 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9q4ng" Dec 10 15:33:49 crc kubenswrapper[4755]: I1210 15:33:49.153148 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zl2tx_796da6d5-6ccd-4786-a03e-9a8e47a55031/kube-multus/2.log" Dec 10 15:33:49 crc kubenswrapper[4755]: I1210 15:33:49.155133 4755 generic.go:334] "Generic (PLEG): container finished" podID="2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6" containerID="9dbee32031b294b2dfa8defedd0638b7324d3a0cd8dcbb49bc702074a68a5f00" exitCode=0 Dec 10 15:33:49 crc kubenswrapper[4755]: I1210 15:33:49.155180 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9q4ng" event={"ID":"2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6","Type":"ContainerDied","Data":"9dbee32031b294b2dfa8defedd0638b7324d3a0cd8dcbb49bc702074a68a5f00"} Dec 10 15:33:49 crc kubenswrapper[4755]: I1210 15:33:49.155232 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9q4ng" event={"ID":"2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6","Type":"ContainerStarted","Data":"6d1cd96d6eb951423395bac4660100da912d216b0c3c5d29c14aa830a42a18a2"} Dec 10 15:33:49 crc kubenswrapper[4755]: I1210 15:33:49.160106 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6lfvk_4b1da51a-99c9-4f8e-920d-ce0973af6370/ovn-acl-logging/0.log" Dec 10 15:33:49 crc kubenswrapper[4755]: I1210 15:33:49.160722 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6lfvk_4b1da51a-99c9-4f8e-920d-ce0973af6370/ovn-controller/0.log" Dec 10 15:33:49 crc kubenswrapper[4755]: I1210 15:33:49.161210 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" event={"ID":"4b1da51a-99c9-4f8e-920d-ce0973af6370","Type":"ContainerDied","Data":"f2515ff5ebf31c831fce05186e1650d702d16175753caf69db7cd998523f15f3"} Dec 10 15:33:49 crc kubenswrapper[4755]: I1210 15:33:49.161282 4755 scope.go:117] "RemoveContainer" containerID="decd94009593b6ceb6559ae2b8598a9f4fdd922a3c94226d5086a7a25cc40280" Dec 10 15:33:49 crc kubenswrapper[4755]: I1210 15:33:49.161286 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6lfvk" Dec 10 15:33:49 crc kubenswrapper[4755]: I1210 15:33:49.179828 4755 scope.go:117] "RemoveContainer" containerID="59bf59d1b7fbc365a916fbbceca7ae30b7ebc754b34f2f7a34c2e21e1e1d2166" Dec 10 15:33:49 crc kubenswrapper[4755]: I1210 15:33:49.221702 4755 scope.go:117] "RemoveContainer" containerID="e9ba47683cc23d5b531a45f0658b6a9378650400b35b5372642b0430a5ac503f" Dec 10 15:33:49 crc kubenswrapper[4755]: I1210 15:33:49.233727 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6lfvk"] Dec 10 15:33:49 crc kubenswrapper[4755]: I1210 15:33:49.239436 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6lfvk"] Dec 10 15:33:49 crc kubenswrapper[4755]: I1210 15:33:49.251413 4755 scope.go:117] "RemoveContainer" containerID="3a75407e83508af9adebb09c6466a966dd791d29f690c539656f9bd3396d7031" Dec 10 15:33:49 crc kubenswrapper[4755]: I1210 15:33:49.276740 4755 scope.go:117] "RemoveContainer" containerID="0e547993b9f2fa37bf924f909c47b62eb0cc02b659596b1cad9bbc42fdde8f9d" Dec 10 15:33:49 crc kubenswrapper[4755]: I1210 15:33:49.293317 4755 scope.go:117] "RemoveContainer" containerID="6eb065dc6c0cc8914cb95553eb2683d894fb9a4e78ce7fac73bcce8d7f6cced9" Dec 10 15:33:49 crc kubenswrapper[4755]: I1210 15:33:49.318625 4755 scope.go:117] "RemoveContainer" containerID="602b4e49987fa2cc6b54b822110aececbdddaf2bce8f27cce4ed906768d45791" Dec 10 15:33:49 crc kubenswrapper[4755]: I1210 15:33:49.349575 4755 scope.go:117] "RemoveContainer" containerID="335bcab3a79f09796e97560365e1211fb30ddf288f4773c05ab353197add4365" Dec 10 15:33:49 crc kubenswrapper[4755]: I1210 15:33:49.395660 4755 scope.go:117] "RemoveContainer" containerID="375efb27cf6bef06a6a0ffc8f6b2963e1b9ffbca18b3c8e7b402bc552a411f6f" Dec 10 15:33:49 crc kubenswrapper[4755]: I1210 15:33:49.767717 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b1da51a-99c9-4f8e-920d-ce0973af6370" path="/var/lib/kubelet/pods/4b1da51a-99c9-4f8e-920d-ce0973af6370/volumes" Dec 10 15:33:50 crc kubenswrapper[4755]: I1210 15:33:50.171962 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9q4ng" event={"ID":"2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6","Type":"ContainerStarted","Data":"338191b01cd7b4174d3e51f7748b4d18cbec21268c6507e647468d985c8fec46"} Dec 10 15:33:50 crc kubenswrapper[4755]: I1210 15:33:50.172301 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9q4ng" event={"ID":"2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6","Type":"ContainerStarted","Data":"9f1dc3b234cd2b3de7540fa576882f61eca43e338d80d2e1483fc592e0096911"} Dec 10 15:33:50 crc kubenswrapper[4755]: I1210 15:33:50.172316 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9q4ng" event={"ID":"2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6","Type":"ContainerStarted","Data":"83340ca3f87d11bfbb80525a8b80418aff1144a6cc5f9894e05a9425c4129749"} Dec 10 15:33:50 crc kubenswrapper[4755]: I1210 15:33:50.172327 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9q4ng" event={"ID":"2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6","Type":"ContainerStarted","Data":"d3a9e0c47031168896d5bcc67d4d714d3f178aaa46363fc66628de8ea97e308c"} Dec 10 15:33:50 crc kubenswrapper[4755]: I1210 15:33:50.172339 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9q4ng" 
event={"ID":"2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6","Type":"ContainerStarted","Data":"8bde52279e8a0b76ee69601de18cb530885a092e05674307c8a12a6714bc7a52"} Dec 10 15:33:50 crc kubenswrapper[4755]: I1210 15:33:50.172353 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9q4ng" event={"ID":"2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6","Type":"ContainerStarted","Data":"50520e0213eef5d4a7b322c8564e0581acb0073d4e49ef533f1b7928d69c5144"} Dec 10 15:33:53 crc kubenswrapper[4755]: I1210 15:33:53.192587 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9q4ng" event={"ID":"2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6","Type":"ContainerStarted","Data":"cfbf66f51c6dc3b48d7dc1fc9bdc9e0fc4043378ee18a1cb6c559d6438be37c1"} Dec 10 15:33:55 crc kubenswrapper[4755]: I1210 15:33:55.114747 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-2tlgr"] Dec 10 15:33:55 crc kubenswrapper[4755]: I1210 15:33:55.115699 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-2tlgr" Dec 10 15:33:55 crc kubenswrapper[4755]: I1210 15:33:55.118072 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Dec 10 15:33:55 crc kubenswrapper[4755]: I1210 15:33:55.118441 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-lhhxr" Dec 10 15:33:55 crc kubenswrapper[4755]: I1210 15:33:55.118487 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Dec 10 15:33:55 crc kubenswrapper[4755]: I1210 15:33:55.173425 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kk7rh\" (UniqueName: \"kubernetes.io/projected/a03bc2c9-3ec6-4ca5-9181-88e9cb9fe2be-kube-api-access-kk7rh\") pod \"obo-prometheus-operator-668cf9dfbb-2tlgr\" (UID: \"a03bc2c9-3ec6-4ca5-9181-88e9cb9fe2be\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-2tlgr" Dec 10 15:33:55 crc kubenswrapper[4755]: I1210 15:33:55.206953 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9q4ng" event={"ID":"2c1852ac-fa31-42f1-bf06-1ef1b5b0e9f6","Type":"ContainerStarted","Data":"53287614eddf281cb875ab1a06b8b4a56ba592d7838918bc7a8dfe790576dbf4"} Dec 10 15:33:55 crc kubenswrapper[4755]: I1210 15:33:55.207424 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9q4ng" Dec 10 15:33:55 crc kubenswrapper[4755]: I1210 15:33:55.207448 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9q4ng" Dec 10 15:33:55 crc kubenswrapper[4755]: I1210 15:33:55.207457 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9q4ng" Dec 10 15:33:55 crc kubenswrapper[4755]: I1210 15:33:55.258599 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9q4ng" Dec 10 15:33:55 crc kubenswrapper[4755]: I1210 15:33:55.260820 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-9q4ng" podStartSLOduration=7.260795796 podStartE2EDuration="7.260795796s" podCreationTimestamp="2025-12-10 15:33:48 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:33:55.257154495 +0000 UTC m=+631.858038147" watchObservedRunningTime="2025-12-10 15:33:55.260795796 +0000 UTC m=+631.861679428" Dec 10 15:33:55 crc kubenswrapper[4755]: I1210 15:33:55.270307 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-f66fc9f65-5j7mt"] Dec 10 15:33:55 crc kubenswrapper[4755]: I1210 15:33:55.271089 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-f66fc9f65-5j7mt" Dec 10 15:33:55 crc kubenswrapper[4755]: I1210 15:33:55.274133 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-v6vch" Dec 10 15:33:55 crc kubenswrapper[4755]: I1210 15:33:55.274429 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Dec 10 15:33:55 crc kubenswrapper[4755]: I1210 15:33:55.275212 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kk7rh\" (UniqueName: \"kubernetes.io/projected/a03bc2c9-3ec6-4ca5-9181-88e9cb9fe2be-kube-api-access-kk7rh\") pod \"obo-prometheus-operator-668cf9dfbb-2tlgr\" (UID: \"a03bc2c9-3ec6-4ca5-9181-88e9cb9fe2be\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-2tlgr" Dec 10 15:33:55 crc kubenswrapper[4755]: I1210 15:33:55.280205 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9q4ng" Dec 10 15:33:55 crc kubenswrapper[4755]: I1210 15:33:55.292606 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-f66fc9f65-4kjbh"] Dec 10 15:33:55 crc kubenswrapper[4755]: I1210 15:33:55.293227 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-f66fc9f65-4kjbh" Dec 10 15:33:55 crc kubenswrapper[4755]: I1210 15:33:55.299737 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kk7rh\" (UniqueName: \"kubernetes.io/projected/a03bc2c9-3ec6-4ca5-9181-88e9cb9fe2be-kube-api-access-kk7rh\") pod \"obo-prometheus-operator-668cf9dfbb-2tlgr\" (UID: \"a03bc2c9-3ec6-4ca5-9181-88e9cb9fe2be\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-2tlgr" Dec 10 15:33:55 crc kubenswrapper[4755]: I1210 15:33:55.376041 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/95d667d8-b323-4d59-84e6-ffaa553526c7-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-f66fc9f65-4kjbh\" (UID: \"95d667d8-b323-4d59-84e6-ffaa553526c7\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-f66fc9f65-4kjbh" Dec 10 15:33:55 crc kubenswrapper[4755]: I1210 15:33:55.376149 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2ace45ac-e8fa-4b58-b40c-c32fbc5a1c6e-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-f66fc9f65-5j7mt\" (UID: \"2ace45ac-e8fa-4b58-b40c-c32fbc5a1c6e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-f66fc9f65-5j7mt" Dec 10 15:33:55 crc kubenswrapper[4755]: I1210 15:33:55.376177 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/95d667d8-b323-4d59-84e6-ffaa553526c7-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-f66fc9f65-4kjbh\" (UID: \"95d667d8-b323-4d59-84e6-ffaa553526c7\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-f66fc9f65-4kjbh" Dec 10 15:33:55 crc kubenswrapper[4755]: I1210 15:33:55.376352 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2ace45ac-e8fa-4b58-b40c-c32fbc5a1c6e-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-f66fc9f65-5j7mt\" (UID: \"2ace45ac-e8fa-4b58-b40c-c32fbc5a1c6e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-f66fc9f65-5j7mt" Dec 10 15:33:55 crc kubenswrapper[4755]: I1210 15:33:55.430198 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-2tlgr" Dec 10 15:33:55 crc kubenswrapper[4755]: I1210 15:33:55.453278 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-57crm"] Dec 10 15:33:55 crc kubenswrapper[4755]: I1210 15:33:55.454317 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-57crm" Dec 10 15:33:55 crc kubenswrapper[4755]: I1210 15:33:55.460139 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-sc5fs" Dec 10 15:33:55 crc kubenswrapper[4755]: E1210 15:33:55.462936 4755 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-2tlgr_openshift-operators_a03bc2c9-3ec6-4ca5-9181-88e9cb9fe2be_0(b68998ba6e14fff9b3905ca0955f524f0df33d4c79efe8bba4562374980879c5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 10 15:33:55 crc kubenswrapper[4755]: E1210 15:33:55.463016 4755 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-2tlgr_openshift-operators_a03bc2c9-3ec6-4ca5-9181-88e9cb9fe2be_0(b68998ba6e14fff9b3905ca0955f524f0df33d4c79efe8bba4562374980879c5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-2tlgr" Dec 10 15:33:55 crc kubenswrapper[4755]: E1210 15:33:55.463043 4755 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-2tlgr_openshift-operators_a03bc2c9-3ec6-4ca5-9181-88e9cb9fe2be_0(b68998ba6e14fff9b3905ca0955f524f0df33d4c79efe8bba4562374980879c5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-2tlgr" Dec 10 15:33:55 crc kubenswrapper[4755]: E1210 15:33:55.463103 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-668cf9dfbb-2tlgr_openshift-operators(a03bc2c9-3ec6-4ca5-9181-88e9cb9fe2be)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-668cf9dfbb-2tlgr_openshift-operators(a03bc2c9-3ec6-4ca5-9181-88e9cb9fe2be)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-2tlgr_openshift-operators_a03bc2c9-3ec6-4ca5-9181-88e9cb9fe2be_0(b68998ba6e14fff9b3905ca0955f524f0df33d4c79efe8bba4562374980879c5): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-2tlgr" podUID="a03bc2c9-3ec6-4ca5-9181-88e9cb9fe2be" Dec 10 15:33:55 crc kubenswrapper[4755]: I1210 15:33:55.466198 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-2tlgr"] Dec 10 15:33:55 crc kubenswrapper[4755]: I1210 15:33:55.466822 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Dec 10 15:33:55 crc kubenswrapper[4755]: I1210 15:33:55.478074 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2ace45ac-e8fa-4b58-b40c-c32fbc5a1c6e-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-f66fc9f65-5j7mt\" (UID: \"2ace45ac-e8fa-4b58-b40c-c32fbc5a1c6e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-f66fc9f65-5j7mt" Dec 10 15:33:55 crc kubenswrapper[4755]: I1210 15:33:55.478173 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/95d667d8-b323-4d59-84e6-ffaa553526c7-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-f66fc9f65-4kjbh\" (UID: \"95d667d8-b323-4d59-84e6-ffaa553526c7\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-f66fc9f65-4kjbh" Dec 10 15:33:55 crc kubenswrapper[4755]: I1210 15:33:55.479014 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2ace45ac-e8fa-4b58-b40c-c32fbc5a1c6e-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-f66fc9f65-5j7mt\" (UID: \"2ace45ac-e8fa-4b58-b40c-c32fbc5a1c6e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-f66fc9f65-5j7mt" Dec 10 15:33:55 crc kubenswrapper[4755]: I1210 15:33:55.479104 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/95d667d8-b323-4d59-84e6-ffaa553526c7-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-f66fc9f65-4kjbh\" (UID: \"95d667d8-b323-4d59-84e6-ffaa553526c7\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-f66fc9f65-4kjbh" Dec 10 15:33:55 crc kubenswrapper[4755]: I1210 15:33:55.486055 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2ace45ac-e8fa-4b58-b40c-c32fbc5a1c6e-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-f66fc9f65-5j7mt\" (UID: \"2ace45ac-e8fa-4b58-b40c-c32fbc5a1c6e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-f66fc9f65-5j7mt" Dec 10 15:33:55 crc kubenswrapper[4755]: I1210 15:33:55.486142 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2ace45ac-e8fa-4b58-b40c-c32fbc5a1c6e-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-f66fc9f65-5j7mt\" (UID: \"2ace45ac-e8fa-4b58-b40c-c32fbc5a1c6e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-f66fc9f65-5j7mt" Dec 10 15:33:55 crc kubenswrapper[4755]: I1210 15:33:55.486499 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/95d667d8-b323-4d59-84e6-ffaa553526c7-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-f66fc9f65-4kjbh\" (UID: 
\"95d667d8-b323-4d59-84e6-ffaa553526c7\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-f66fc9f65-4kjbh" Dec 10 15:33:55 crc kubenswrapper[4755]: I1210 15:33:55.497554 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/95d667d8-b323-4d59-84e6-ffaa553526c7-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-f66fc9f65-4kjbh\" (UID: \"95d667d8-b323-4d59-84e6-ffaa553526c7\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-f66fc9f65-4kjbh" Dec 10 15:33:55 crc kubenswrapper[4755]: I1210 15:33:55.519822 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-f66fc9f65-4kjbh"] Dec 10 15:33:55 crc kubenswrapper[4755]: I1210 15:33:55.523560 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-f66fc9f65-5j7mt"] Dec 10 15:33:55 crc kubenswrapper[4755]: I1210 15:33:55.559758 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-57crm"] Dec 10 15:33:55 crc kubenswrapper[4755]: I1210 15:33:55.580163 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nz672\" (UniqueName: \"kubernetes.io/projected/aed5adbf-d512-4557-bb9a-474301586611-kube-api-access-nz672\") pod \"observability-operator-d8bb48f5d-57crm\" (UID: \"aed5adbf-d512-4557-bb9a-474301586611\") " pod="openshift-operators/observability-operator-d8bb48f5d-57crm" Dec 10 15:33:55 crc kubenswrapper[4755]: I1210 15:33:55.580222 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/aed5adbf-d512-4557-bb9a-474301586611-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-57crm\" (UID: \"aed5adbf-d512-4557-bb9a-474301586611\") " pod="openshift-operators/observability-operator-d8bb48f5d-57crm" Dec 10 15:33:55 crc kubenswrapper[4755]: I1210 15:33:55.599097 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-f66fc9f65-5j7mt" Dec 10 15:33:55 crc kubenswrapper[4755]: E1210 15:33:55.618817 4755 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-f66fc9f65-5j7mt_openshift-operators_2ace45ac-e8fa-4b58-b40c-c32fbc5a1c6e_0(4e90bec4ab319cc224cbffcdcad55f99a72571b4a0be156f5be1dba22cf30c2a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 10 15:33:55 crc kubenswrapper[4755]: E1210 15:33:55.618890 4755 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-f66fc9f65-5j7mt_openshift-operators_2ace45ac-e8fa-4b58-b40c-c32fbc5a1c6e_0(4e90bec4ab319cc224cbffcdcad55f99a72571b4a0be156f5be1dba22cf30c2a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-f66fc9f65-5j7mt" Dec 10 15:33:55 crc kubenswrapper[4755]: E1210 15:33:55.618919 4755 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-f66fc9f65-5j7mt_openshift-operators_2ace45ac-e8fa-4b58-b40c-c32fbc5a1c6e_0(4e90bec4ab319cc224cbffcdcad55f99a72571b4a0be156f5be1dba22cf30c2a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-f66fc9f65-5j7mt" Dec 10 15:33:55 crc kubenswrapper[4755]: E1210 15:33:55.618975 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-f66fc9f65-5j7mt_openshift-operators(2ace45ac-e8fa-4b58-b40c-c32fbc5a1c6e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-f66fc9f65-5j7mt_openshift-operators(2ace45ac-e8fa-4b58-b40c-c32fbc5a1c6e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-f66fc9f65-5j7mt_openshift-operators_2ace45ac-e8fa-4b58-b40c-c32fbc5a1c6e_0(4e90bec4ab319cc224cbffcdcad55f99a72571b4a0be156f5be1dba22cf30c2a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-f66fc9f65-5j7mt" podUID="2ace45ac-e8fa-4b58-b40c-c32fbc5a1c6e" Dec 10 15:33:55 crc kubenswrapper[4755]: I1210 15:33:55.634891 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-f66fc9f65-4kjbh" Dec 10 15:33:55 crc kubenswrapper[4755]: E1210 15:33:55.655894 4755 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-f66fc9f65-4kjbh_openshift-operators_95d667d8-b323-4d59-84e6-ffaa553526c7_0(a1747a4e3a1e081a00b3941b3631bdd0ea0ef76d71172831d55925aeccac316d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 10 15:33:55 crc kubenswrapper[4755]: E1210 15:33:55.655978 4755 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-f66fc9f65-4kjbh_openshift-operators_95d667d8-b323-4d59-84e6-ffaa553526c7_0(a1747a4e3a1e081a00b3941b3631bdd0ea0ef76d71172831d55925aeccac316d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-f66fc9f65-4kjbh" Dec 10 15:33:55 crc kubenswrapper[4755]: E1210 15:33:55.656006 4755 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-f66fc9f65-4kjbh_openshift-operators_95d667d8-b323-4d59-84e6-ffaa553526c7_0(a1747a4e3a1e081a00b3941b3631bdd0ea0ef76d71172831d55925aeccac316d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-f66fc9f65-4kjbh" Dec 10 15:33:55 crc kubenswrapper[4755]: E1210 15:33:55.656063 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-f66fc9f65-4kjbh_openshift-operators(95d667d8-b323-4d59-84e6-ffaa553526c7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-f66fc9f65-4kjbh_openshift-operators(95d667d8-b323-4d59-84e6-ffaa553526c7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-f66fc9f65-4kjbh_openshift-operators_95d667d8-b323-4d59-84e6-ffaa553526c7_0(a1747a4e3a1e081a00b3941b3631bdd0ea0ef76d71172831d55925aeccac316d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-f66fc9f65-4kjbh" podUID="95d667d8-b323-4d59-84e6-ffaa553526c7" Dec 10 15:33:55 crc kubenswrapper[4755]: I1210 15:33:55.665122 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5446b9c989-rwkrm"] Dec 10 15:33:55 crc kubenswrapper[4755]: I1210 15:33:55.666004 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-rwkrm" Dec 10 15:33:55 crc kubenswrapper[4755]: I1210 15:33:55.667625 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-g25v5" Dec 10 15:33:55 crc kubenswrapper[4755]: I1210 15:33:55.679819 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-rwkrm"] Dec 10 15:33:55 crc kubenswrapper[4755]: I1210 15:33:55.680955 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nz672\" (UniqueName: \"kubernetes.io/projected/aed5adbf-d512-4557-bb9a-474301586611-kube-api-access-nz672\") pod \"observability-operator-d8bb48f5d-57crm\" (UID: \"aed5adbf-d512-4557-bb9a-474301586611\") " pod="openshift-operators/observability-operator-d8bb48f5d-57crm" Dec 10 15:33:55 crc kubenswrapper[4755]: I1210 15:33:55.681011 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/aed5adbf-d512-4557-bb9a-474301586611-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-57crm\" (UID: \"aed5adbf-d512-4557-bb9a-474301586611\") " pod="openshift-operators/observability-operator-d8bb48f5d-57crm" Dec 10 15:33:55 crc kubenswrapper[4755]: I1210 15:33:55.685359 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/aed5adbf-d512-4557-bb9a-474301586611-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-57crm\" (UID: \"aed5adbf-d512-4557-bb9a-474301586611\") " pod="openshift-operators/observability-operator-d8bb48f5d-57crm" Dec 10 15:33:55 crc kubenswrapper[4755]: I1210 15:33:55.704138 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nz672\" (UniqueName: \"kubernetes.io/projected/aed5adbf-d512-4557-bb9a-474301586611-kube-api-access-nz672\") pod \"observability-operator-d8bb48f5d-57crm\" (UID: \"aed5adbf-d512-4557-bb9a-474301586611\") " pod="openshift-operators/observability-operator-d8bb48f5d-57crm" Dec 10 15:33:55 crc kubenswrapper[4755]: I1210 
15:33:55.782420 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqkjh\" (UniqueName: \"kubernetes.io/projected/5dad8e56-f3d6-4d95-bd98-96b2f7ae1d6e-kube-api-access-nqkjh\") pod \"perses-operator-5446b9c989-rwkrm\" (UID: \"5dad8e56-f3d6-4d95-bd98-96b2f7ae1d6e\") " pod="openshift-operators/perses-operator-5446b9c989-rwkrm" Dec 10 15:33:55 crc kubenswrapper[4755]: I1210 15:33:55.782517 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/5dad8e56-f3d6-4d95-bd98-96b2f7ae1d6e-openshift-service-ca\") pod \"perses-operator-5446b9c989-rwkrm\" (UID: \"5dad8e56-f3d6-4d95-bd98-96b2f7ae1d6e\") " pod="openshift-operators/perses-operator-5446b9c989-rwkrm" Dec 10 15:33:55 crc kubenswrapper[4755]: I1210 15:33:55.844147 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-57crm" Dec 10 15:33:55 crc kubenswrapper[4755]: E1210 15:33:55.870511 4755 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-57crm_openshift-operators_aed5adbf-d512-4557-bb9a-474301586611_0(c0b61c1659ba70ad1a05d9588de95ef1d9fbef85ceee867b8ef2d3e01fc5f137): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 10 15:33:55 crc kubenswrapper[4755]: E1210 15:33:55.870573 4755 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-57crm_openshift-operators_aed5adbf-d512-4557-bb9a-474301586611_0(c0b61c1659ba70ad1a05d9588de95ef1d9fbef85ceee867b8ef2d3e01fc5f137): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-d8bb48f5d-57crm" Dec 10 15:33:55 crc kubenswrapper[4755]: E1210 15:33:55.870596 4755 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-57crm_openshift-operators_aed5adbf-d512-4557-bb9a-474301586611_0(c0b61c1659ba70ad1a05d9588de95ef1d9fbef85ceee867b8ef2d3e01fc5f137): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-d8bb48f5d-57crm" Dec 10 15:33:55 crc kubenswrapper[4755]: E1210 15:33:55.870639 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-d8bb48f5d-57crm_openshift-operators(aed5adbf-d512-4557-bb9a-474301586611)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-d8bb48f5d-57crm_openshift-operators(aed5adbf-d512-4557-bb9a-474301586611)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-57crm_openshift-operators_aed5adbf-d512-4557-bb9a-474301586611_0(c0b61c1659ba70ad1a05d9588de95ef1d9fbef85ceee867b8ef2d3e01fc5f137): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/observability-operator-d8bb48f5d-57crm" podUID="aed5adbf-d512-4557-bb9a-474301586611" Dec 10 15:33:55 crc kubenswrapper[4755]: I1210 15:33:55.884424 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqkjh\" (UniqueName: \"kubernetes.io/projected/5dad8e56-f3d6-4d95-bd98-96b2f7ae1d6e-kube-api-access-nqkjh\") pod \"perses-operator-5446b9c989-rwkrm\" (UID: \"5dad8e56-f3d6-4d95-bd98-96b2f7ae1d6e\") " pod="openshift-operators/perses-operator-5446b9c989-rwkrm" Dec 10 15:33:55 crc kubenswrapper[4755]: I1210 15:33:55.884613 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/5dad8e56-f3d6-4d95-bd98-96b2f7ae1d6e-openshift-service-ca\") pod \"perses-operator-5446b9c989-rwkrm\" (UID: \"5dad8e56-f3d6-4d95-bd98-96b2f7ae1d6e\") " pod="openshift-operators/perses-operator-5446b9c989-rwkrm" Dec 10 15:33:55 crc kubenswrapper[4755]: I1210 15:33:55.885927 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/5dad8e56-f3d6-4d95-bd98-96b2f7ae1d6e-openshift-service-ca\") pod \"perses-operator-5446b9c989-rwkrm\" (UID: \"5dad8e56-f3d6-4d95-bd98-96b2f7ae1d6e\") " pod="openshift-operators/perses-operator-5446b9c989-rwkrm" Dec 10 15:33:55 crc kubenswrapper[4755]: I1210 15:33:55.905693 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqkjh\" (UniqueName: \"kubernetes.io/projected/5dad8e56-f3d6-4d95-bd98-96b2f7ae1d6e-kube-api-access-nqkjh\") pod \"perses-operator-5446b9c989-rwkrm\" (UID: \"5dad8e56-f3d6-4d95-bd98-96b2f7ae1d6e\") " pod="openshift-operators/perses-operator-5446b9c989-rwkrm" Dec 10 15:33:55 crc kubenswrapper[4755]: I1210 15:33:55.980484 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-rwkrm" Dec 10 15:33:56 crc kubenswrapper[4755]: E1210 15:33:56.021097 4755 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-rwkrm_openshift-operators_5dad8e56-f3d6-4d95-bd98-96b2f7ae1d6e_0(6ca7f6dc404b92d70a4116efa800c4a43496c78ac19273bb929c454fcee06e3d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 10 15:33:56 crc kubenswrapper[4755]: E1210 15:33:56.021210 4755 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-rwkrm_openshift-operators_5dad8e56-f3d6-4d95-bd98-96b2f7ae1d6e_0(6ca7f6dc404b92d70a4116efa800c4a43496c78ac19273bb929c454fcee06e3d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5446b9c989-rwkrm" Dec 10 15:33:56 crc kubenswrapper[4755]: E1210 15:33:56.021238 4755 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-rwkrm_openshift-operators_5dad8e56-f3d6-4d95-bd98-96b2f7ae1d6e_0(6ca7f6dc404b92d70a4116efa800c4a43496c78ac19273bb929c454fcee06e3d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5446b9c989-rwkrm" Dec 10 15:33:56 crc kubenswrapper[4755]: E1210 15:33:56.021297 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5446b9c989-rwkrm_openshift-operators(5dad8e56-f3d6-4d95-bd98-96b2f7ae1d6e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5446b9c989-rwkrm_openshift-operators(5dad8e56-f3d6-4d95-bd98-96b2f7ae1d6e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-rwkrm_openshift-operators_5dad8e56-f3d6-4d95-bd98-96b2f7ae1d6e_0(6ca7f6dc404b92d70a4116efa800c4a43496c78ac19273bb929c454fcee06e3d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5446b9c989-rwkrm" podUID="5dad8e56-f3d6-4d95-bd98-96b2f7ae1d6e" Dec 10 15:33:56 crc kubenswrapper[4755]: I1210 15:33:56.225353 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-f66fc9f65-5j7mt" Dec 10 15:33:56 crc kubenswrapper[4755]: I1210 15:33:56.225375 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-57crm" Dec 10 15:33:56 crc kubenswrapper[4755]: I1210 15:33:56.225451 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-f66fc9f65-4kjbh" Dec 10 15:33:56 crc kubenswrapper[4755]: I1210 15:33:56.225543 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-2tlgr" Dec 10 15:33:56 crc kubenswrapper[4755]: I1210 15:33:56.225681 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-rwkrm" Dec 10 15:33:56 crc kubenswrapper[4755]: I1210 15:33:56.226005 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-f66fc9f65-5j7mt" Dec 10 15:33:56 crc kubenswrapper[4755]: I1210 15:33:56.226124 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-57crm" Dec 10 15:33:56 crc kubenswrapper[4755]: I1210 15:33:56.226580 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-rwkrm" Dec 10 15:33:56 crc kubenswrapper[4755]: I1210 15:33:56.226776 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-f66fc9f65-4kjbh" Dec 10 15:33:56 crc kubenswrapper[4755]: I1210 15:33:56.235686 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-2tlgr" Dec 10 15:33:56 crc kubenswrapper[4755]: E1210 15:33:56.302713 4755 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-f66fc9f65-5j7mt_openshift-operators_2ace45ac-e8fa-4b58-b40c-c32fbc5a1c6e_0(4cb0c4e69a95023fb5a284e3652fd903c761715292fca4fe4cc839e3a7357787): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Dec 10 15:33:56 crc kubenswrapper[4755]: E1210 15:33:56.302788 4755 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-f66fc9f65-5j7mt_openshift-operators_2ace45ac-e8fa-4b58-b40c-c32fbc5a1c6e_0(4cb0c4e69a95023fb5a284e3652fd903c761715292fca4fe4cc839e3a7357787): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-f66fc9f65-5j7mt" Dec 10 15:33:56 crc kubenswrapper[4755]: E1210 15:33:56.302816 4755 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-f66fc9f65-5j7mt_openshift-operators_2ace45ac-e8fa-4b58-b40c-c32fbc5a1c6e_0(4cb0c4e69a95023fb5a284e3652fd903c761715292fca4fe4cc839e3a7357787): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-f66fc9f65-5j7mt" Dec 10 15:33:56 crc kubenswrapper[4755]: E1210 15:33:56.302868 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-f66fc9f65-5j7mt_openshift-operators(2ace45ac-e8fa-4b58-b40c-c32fbc5a1c6e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-f66fc9f65-5j7mt_openshift-operators(2ace45ac-e8fa-4b58-b40c-c32fbc5a1c6e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-f66fc9f65-5j7mt_openshift-operators_2ace45ac-e8fa-4b58-b40c-c32fbc5a1c6e_0(4cb0c4e69a95023fb5a284e3652fd903c761715292fca4fe4cc839e3a7357787): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-f66fc9f65-5j7mt" podUID="2ace45ac-e8fa-4b58-b40c-c32fbc5a1c6e" Dec 10 15:33:56 crc kubenswrapper[4755]: E1210 15:33:56.313253 4755 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-rwkrm_openshift-operators_5dad8e56-f3d6-4d95-bd98-96b2f7ae1d6e_0(0e9ac7684516f2c10b36ac31f3ff3573b318e4193a2633d74159a06e69599e98): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 10 15:33:56 crc kubenswrapper[4755]: E1210 15:33:56.313316 4755 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-rwkrm_openshift-operators_5dad8e56-f3d6-4d95-bd98-96b2f7ae1d6e_0(0e9ac7684516f2c10b36ac31f3ff3573b318e4193a2633d74159a06e69599e98): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5446b9c989-rwkrm" Dec 10 15:33:56 crc kubenswrapper[4755]: E1210 15:33:56.313339 4755 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-rwkrm_openshift-operators_5dad8e56-f3d6-4d95-bd98-96b2f7ae1d6e_0(0e9ac7684516f2c10b36ac31f3ff3573b318e4193a2633d74159a06e69599e98): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5446b9c989-rwkrm" Dec 10 15:33:56 crc kubenswrapper[4755]: E1210 15:33:56.313380 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5446b9c989-rwkrm_openshift-operators(5dad8e56-f3d6-4d95-bd98-96b2f7ae1d6e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5446b9c989-rwkrm_openshift-operators(5dad8e56-f3d6-4d95-bd98-96b2f7ae1d6e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-rwkrm_openshift-operators_5dad8e56-f3d6-4d95-bd98-96b2f7ae1d6e_0(0e9ac7684516f2c10b36ac31f3ff3573b318e4193a2633d74159a06e69599e98): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5446b9c989-rwkrm" podUID="5dad8e56-f3d6-4d95-bd98-96b2f7ae1d6e" Dec 10 15:33:56 crc kubenswrapper[4755]: E1210 15:33:56.318339 4755 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-57crm_openshift-operators_aed5adbf-d512-4557-bb9a-474301586611_0(e580ee2e62f56448cce574757fe1ad42f37cbd5f31f23e53dda1c075d5680f0e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 10 15:33:56 crc kubenswrapper[4755]: E1210 15:33:56.318390 4755 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-57crm_openshift-operators_aed5adbf-d512-4557-bb9a-474301586611_0(e580ee2e62f56448cce574757fe1ad42f37cbd5f31f23e53dda1c075d5680f0e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-d8bb48f5d-57crm" Dec 10 15:33:56 crc kubenswrapper[4755]: E1210 15:33:56.318411 4755 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-57crm_openshift-operators_aed5adbf-d512-4557-bb9a-474301586611_0(e580ee2e62f56448cce574757fe1ad42f37cbd5f31f23e53dda1c075d5680f0e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-d8bb48f5d-57crm" Dec 10 15:33:56 crc kubenswrapper[4755]: E1210 15:33:56.318448 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-d8bb48f5d-57crm_openshift-operators(aed5adbf-d512-4557-bb9a-474301586611)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-d8bb48f5d-57crm_openshift-operators(aed5adbf-d512-4557-bb9a-474301586611)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-57crm_openshift-operators_aed5adbf-d512-4557-bb9a-474301586611_0(e580ee2e62f56448cce574757fe1ad42f37cbd5f31f23e53dda1c075d5680f0e): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/observability-operator-d8bb48f5d-57crm" podUID="aed5adbf-d512-4557-bb9a-474301586611" Dec 10 15:33:56 crc kubenswrapper[4755]: E1210 15:33:56.322930 4755 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-2tlgr_openshift-operators_a03bc2c9-3ec6-4ca5-9181-88e9cb9fe2be_0(bc66b6dccc27ff3c78f1d89e1ad8e58c62a023c2338998a0eba5f3df1f521800): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 10 15:33:56 crc kubenswrapper[4755]: E1210 15:33:56.322989 4755 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-2tlgr_openshift-operators_a03bc2c9-3ec6-4ca5-9181-88e9cb9fe2be_0(bc66b6dccc27ff3c78f1d89e1ad8e58c62a023c2338998a0eba5f3df1f521800): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-2tlgr" Dec 10 15:33:56 crc kubenswrapper[4755]: E1210 15:33:56.323013 4755 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-2tlgr_openshift-operators_a03bc2c9-3ec6-4ca5-9181-88e9cb9fe2be_0(bc66b6dccc27ff3c78f1d89e1ad8e58c62a023c2338998a0eba5f3df1f521800): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-2tlgr" Dec 10 15:33:56 crc kubenswrapper[4755]: E1210 15:33:56.323050 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-668cf9dfbb-2tlgr_openshift-operators(a03bc2c9-3ec6-4ca5-9181-88e9cb9fe2be)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-668cf9dfbb-2tlgr_openshift-operators(a03bc2c9-3ec6-4ca5-9181-88e9cb9fe2be)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-2tlgr_openshift-operators_a03bc2c9-3ec6-4ca5-9181-88e9cb9fe2be_0(bc66b6dccc27ff3c78f1d89e1ad8e58c62a023c2338998a0eba5f3df1f521800): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-2tlgr" podUID="a03bc2c9-3ec6-4ca5-9181-88e9cb9fe2be" Dec 10 15:33:56 crc kubenswrapper[4755]: E1210 15:33:56.328983 4755 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-f66fc9f65-4kjbh_openshift-operators_95d667d8-b323-4d59-84e6-ffaa553526c7_0(42f0d96f5e332ccc62e4f34999df41ec66dc9719d9be4fe4493c470124952936): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 10 15:33:56 crc kubenswrapper[4755]: E1210 15:33:56.329037 4755 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-f66fc9f65-4kjbh_openshift-operators_95d667d8-b323-4d59-84e6-ffaa553526c7_0(42f0d96f5e332ccc62e4f34999df41ec66dc9719d9be4fe4493c470124952936): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-f66fc9f65-4kjbh" Dec 10 15:33:56 crc kubenswrapper[4755]: E1210 15:33:56.329061 4755 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-f66fc9f65-4kjbh_openshift-operators_95d667d8-b323-4d59-84e6-ffaa553526c7_0(42f0d96f5e332ccc62e4f34999df41ec66dc9719d9be4fe4493c470124952936): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-f66fc9f65-4kjbh" Dec 10 15:33:56 crc kubenswrapper[4755]: E1210 15:33:56.329102 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-f66fc9f65-4kjbh_openshift-operators(95d667d8-b323-4d59-84e6-ffaa553526c7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-f66fc9f65-4kjbh_openshift-operators(95d667d8-b323-4d59-84e6-ffaa553526c7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-f66fc9f65-4kjbh_openshift-operators_95d667d8-b323-4d59-84e6-ffaa553526c7_0(42f0d96f5e332ccc62e4f34999df41ec66dc9719d9be4fe4493c470124952936): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-f66fc9f65-4kjbh" podUID="95d667d8-b323-4d59-84e6-ffaa553526c7" Dec 10 15:34:02 crc kubenswrapper[4755]: I1210 15:34:02.756784 4755 scope.go:117] "RemoveContainer" containerID="ddcd6ca2f982a307a418e96d428250bc2a8ea077d211a8856f484cd779d4fa36" Dec 10 15:34:02 crc kubenswrapper[4755]: E1210 15:34:02.757324 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-zl2tx_openshift-multus(796da6d5-6ccd-4786-a03e-9a8e47a55031)\"" pod="openshift-multus/multus-zl2tx" podUID="796da6d5-6ccd-4786-a03e-9a8e47a55031" Dec 10 15:34:07 crc kubenswrapper[4755]: I1210 15:34:07.757342 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-57crm" Dec 10 15:34:07 crc kubenswrapper[4755]: I1210 15:34:07.758230 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-57crm" Dec 10 15:34:07 crc kubenswrapper[4755]: E1210 15:34:07.786570 4755 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-57crm_openshift-operators_aed5adbf-d512-4557-bb9a-474301586611_0(d8b4f7fcb3c431a93d914134c4ce02e573ab6d9f98711964fd410f98e5e75075): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 10 15:34:07 crc kubenswrapper[4755]: E1210 15:34:07.786641 4755 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-57crm_openshift-operators_aed5adbf-d512-4557-bb9a-474301586611_0(d8b4f7fcb3c431a93d914134c4ce02e573ab6d9f98711964fd410f98e5e75075): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-d8bb48f5d-57crm" Dec 10 15:34:07 crc kubenswrapper[4755]: E1210 15:34:07.786669 4755 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-57crm_openshift-operators_aed5adbf-d512-4557-bb9a-474301586611_0(d8b4f7fcb3c431a93d914134c4ce02e573ab6d9f98711964fd410f98e5e75075): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-d8bb48f5d-57crm" Dec 10 15:34:07 crc kubenswrapper[4755]: E1210 15:34:07.786724 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-d8bb48f5d-57crm_openshift-operators(aed5adbf-d512-4557-bb9a-474301586611)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-d8bb48f5d-57crm_openshift-operators(aed5adbf-d512-4557-bb9a-474301586611)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-57crm_openshift-operators_aed5adbf-d512-4557-bb9a-474301586611_0(d8b4f7fcb3c431a93d914134c4ce02e573ab6d9f98711964fd410f98e5e75075): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-d8bb48f5d-57crm" podUID="aed5adbf-d512-4557-bb9a-474301586611" Dec 10 15:34:09 crc kubenswrapper[4755]: I1210 15:34:09.757654 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-f66fc9f65-5j7mt" Dec 10 15:34:09 crc kubenswrapper[4755]: I1210 15:34:09.758094 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-f66fc9f65-5j7mt" Dec 10 15:34:09 crc kubenswrapper[4755]: E1210 15:34:09.780848 4755 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-f66fc9f65-5j7mt_openshift-operators_2ace45ac-e8fa-4b58-b40c-c32fbc5a1c6e_0(19f3dbd77cef39a78d5331ea11340ef026952ca7456068ae987b743d8a4e14e9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 10 15:34:09 crc kubenswrapper[4755]: E1210 15:34:09.780979 4755 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-f66fc9f65-5j7mt_openshift-operators_2ace45ac-e8fa-4b58-b40c-c32fbc5a1c6e_0(19f3dbd77cef39a78d5331ea11340ef026952ca7456068ae987b743d8a4e14e9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-f66fc9f65-5j7mt" Dec 10 15:34:09 crc kubenswrapper[4755]: E1210 15:34:09.781052 4755 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-f66fc9f65-5j7mt_openshift-operators_2ace45ac-e8fa-4b58-b40c-c32fbc5a1c6e_0(19f3dbd77cef39a78d5331ea11340ef026952ca7456068ae987b743d8a4e14e9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-f66fc9f65-5j7mt" Dec 10 15:34:09 crc kubenswrapper[4755]: E1210 15:34:09.781172 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-f66fc9f65-5j7mt_openshift-operators(2ace45ac-e8fa-4b58-b40c-c32fbc5a1c6e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-f66fc9f65-5j7mt_openshift-operators(2ace45ac-e8fa-4b58-b40c-c32fbc5a1c6e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-f66fc9f65-5j7mt_openshift-operators_2ace45ac-e8fa-4b58-b40c-c32fbc5a1c6e_0(19f3dbd77cef39a78d5331ea11340ef026952ca7456068ae987b743d8a4e14e9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-f66fc9f65-5j7mt" podUID="2ace45ac-e8fa-4b58-b40c-c32fbc5a1c6e" Dec 10 15:34:10 crc kubenswrapper[4755]: I1210 15:34:10.756827 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-rwkrm" Dec 10 15:34:10 crc kubenswrapper[4755]: I1210 15:34:10.757488 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-rwkrm" Dec 10 15:34:10 crc kubenswrapper[4755]: I1210 15:34:10.756874 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-2tlgr" Dec 10 15:34:10 crc kubenswrapper[4755]: I1210 15:34:10.756938 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-f66fc9f65-4kjbh" Dec 10 15:34:10 crc kubenswrapper[4755]: I1210 15:34:10.759182 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-f66fc9f65-4kjbh" Dec 10 15:34:10 crc kubenswrapper[4755]: I1210 15:34:10.760738 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-2tlgr" Dec 10 15:34:10 crc kubenswrapper[4755]: E1210 15:34:10.817346 4755 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-rwkrm_openshift-operators_5dad8e56-f3d6-4d95-bd98-96b2f7ae1d6e_0(a8908186cfbe63902b17cca562ab2486ad44bb5df5db7d2c0c79fafa0c7de4bf): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 10 15:34:10 crc kubenswrapper[4755]: E1210 15:34:10.817533 4755 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-rwkrm_openshift-operators_5dad8e56-f3d6-4d95-bd98-96b2f7ae1d6e_0(a8908186cfbe63902b17cca562ab2486ad44bb5df5db7d2c0c79fafa0c7de4bf): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5446b9c989-rwkrm" Dec 10 15:34:10 crc kubenswrapper[4755]: E1210 15:34:10.817648 4755 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-rwkrm_openshift-operators_5dad8e56-f3d6-4d95-bd98-96b2f7ae1d6e_0(a8908186cfbe63902b17cca562ab2486ad44bb5df5db7d2c0c79fafa0c7de4bf): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5446b9c989-rwkrm" Dec 10 15:34:10 crc kubenswrapper[4755]: E1210 15:34:10.817793 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5446b9c989-rwkrm_openshift-operators(5dad8e56-f3d6-4d95-bd98-96b2f7ae1d6e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5446b9c989-rwkrm_openshift-operators(5dad8e56-f3d6-4d95-bd98-96b2f7ae1d6e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-rwkrm_openshift-operators_5dad8e56-f3d6-4d95-bd98-96b2f7ae1d6e_0(a8908186cfbe63902b17cca562ab2486ad44bb5df5db7d2c0c79fafa0c7de4bf): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5446b9c989-rwkrm" podUID="5dad8e56-f3d6-4d95-bd98-96b2f7ae1d6e" Dec 10 15:34:10 crc kubenswrapper[4755]: E1210 15:34:10.824526 4755 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-2tlgr_openshift-operators_a03bc2c9-3ec6-4ca5-9181-88e9cb9fe2be_0(abf26bc27f14057fdddc1e1b739efa5328762826a221ba944ad829ec7391a1fa): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 10 15:34:10 crc kubenswrapper[4755]: E1210 15:34:10.824626 4755 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-2tlgr_openshift-operators_a03bc2c9-3ec6-4ca5-9181-88e9cb9fe2be_0(abf26bc27f14057fdddc1e1b739efa5328762826a221ba944ad829ec7391a1fa): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-2tlgr" Dec 10 15:34:10 crc kubenswrapper[4755]: E1210 15:34:10.824695 4755 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-2tlgr_openshift-operators_a03bc2c9-3ec6-4ca5-9181-88e9cb9fe2be_0(abf26bc27f14057fdddc1e1b739efa5328762826a221ba944ad829ec7391a1fa): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-2tlgr" Dec 10 15:34:10 crc kubenswrapper[4755]: E1210 15:34:10.824799 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-668cf9dfbb-2tlgr_openshift-operators(a03bc2c9-3ec6-4ca5-9181-88e9cb9fe2be)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-668cf9dfbb-2tlgr_openshift-operators(a03bc2c9-3ec6-4ca5-9181-88e9cb9fe2be)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-2tlgr_openshift-operators_a03bc2c9-3ec6-4ca5-9181-88e9cb9fe2be_0(abf26bc27f14057fdddc1e1b739efa5328762826a221ba944ad829ec7391a1fa): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-2tlgr" podUID="a03bc2c9-3ec6-4ca5-9181-88e9cb9fe2be" Dec 10 15:34:10 crc kubenswrapper[4755]: E1210 15:34:10.828711 4755 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-f66fc9f65-4kjbh_openshift-operators_95d667d8-b323-4d59-84e6-ffaa553526c7_0(78db897799e67fac4883649a55cdf4c0833b07d7f5222f5730eb05d0a41980b2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 10 15:34:10 crc kubenswrapper[4755]: E1210 15:34:10.828789 4755 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-f66fc9f65-4kjbh_openshift-operators_95d667d8-b323-4d59-84e6-ffaa553526c7_0(78db897799e67fac4883649a55cdf4c0833b07d7f5222f5730eb05d0a41980b2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-f66fc9f65-4kjbh" Dec 10 15:34:10 crc kubenswrapper[4755]: E1210 15:34:10.828810 4755 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-f66fc9f65-4kjbh_openshift-operators_95d667d8-b323-4d59-84e6-ffaa553526c7_0(78db897799e67fac4883649a55cdf4c0833b07d7f5222f5730eb05d0a41980b2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-f66fc9f65-4kjbh" Dec 10 15:34:10 crc kubenswrapper[4755]: E1210 15:34:10.828852 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-f66fc9f65-4kjbh_openshift-operators(95d667d8-b323-4d59-84e6-ffaa553526c7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-f66fc9f65-4kjbh_openshift-operators(95d667d8-b323-4d59-84e6-ffaa553526c7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-f66fc9f65-4kjbh_openshift-operators_95d667d8-b323-4d59-84e6-ffaa553526c7_0(78db897799e67fac4883649a55cdf4c0833b07d7f5222f5730eb05d0a41980b2): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-f66fc9f65-4kjbh" podUID="95d667d8-b323-4d59-84e6-ffaa553526c7" Dec 10 15:34:14 crc kubenswrapper[4755]: I1210 15:34:14.757398 4755 scope.go:117] "RemoveContainer" containerID="ddcd6ca2f982a307a418e96d428250bc2a8ea077d211a8856f484cd779d4fa36" Dec 10 15:34:15 crc kubenswrapper[4755]: I1210 15:34:15.322709 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zl2tx_796da6d5-6ccd-4786-a03e-9a8e47a55031/kube-multus/2.log" Dec 10 15:34:15 crc kubenswrapper[4755]: I1210 15:34:15.323008 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zl2tx" event={"ID":"796da6d5-6ccd-4786-a03e-9a8e47a55031","Type":"ContainerStarted","Data":"7a98a61b882311f6384bdc73b388621720b0c3396ec681f1fc8dd284a22ff0a9"} Dec 10 15:34:18 crc kubenswrapper[4755]: I1210 15:34:18.800500 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9q4ng" Dec 10 15:34:20 crc kubenswrapper[4755]: I1210 15:34:20.761814 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-f66fc9f65-5j7mt" Dec 10 15:34:20 crc kubenswrapper[4755]: I1210 15:34:20.762616 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-f66fc9f65-5j7mt" Dec 10 15:34:20 crc kubenswrapper[4755]: I1210 15:34:20.761960 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-57crm" Dec 10 15:34:20 crc kubenswrapper[4755]: I1210 15:34:20.763214 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-57crm" Dec 10 15:34:21 crc kubenswrapper[4755]: I1210 15:34:21.010875 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-57crm"] Dec 10 15:34:21 crc kubenswrapper[4755]: W1210 15:34:21.019908 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaed5adbf_d512_4557_bb9a_474301586611.slice/crio-f9c12b34de22a600c4a897ea6c50383abcb67ca7a7a9eb4976ab6522cd1f8e8e WatchSource:0}: Error finding container f9c12b34de22a600c4a897ea6c50383abcb67ca7a7a9eb4976ab6522cd1f8e8e: Status 404 returned error can't find the container with id f9c12b34de22a600c4a897ea6c50383abcb67ca7a7a9eb4976ab6522cd1f8e8e Dec 10 15:34:21 crc kubenswrapper[4755]: I1210 15:34:21.245389 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-f66fc9f65-5j7mt"] Dec 10 15:34:21 crc kubenswrapper[4755]: I1210 15:34:21.366863 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-f66fc9f65-5j7mt" event={"ID":"2ace45ac-e8fa-4b58-b40c-c32fbc5a1c6e","Type":"ContainerStarted","Data":"dfc13bd3a5d843787e49646b127a2fcc73816ed215349717e94f3a231a81cb58"} Dec 10 15:34:21 crc kubenswrapper[4755]: I1210 15:34:21.367661 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-57crm" event={"ID":"aed5adbf-d512-4557-bb9a-474301586611","Type":"ContainerStarted","Data":"f9c12b34de22a600c4a897ea6c50383abcb67ca7a7a9eb4976ab6522cd1f8e8e"} Dec 10 15:34:22 crc kubenswrapper[4755]: I1210 15:34:22.757224 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-f66fc9f65-4kjbh" Dec 10 15:34:22 crc kubenswrapper[4755]: I1210 15:34:22.757861 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-f66fc9f65-4kjbh" Dec 10 15:34:22 crc kubenswrapper[4755]: I1210 15:34:22.985023 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-f66fc9f65-4kjbh"] Dec 10 15:34:23 crc kubenswrapper[4755]: I1210 15:34:23.379884 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-f66fc9f65-4kjbh" event={"ID":"95d667d8-b323-4d59-84e6-ffaa553526c7","Type":"ContainerStarted","Data":"ab92d30c3bfd50829b01fe965bbb012d495a7a204c8e73346ad36f69245642b0"} Dec 10 15:34:23 crc kubenswrapper[4755]: I1210 15:34:23.756675 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-2tlgr" Dec 10 15:34:23 crc kubenswrapper[4755]: I1210 15:34:23.762683 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-2tlgr" Dec 10 15:34:23 crc kubenswrapper[4755]: I1210 15:34:23.979709 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-2tlgr"] Dec 10 15:34:23 crc kubenswrapper[4755]: W1210 15:34:23.988802 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda03bc2c9_3ec6_4ca5_9181_88e9cb9fe2be.slice/crio-bdaef90cae34638b6c2ab209c1ea5eeca0c25886d627152cd1cefefb1dd613f2 WatchSource:0}: Error finding container bdaef90cae34638b6c2ab209c1ea5eeca0c25886d627152cd1cefefb1dd613f2: Status 404 returned error can't find the container with id bdaef90cae34638b6c2ab209c1ea5eeca0c25886d627152cd1cefefb1dd613f2 Dec 10 15:34:24 crc kubenswrapper[4755]: I1210 15:34:24.387203 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-2tlgr" event={"ID":"a03bc2c9-3ec6-4ca5-9181-88e9cb9fe2be","Type":"ContainerStarted","Data":"bdaef90cae34638b6c2ab209c1ea5eeca0c25886d627152cd1cefefb1dd613f2"} Dec 10 15:34:25 crc kubenswrapper[4755]: I1210 15:34:25.757456 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-rwkrm" Dec 10 15:34:25 crc kubenswrapper[4755]: I1210 15:34:25.758226 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-rwkrm" Dec 10 15:34:30 crc kubenswrapper[4755]: I1210 15:34:30.422409 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-2tlgr" event={"ID":"a03bc2c9-3ec6-4ca5-9181-88e9cb9fe2be","Type":"ContainerStarted","Data":"5de388338fec24e23b3ac40582d75f1e34d2dcf9f03bef4f453e8b974411f4bb"} Dec 10 15:34:30 crc kubenswrapper[4755]: I1210 15:34:30.424428 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-f66fc9f65-4kjbh" event={"ID":"95d667d8-b323-4d59-84e6-ffaa553526c7","Type":"ContainerStarted","Data":"c0545af4295bcbc5522841356a9e4451e55634f3879a7ff6667808cdf1d63490"} Dec 10 15:34:30 crc kubenswrapper[4755]: I1210 15:34:30.426275 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-f66fc9f65-5j7mt" event={"ID":"2ace45ac-e8fa-4b58-b40c-c32fbc5a1c6e","Type":"ContainerStarted","Data":"5f93113379132d8c758bb7b306f737424b486b19c7d3ca9203f4f08e674561c6"} Dec 10 15:34:30 crc kubenswrapper[4755]: I1210 15:34:30.431418 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-57crm" event={"ID":"aed5adbf-d512-4557-bb9a-474301586611","Type":"ContainerStarted","Data":"4e47aa64f55e1bd128609dd3a34a3a5e8698f2649c55edfc95e58976084ee1f9"} Dec 10 15:34:30 crc kubenswrapper[4755]: I1210 15:34:30.431698 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-d8bb48f5d-57crm" Dec 10 15:34:30 crc kubenswrapper[4755]: I1210 15:34:30.432944 4755 patch_prober.go:28] interesting pod/observability-operator-d8bb48f5d-57crm container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.24:8081/healthz\": dial tcp 10.217.0.24:8081: connect: connection refused" start-of-body= Dec 10 15:34:30 crc kubenswrapper[4755]: I1210 15:34:30.432985 4755 prober.go:107] 
"Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-d8bb48f5d-57crm" podUID="aed5adbf-d512-4557-bb9a-474301586611" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.24:8081/healthz\": dial tcp 10.217.0.24:8081: connect: connection refused" Dec 10 15:34:30 crc kubenswrapper[4755]: I1210 15:34:30.441619 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-2tlgr" podStartSLOduration=29.313975484 podStartE2EDuration="35.441600762s" podCreationTimestamp="2025-12-10 15:33:55 +0000 UTC" firstStartedPulling="2025-12-10 15:34:23.993830979 +0000 UTC m=+660.594714611" lastFinishedPulling="2025-12-10 15:34:30.121456247 +0000 UTC m=+666.722339889" observedRunningTime="2025-12-10 15:34:30.436879437 +0000 UTC m=+667.037763069" watchObservedRunningTime="2025-12-10 15:34:30.441600762 +0000 UTC m=+667.042484394" Dec 10 15:34:30 crc kubenswrapper[4755]: I1210 15:34:30.455200 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-f66fc9f65-5j7mt" podStartSLOduration=26.58622884 podStartE2EDuration="35.45518104s" podCreationTimestamp="2025-12-10 15:33:55 +0000 UTC" firstStartedPulling="2025-12-10 15:34:21.257489449 +0000 UTC m=+657.858373091" lastFinishedPulling="2025-12-10 15:34:30.126441659 +0000 UTC m=+666.727325291" observedRunningTime="2025-12-10 15:34:30.451807443 +0000 UTC m=+667.052691085" watchObservedRunningTime="2025-12-10 15:34:30.45518104 +0000 UTC m=+667.056064672" Dec 10 15:34:30 crc kubenswrapper[4755]: I1210 15:34:30.478586 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-f66fc9f65-4kjbh" podStartSLOduration=28.370754551 podStartE2EDuration="35.478569318s" podCreationTimestamp="2025-12-10 15:33:55 +0000 UTC" firstStartedPulling="2025-12-10 15:34:22.991876569 +0000 UTC m=+659.592760201" lastFinishedPulling="2025-12-10 15:34:30.099691316 +0000 UTC m=+666.700574968" observedRunningTime="2025-12-10 15:34:30.476881989 +0000 UTC m=+667.077765621" watchObservedRunningTime="2025-12-10 15:34:30.478569318 +0000 UTC m=+667.079452940" Dec 10 15:34:30 crc kubenswrapper[4755]: I1210 15:34:30.511789 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-d8bb48f5d-57crm" podStartSLOduration=26.412179038 podStartE2EDuration="35.511772806s" podCreationTimestamp="2025-12-10 15:33:55 +0000 UTC" firstStartedPulling="2025-12-10 15:34:21.02191333 +0000 UTC m=+657.622796962" lastFinishedPulling="2025-12-10 15:34:30.121507098 +0000 UTC m=+666.722390730" observedRunningTime="2025-12-10 15:34:30.51053903 +0000 UTC m=+667.111422662" watchObservedRunningTime="2025-12-10 15:34:30.511772806 +0000 UTC m=+667.112656428" Dec 10 15:34:30 crc kubenswrapper[4755]: I1210 15:34:30.540091 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-rwkrm"] Dec 10 15:34:30 crc kubenswrapper[4755]: W1210 15:34:30.543608 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5dad8e56_f3d6_4d95_bd98_96b2f7ae1d6e.slice/crio-7522c88917627985c96ec1f6cf3af8f99dedf67a6255a44eed6df85edb8ff12f WatchSource:0}: Error finding container 7522c88917627985c96ec1f6cf3af8f99dedf67a6255a44eed6df85edb8ff12f: Status 404 returned error can't find the container 
with id 7522c88917627985c96ec1f6cf3af8f99dedf67a6255a44eed6df85edb8ff12f Dec 10 15:34:31 crc kubenswrapper[4755]: I1210 15:34:31.437388 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-rwkrm" event={"ID":"5dad8e56-f3d6-4d95-bd98-96b2f7ae1d6e","Type":"ContainerStarted","Data":"7522c88917627985c96ec1f6cf3af8f99dedf67a6255a44eed6df85edb8ff12f"} Dec 10 15:34:31 crc kubenswrapper[4755]: I1210 15:34:31.439307 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-d8bb48f5d-57crm" Dec 10 15:34:33 crc kubenswrapper[4755]: I1210 15:34:33.449694 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-rwkrm" event={"ID":"5dad8e56-f3d6-4d95-bd98-96b2f7ae1d6e","Type":"ContainerStarted","Data":"0ee502f677a94f8a42bf252b3c820bb488741da52fc31ea82f8b7e051db77caa"} Dec 10 15:34:33 crc kubenswrapper[4755]: I1210 15:34:33.450243 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5446b9c989-rwkrm" Dec 10 15:34:33 crc kubenswrapper[4755]: I1210 15:34:33.477329 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5446b9c989-rwkrm" podStartSLOduration=36.157365017 podStartE2EDuration="38.477297672s" podCreationTimestamp="2025-12-10 15:33:55 +0000 UTC" firstStartedPulling="2025-12-10 15:34:30.546169369 +0000 UTC m=+667.147053001" lastFinishedPulling="2025-12-10 15:34:32.866102024 +0000 UTC m=+669.466985656" observedRunningTime="2025-12-10 15:34:33.476265853 +0000 UTC m=+670.077149505" watchObservedRunningTime="2025-12-10 15:34:33.477297672 +0000 UTC m=+670.078181344" Dec 10 15:34:39 crc kubenswrapper[4755]: I1210 15:34:39.950296 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-ktrqg"] Dec 10 15:34:39 crc kubenswrapper[4755]: I1210 15:34:39.951891 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-ktrqg" Dec 10 15:34:39 crc kubenswrapper[4755]: I1210 15:34:39.954790 4755 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-dnw44" Dec 10 15:34:39 crc kubenswrapper[4755]: I1210 15:34:39.955860 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Dec 10 15:34:39 crc kubenswrapper[4755]: I1210 15:34:39.956330 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Dec 10 15:34:39 crc kubenswrapper[4755]: I1210 15:34:39.968874 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-mpd5f"] Dec 10 15:34:39 crc kubenswrapper[4755]: I1210 15:34:39.969790 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-mpd5f" Dec 10 15:34:39 crc kubenswrapper[4755]: I1210 15:34:39.973066 4755 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-v8qls" Dec 10 15:34:39 crc kubenswrapper[4755]: I1210 15:34:39.982481 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-mpd5f"] Dec 10 15:34:39 crc kubenswrapper[4755]: I1210 15:34:39.998183 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-58jbs"] Dec 10 15:34:39 crc kubenswrapper[4755]: I1210 15:34:39.999132 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-58jbs" Dec 10 15:34:40 crc kubenswrapper[4755]: I1210 15:34:40.003492 4755 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-ngn8j" Dec 10 15:34:40 crc kubenswrapper[4755]: I1210 15:34:40.005422 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-58jbs"] Dec 10 15:34:40 crc kubenswrapper[4755]: I1210 15:34:40.016346 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-ktrqg"] Dec 10 15:34:40 crc kubenswrapper[4755]: I1210 15:34:40.091092 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vql6w\" (UniqueName: \"kubernetes.io/projected/a03db35b-ea91-49fd-8658-7af8b10d927e-kube-api-access-vql6w\") pod \"cert-manager-5b446d88c5-mpd5f\" (UID: \"a03db35b-ea91-49fd-8658-7af8b10d927e\") " pod="cert-manager/cert-manager-5b446d88c5-mpd5f" Dec 10 15:34:40 crc kubenswrapper[4755]: I1210 15:34:40.091270 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhlq6\" (UniqueName: \"kubernetes.io/projected/43eded5c-00d1-4ae6-b30e-2c0c8d521325-kube-api-access-qhlq6\") pod \"cert-manager-cainjector-7f985d654d-ktrqg\" (UID: \"43eded5c-00d1-4ae6-b30e-2c0c8d521325\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-ktrqg" Dec 10 15:34:40 crc kubenswrapper[4755]: I1210 15:34:40.091340 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcff2\" (UniqueName: \"kubernetes.io/projected/d45a9cb9-b26c-4eb4-a597-24366eab31b6-kube-api-access-vcff2\") pod \"cert-manager-webhook-5655c58dd6-58jbs\" (UID: \"d45a9cb9-b26c-4eb4-a597-24366eab31b6\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-58jbs" Dec 10 15:34:40 crc kubenswrapper[4755]: I1210 15:34:40.193304 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhlq6\" (UniqueName: \"kubernetes.io/projected/43eded5c-00d1-4ae6-b30e-2c0c8d521325-kube-api-access-qhlq6\") pod \"cert-manager-cainjector-7f985d654d-ktrqg\" (UID: \"43eded5c-00d1-4ae6-b30e-2c0c8d521325\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-ktrqg" Dec 10 15:34:40 crc kubenswrapper[4755]: I1210 15:34:40.193384 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcff2\" (UniqueName: \"kubernetes.io/projected/d45a9cb9-b26c-4eb4-a597-24366eab31b6-kube-api-access-vcff2\") pod \"cert-manager-webhook-5655c58dd6-58jbs\" (UID: \"d45a9cb9-b26c-4eb4-a597-24366eab31b6\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-58jbs" Dec 10 15:34:40 crc kubenswrapper[4755]: I1210 
15:34:40.193504 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vql6w\" (UniqueName: \"kubernetes.io/projected/a03db35b-ea91-49fd-8658-7af8b10d927e-kube-api-access-vql6w\") pod \"cert-manager-5b446d88c5-mpd5f\" (UID: \"a03db35b-ea91-49fd-8658-7af8b10d927e\") " pod="cert-manager/cert-manager-5b446d88c5-mpd5f" Dec 10 15:34:40 crc kubenswrapper[4755]: I1210 15:34:40.212151 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcff2\" (UniqueName: \"kubernetes.io/projected/d45a9cb9-b26c-4eb4-a597-24366eab31b6-kube-api-access-vcff2\") pod \"cert-manager-webhook-5655c58dd6-58jbs\" (UID: \"d45a9cb9-b26c-4eb4-a597-24366eab31b6\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-58jbs" Dec 10 15:34:40 crc kubenswrapper[4755]: I1210 15:34:40.215241 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vql6w\" (UniqueName: \"kubernetes.io/projected/a03db35b-ea91-49fd-8658-7af8b10d927e-kube-api-access-vql6w\") pod \"cert-manager-5b446d88c5-mpd5f\" (UID: \"a03db35b-ea91-49fd-8658-7af8b10d927e\") " pod="cert-manager/cert-manager-5b446d88c5-mpd5f" Dec 10 15:34:40 crc kubenswrapper[4755]: I1210 15:34:40.216441 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhlq6\" (UniqueName: \"kubernetes.io/projected/43eded5c-00d1-4ae6-b30e-2c0c8d521325-kube-api-access-qhlq6\") pod \"cert-manager-cainjector-7f985d654d-ktrqg\" (UID: \"43eded5c-00d1-4ae6-b30e-2c0c8d521325\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-ktrqg" Dec 10 15:34:40 crc kubenswrapper[4755]: I1210 15:34:40.266807 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-ktrqg" Dec 10 15:34:40 crc kubenswrapper[4755]: I1210 15:34:40.282028 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-mpd5f" Dec 10 15:34:40 crc kubenswrapper[4755]: I1210 15:34:40.311784 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-58jbs" Dec 10 15:34:40 crc kubenswrapper[4755]: I1210 15:34:40.731011 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-ktrqg"] Dec 10 15:34:40 crc kubenswrapper[4755]: I1210 15:34:40.776188 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-58jbs"] Dec 10 15:34:40 crc kubenswrapper[4755]: I1210 15:34:40.782036 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-mpd5f"] Dec 10 15:34:40 crc kubenswrapper[4755]: W1210 15:34:40.799386 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda03db35b_ea91_49fd_8658_7af8b10d927e.slice/crio-9a7c28c19556df639959ecd8bb39a82e66bd892e2293428f06027ec82ff22a6a WatchSource:0}: Error finding container 9a7c28c19556df639959ecd8bb39a82e66bd892e2293428f06027ec82ff22a6a: Status 404 returned error can't find the container with id 9a7c28c19556df639959ecd8bb39a82e66bd892e2293428f06027ec82ff22a6a Dec 10 15:34:41 crc kubenswrapper[4755]: I1210 15:34:41.505534 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-58jbs" event={"ID":"d45a9cb9-b26c-4eb4-a597-24366eab31b6","Type":"ContainerStarted","Data":"85f29814c676b8252183877134eaaff0b40d1b4415715fab4dd73c54d4c23f4c"} Dec 10 15:34:41 crc kubenswrapper[4755]: I1210 15:34:41.507314 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-mpd5f" event={"ID":"a03db35b-ea91-49fd-8658-7af8b10d927e","Type":"ContainerStarted","Data":"9a7c28c19556df639959ecd8bb39a82e66bd892e2293428f06027ec82ff22a6a"} Dec 10 15:34:41 crc kubenswrapper[4755]: I1210 15:34:41.509626 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-ktrqg" event={"ID":"43eded5c-00d1-4ae6-b30e-2c0c8d521325","Type":"ContainerStarted","Data":"821f38084d1ddd4131022bd8139c984db7a788f1b5a1c9e04be14b6e9eb11385"} Dec 10 15:34:44 crc kubenswrapper[4755]: I1210 15:34:44.543809 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-mpd5f" event={"ID":"a03db35b-ea91-49fd-8658-7af8b10d927e","Type":"ContainerStarted","Data":"8a940e2511c41e9cc7f1a82490ecb021df545c8c996b5d3c82009abff5d1f715"} Dec 10 15:34:44 crc kubenswrapper[4755]: I1210 15:34:44.548802 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-ktrqg" event={"ID":"43eded5c-00d1-4ae6-b30e-2c0c8d521325","Type":"ContainerStarted","Data":"26f572285dfeec5fad84115248345ff121a3aed509cc4e9ed2aa9fa82a72a4bb"} Dec 10 15:34:44 crc kubenswrapper[4755]: I1210 15:34:44.549869 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-58jbs" event={"ID":"d45a9cb9-b26c-4eb4-a597-24366eab31b6","Type":"ContainerStarted","Data":"5c17e2f07e2dadeb18ffe22709881f297d1bd55640a43f162c44932694c02994"} Dec 10 15:34:44 crc kubenswrapper[4755]: I1210 15:34:44.550130 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-58jbs" Dec 10 15:34:44 crc kubenswrapper[4755]: I1210 15:34:44.597685 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-mpd5f" podStartSLOduration=2.404769372 podStartE2EDuration="5.597670093s" 
podCreationTimestamp="2025-12-10 15:34:39 +0000 UTC" firstStartedPulling="2025-12-10 15:34:40.803460086 +0000 UTC m=+677.404343718" lastFinishedPulling="2025-12-10 15:34:43.996360807 +0000 UTC m=+680.597244439" observedRunningTime="2025-12-10 15:34:44.566013369 +0000 UTC m=+681.166897021" watchObservedRunningTime="2025-12-10 15:34:44.597670093 +0000 UTC m=+681.198553725" Dec 10 15:34:44 crc kubenswrapper[4755]: I1210 15:34:44.597905 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-58jbs" podStartSLOduration=2.386454878 podStartE2EDuration="5.597901109s" podCreationTimestamp="2025-12-10 15:34:39 +0000 UTC" firstStartedPulling="2025-12-10 15:34:40.784101103 +0000 UTC m=+677.384984735" lastFinishedPulling="2025-12-10 15:34:43.995547334 +0000 UTC m=+680.596430966" observedRunningTime="2025-12-10 15:34:44.594698538 +0000 UTC m=+681.195582180" watchObservedRunningTime="2025-12-10 15:34:44.597901109 +0000 UTC m=+681.198784741" Dec 10 15:34:44 crc kubenswrapper[4755]: I1210 15:34:44.616979 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-ktrqg" podStartSLOduration=2.297350093 podStartE2EDuration="5.616960804s" podCreationTimestamp="2025-12-10 15:34:39 +0000 UTC" firstStartedPulling="2025-12-10 15:34:40.731207292 +0000 UTC m=+677.332090924" lastFinishedPulling="2025-12-10 15:34:44.050818003 +0000 UTC m=+680.651701635" observedRunningTime="2025-12-10 15:34:44.613973268 +0000 UTC m=+681.214856910" watchObservedRunningTime="2025-12-10 15:34:44.616960804 +0000 UTC m=+681.217844436" Dec 10 15:34:45 crc kubenswrapper[4755]: I1210 15:34:45.984263 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5446b9c989-rwkrm" Dec 10 15:34:50 crc kubenswrapper[4755]: I1210 15:34:50.322659 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-58jbs" Dec 10 15:35:15 crc kubenswrapper[4755]: I1210 15:35:15.189915 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c1t8zr8"] Dec 10 15:35:15 crc kubenswrapper[4755]: I1210 15:35:15.191773 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c1t8zr8" Dec 10 15:35:15 crc kubenswrapper[4755]: I1210 15:35:15.193781 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 10 15:35:15 crc kubenswrapper[4755]: I1210 15:35:15.201428 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c1t8zr8"] Dec 10 15:35:15 crc kubenswrapper[4755]: I1210 15:35:15.264779 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/54eaef18-b1aa-4151-99aa-9e758934bd5c-util\") pod \"7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c1t8zr8\" (UID: \"54eaef18-b1aa-4151-99aa-9e758934bd5c\") " pod="openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c1t8zr8" Dec 10 15:35:15 crc kubenswrapper[4755]: I1210 15:35:15.264842 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/54eaef18-b1aa-4151-99aa-9e758934bd5c-bundle\") pod \"7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c1t8zr8\" (UID: \"54eaef18-b1aa-4151-99aa-9e758934bd5c\") " pod="openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c1t8zr8" Dec 10 15:35:15 crc kubenswrapper[4755]: I1210 15:35:15.264868 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhhnx\" (UniqueName: \"kubernetes.io/projected/54eaef18-b1aa-4151-99aa-9e758934bd5c-kube-api-access-nhhnx\") pod \"7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c1t8zr8\" (UID: \"54eaef18-b1aa-4151-99aa-9e758934bd5c\") " pod="openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c1t8zr8" Dec 10 15:35:15 crc kubenswrapper[4755]: I1210 15:35:15.366197 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/54eaef18-b1aa-4151-99aa-9e758934bd5c-util\") pod \"7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c1t8zr8\" (UID: \"54eaef18-b1aa-4151-99aa-9e758934bd5c\") " pod="openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c1t8zr8" Dec 10 15:35:15 crc kubenswrapper[4755]: I1210 15:35:15.366257 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/54eaef18-b1aa-4151-99aa-9e758934bd5c-bundle\") pod \"7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c1t8zr8\" (UID: \"54eaef18-b1aa-4151-99aa-9e758934bd5c\") " pod="openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c1t8zr8" Dec 10 15:35:15 crc kubenswrapper[4755]: I1210 15:35:15.366290 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhhnx\" (UniqueName: \"kubernetes.io/projected/54eaef18-b1aa-4151-99aa-9e758934bd5c-kube-api-access-nhhnx\") pod \"7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c1t8zr8\" (UID: \"54eaef18-b1aa-4151-99aa-9e758934bd5c\") " pod="openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c1t8zr8" Dec 10 15:35:15 crc kubenswrapper[4755]: I1210 15:35:15.366690 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/54eaef18-b1aa-4151-99aa-9e758934bd5c-util\") pod \"7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c1t8zr8\" (UID: \"54eaef18-b1aa-4151-99aa-9e758934bd5c\") " pod="openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c1t8zr8" Dec 10 15:35:15 crc kubenswrapper[4755]: I1210 15:35:15.366722 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/54eaef18-b1aa-4151-99aa-9e758934bd5c-bundle\") pod \"7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c1t8zr8\" (UID: \"54eaef18-b1aa-4151-99aa-9e758934bd5c\") " pod="openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c1t8zr8" Dec 10 15:35:15 crc kubenswrapper[4755]: I1210 15:35:15.383510 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhhnx\" (UniqueName: \"kubernetes.io/projected/54eaef18-b1aa-4151-99aa-9e758934bd5c-kube-api-access-nhhnx\") pod \"7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c1t8zr8\" (UID: \"54eaef18-b1aa-4151-99aa-9e758934bd5c\") " pod="openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c1t8zr8" Dec 10 15:35:15 crc kubenswrapper[4755]: I1210 15:35:15.508829 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c1t8zr8" Dec 10 15:35:15 crc kubenswrapper[4755]: I1210 15:35:15.828776 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c1t8zr8"] Dec 10 15:35:16 crc kubenswrapper[4755]: I1210 15:35:16.733453 4755 generic.go:334] "Generic (PLEG): container finished" podID="54eaef18-b1aa-4151-99aa-9e758934bd5c" containerID="8ca6b8b2bd24a5946518b677afd1bf9ee6aa1f3838de0c70ad5bab4d1ce0f6f3" exitCode=0 Dec 10 15:35:16 crc kubenswrapper[4755]: I1210 15:35:16.733559 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c1t8zr8" event={"ID":"54eaef18-b1aa-4151-99aa-9e758934bd5c","Type":"ContainerDied","Data":"8ca6b8b2bd24a5946518b677afd1bf9ee6aa1f3838de0c70ad5bab4d1ce0f6f3"} Dec 10 15:35:16 crc kubenswrapper[4755]: I1210 15:35:16.733982 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c1t8zr8" event={"ID":"54eaef18-b1aa-4151-99aa-9e758934bd5c","Type":"ContainerStarted","Data":"d1cb4c30300588de0c5118bcf392ce33f8d72b24df1b6365c324b1fa1e7bc1e6"} Dec 10 15:35:17 crc kubenswrapper[4755]: I1210 15:35:17.465897 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"] Dec 10 15:35:17 crc kubenswrapper[4755]: I1210 15:35:17.466554 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="minio-dev/minio" Dec 10 15:35:17 crc kubenswrapper[4755]: I1210 15:35:17.468411 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt" Dec 10 15:35:17 crc kubenswrapper[4755]: I1210 15:35:17.468496 4755 reflector.go:368] Caches populated for *v1.Secret from object-"minio-dev"/"default-dockercfg-vdb55" Dec 10 15:35:17 crc kubenswrapper[4755]: I1210 15:35:17.468766 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt" Dec 10 15:35:17 crc kubenswrapper[4755]: I1210 15:35:17.485823 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Dec 10 15:35:17 crc kubenswrapper[4755]: I1210 15:35:17.597770 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-defaf9f8-ce8b-4380-ae73-47fd9ddaf116\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-defaf9f8-ce8b-4380-ae73-47fd9ddaf116\") pod \"minio\" (UID: \"72309b0b-1923-4244-a862-9213b3184def\") " pod="minio-dev/minio" Dec 10 15:35:17 crc kubenswrapper[4755]: I1210 15:35:17.597838 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zdmm\" (UniqueName: \"kubernetes.io/projected/72309b0b-1923-4244-a862-9213b3184def-kube-api-access-9zdmm\") pod \"minio\" (UID: \"72309b0b-1923-4244-a862-9213b3184def\") " pod="minio-dev/minio" Dec 10 15:35:17 crc kubenswrapper[4755]: I1210 15:35:17.699268 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-defaf9f8-ce8b-4380-ae73-47fd9ddaf116\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-defaf9f8-ce8b-4380-ae73-47fd9ddaf116\") pod \"minio\" (UID: \"72309b0b-1923-4244-a862-9213b3184def\") " pod="minio-dev/minio" Dec 10 15:35:17 crc kubenswrapper[4755]: I1210 15:35:17.699329 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zdmm\" (UniqueName: \"kubernetes.io/projected/72309b0b-1923-4244-a862-9213b3184def-kube-api-access-9zdmm\") pod \"minio\" (UID: \"72309b0b-1923-4244-a862-9213b3184def\") " pod="minio-dev/minio" Dec 10 15:35:17 crc kubenswrapper[4755]: I1210 15:35:17.702374 4755 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 10 15:35:17 crc kubenswrapper[4755]: I1210 15:35:17.702548 4755 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-defaf9f8-ce8b-4380-ae73-47fd9ddaf116\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-defaf9f8-ce8b-4380-ae73-47fd9ddaf116\") pod \"minio\" (UID: \"72309b0b-1923-4244-a862-9213b3184def\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f2fea7cf5ec257ea3ba6f1eabefdb708c6b80c4743e8a005525482ef1b08d5c4/globalmount\"" pod="minio-dev/minio" Dec 10 15:35:17 crc kubenswrapper[4755]: I1210 15:35:17.726610 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-defaf9f8-ce8b-4380-ae73-47fd9ddaf116\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-defaf9f8-ce8b-4380-ae73-47fd9ddaf116\") pod \"minio\" (UID: \"72309b0b-1923-4244-a862-9213b3184def\") " pod="minio-dev/minio" Dec 10 15:35:17 crc kubenswrapper[4755]: I1210 15:35:17.735688 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zdmm\" (UniqueName: \"kubernetes.io/projected/72309b0b-1923-4244-a862-9213b3184def-kube-api-access-9zdmm\") pod \"minio\" (UID: \"72309b0b-1923-4244-a862-9213b3184def\") " pod="minio-dev/minio" Dec 10 15:35:17 crc kubenswrapper[4755]: I1210 15:35:17.804066 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Dec 10 15:35:18 crc kubenswrapper[4755]: I1210 15:35:18.199568 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Dec 10 15:35:18 crc kubenswrapper[4755]: W1210 15:35:18.240852 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod72309b0b_1923_4244_a862_9213b3184def.slice/crio-2280bca65b00d710b38a63d4819d04ef8586eed84a65fc20395c3bc20f4a037f WatchSource:0}: Error finding container 2280bca65b00d710b38a63d4819d04ef8586eed84a65fc20395c3bc20f4a037f: Status 404 returned error can't find the container with id 2280bca65b00d710b38a63d4819d04ef8586eed84a65fc20395c3bc20f4a037f Dec 10 15:35:18 crc kubenswrapper[4755]: I1210 15:35:18.747196 4755 generic.go:334] "Generic (PLEG): container finished" podID="54eaef18-b1aa-4151-99aa-9e758934bd5c" containerID="3a2c071770f008d8ebfa5c6ab8799e27020e76590400f86067e1d08203097d57" exitCode=0 Dec 10 15:35:18 crc kubenswrapper[4755]: I1210 15:35:18.747261 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c1t8zr8" event={"ID":"54eaef18-b1aa-4151-99aa-9e758934bd5c","Type":"ContainerDied","Data":"3a2c071770f008d8ebfa5c6ab8799e27020e76590400f86067e1d08203097d57"} Dec 10 15:35:18 crc kubenswrapper[4755]: I1210 15:35:18.749782 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"72309b0b-1923-4244-a862-9213b3184def","Type":"ContainerStarted","Data":"2280bca65b00d710b38a63d4819d04ef8586eed84a65fc20395c3bc20f4a037f"} Dec 10 15:35:19 crc kubenswrapper[4755]: I1210 15:35:19.758695 4755 generic.go:334] "Generic (PLEG): container finished" podID="54eaef18-b1aa-4151-99aa-9e758934bd5c" containerID="5d2c5cd29fdf6a98797112ae0de175a1a5414020cc207c3fff24cf1dc0579c55" exitCode=0 Dec 10 15:35:19 crc kubenswrapper[4755]: I1210 15:35:19.764970 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c1t8zr8" 
event={"ID":"54eaef18-b1aa-4151-99aa-9e758934bd5c","Type":"ContainerDied","Data":"5d2c5cd29fdf6a98797112ae0de175a1a5414020cc207c3fff24cf1dc0579c55"} Dec 10 15:35:21 crc kubenswrapper[4755]: I1210 15:35:21.060166 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c1t8zr8" Dec 10 15:35:21 crc kubenswrapper[4755]: I1210 15:35:21.154530 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/54eaef18-b1aa-4151-99aa-9e758934bd5c-util\") pod \"54eaef18-b1aa-4151-99aa-9e758934bd5c\" (UID: \"54eaef18-b1aa-4151-99aa-9e758934bd5c\") " Dec 10 15:35:21 crc kubenswrapper[4755]: I1210 15:35:21.154597 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhhnx\" (UniqueName: \"kubernetes.io/projected/54eaef18-b1aa-4151-99aa-9e758934bd5c-kube-api-access-nhhnx\") pod \"54eaef18-b1aa-4151-99aa-9e758934bd5c\" (UID: \"54eaef18-b1aa-4151-99aa-9e758934bd5c\") " Dec 10 15:35:21 crc kubenswrapper[4755]: I1210 15:35:21.154623 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/54eaef18-b1aa-4151-99aa-9e758934bd5c-bundle\") pod \"54eaef18-b1aa-4151-99aa-9e758934bd5c\" (UID: \"54eaef18-b1aa-4151-99aa-9e758934bd5c\") " Dec 10 15:35:21 crc kubenswrapper[4755]: I1210 15:35:21.155946 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54eaef18-b1aa-4151-99aa-9e758934bd5c-bundle" (OuterVolumeSpecName: "bundle") pod "54eaef18-b1aa-4151-99aa-9e758934bd5c" (UID: "54eaef18-b1aa-4151-99aa-9e758934bd5c"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:35:21 crc kubenswrapper[4755]: I1210 15:35:21.160701 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54eaef18-b1aa-4151-99aa-9e758934bd5c-kube-api-access-nhhnx" (OuterVolumeSpecName: "kube-api-access-nhhnx") pod "54eaef18-b1aa-4151-99aa-9e758934bd5c" (UID: "54eaef18-b1aa-4151-99aa-9e758934bd5c"). InnerVolumeSpecName "kube-api-access-nhhnx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:35:21 crc kubenswrapper[4755]: I1210 15:35:21.169409 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54eaef18-b1aa-4151-99aa-9e758934bd5c-util" (OuterVolumeSpecName: "util") pod "54eaef18-b1aa-4151-99aa-9e758934bd5c" (UID: "54eaef18-b1aa-4151-99aa-9e758934bd5c"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:35:21 crc kubenswrapper[4755]: I1210 15:35:21.256587 4755 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/54eaef18-b1aa-4151-99aa-9e758934bd5c-util\") on node \"crc\" DevicePath \"\"" Dec 10 15:35:21 crc kubenswrapper[4755]: I1210 15:35:21.256615 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhhnx\" (UniqueName: \"kubernetes.io/projected/54eaef18-b1aa-4151-99aa-9e758934bd5c-kube-api-access-nhhnx\") on node \"crc\" DevicePath \"\"" Dec 10 15:35:21 crc kubenswrapper[4755]: I1210 15:35:21.256627 4755 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/54eaef18-b1aa-4151-99aa-9e758934bd5c-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:35:21 crc kubenswrapper[4755]: I1210 15:35:21.771262 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c1t8zr8" event={"ID":"54eaef18-b1aa-4151-99aa-9e758934bd5c","Type":"ContainerDied","Data":"d1cb4c30300588de0c5118bcf392ce33f8d72b24df1b6365c324b1fa1e7bc1e6"} Dec 10 15:35:21 crc kubenswrapper[4755]: I1210 15:35:21.771624 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1cb4c30300588de0c5118bcf392ce33f8d72b24df1b6365c324b1fa1e7bc1e6" Dec 10 15:35:21 crc kubenswrapper[4755]: I1210 15:35:21.771299 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c1t8zr8" Dec 10 15:35:21 crc kubenswrapper[4755]: I1210 15:35:21.772721 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"72309b0b-1923-4244-a862-9213b3184def","Type":"ContainerStarted","Data":"d5195ea0a665511946803c169f7c88b3fea8203664cc07b517e5917139d11585"} Dec 10 15:35:21 crc kubenswrapper[4755]: I1210 15:35:21.791839 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=4.956856904 podStartE2EDuration="7.791823119s" podCreationTimestamp="2025-12-10 15:35:14 +0000 UTC" firstStartedPulling="2025-12-10 15:35:18.242627193 +0000 UTC m=+714.843510825" lastFinishedPulling="2025-12-10 15:35:21.077593408 +0000 UTC m=+717.678477040" observedRunningTime="2025-12-10 15:35:21.788946382 +0000 UTC m=+718.389830014" watchObservedRunningTime="2025-12-10 15:35:21.791823119 +0000 UTC m=+718.392706751" Dec 10 15:35:27 crc kubenswrapper[4755]: I1210 15:35:27.705495 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-55ff878876-td264"] Dec 10 15:35:27 crc kubenswrapper[4755]: E1210 15:35:27.706299 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54eaef18-b1aa-4151-99aa-9e758934bd5c" containerName="extract" Dec 10 15:35:27 crc kubenswrapper[4755]: I1210 15:35:27.706314 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="54eaef18-b1aa-4151-99aa-9e758934bd5c" containerName="extract" Dec 10 15:35:27 crc kubenswrapper[4755]: E1210 15:35:27.706326 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54eaef18-b1aa-4151-99aa-9e758934bd5c" containerName="util" Dec 10 15:35:27 crc kubenswrapper[4755]: I1210 15:35:27.706334 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="54eaef18-b1aa-4151-99aa-9e758934bd5c" containerName="util" Dec 10 15:35:27 crc kubenswrapper[4755]: E1210 
15:35:27.706349 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54eaef18-b1aa-4151-99aa-9e758934bd5c" containerName="pull" Dec 10 15:35:27 crc kubenswrapper[4755]: I1210 15:35:27.706356 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="54eaef18-b1aa-4151-99aa-9e758934bd5c" containerName="pull" Dec 10 15:35:27 crc kubenswrapper[4755]: I1210 15:35:27.706501 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="54eaef18-b1aa-4151-99aa-9e758934bd5c" containerName="extract" Dec 10 15:35:27 crc kubenswrapper[4755]: I1210 15:35:27.707236 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-55ff878876-td264" Dec 10 15:35:27 crc kubenswrapper[4755]: I1210 15:35:27.709378 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert" Dec 10 15:35:27 crc kubenswrapper[4755]: I1210 15:35:27.709431 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics" Dec 10 15:35:27 crc kubenswrapper[4755]: I1210 15:35:27.709674 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt" Dec 10 15:35:27 crc kubenswrapper[4755]: I1210 15:35:27.709730 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config" Dec 10 15:35:27 crc kubenswrapper[4755]: I1210 15:35:27.711918 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-lqnns" Dec 10 15:35:27 crc kubenswrapper[4755]: I1210 15:35:27.713652 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt" Dec 10 15:35:27 crc kubenswrapper[4755]: I1210 15:35:27.719679 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-55ff878876-td264"] Dec 10 15:35:27 crc kubenswrapper[4755]: I1210 15:35:27.745382 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ce2a9f6f-bc3d-472a-b820-51118827c3b6-webhook-cert\") pod \"loki-operator-controller-manager-55ff878876-td264\" (UID: \"ce2a9f6f-bc3d-472a-b820-51118827c3b6\") " pod="openshift-operators-redhat/loki-operator-controller-manager-55ff878876-td264" Dec 10 15:35:27 crc kubenswrapper[4755]: I1210 15:35:27.745508 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ce2a9f6f-bc3d-472a-b820-51118827c3b6-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-55ff878876-td264\" (UID: \"ce2a9f6f-bc3d-472a-b820-51118827c3b6\") " pod="openshift-operators-redhat/loki-operator-controller-manager-55ff878876-td264" Dec 10 15:35:27 crc kubenswrapper[4755]: I1210 15:35:27.745544 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/ce2a9f6f-bc3d-472a-b820-51118827c3b6-manager-config\") pod \"loki-operator-controller-manager-55ff878876-td264\" (UID: \"ce2a9f6f-bc3d-472a-b820-51118827c3b6\") " pod="openshift-operators-redhat/loki-operator-controller-manager-55ff878876-td264" Dec 10 15:35:27 crc 
kubenswrapper[4755]: I1210 15:35:27.745583 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ce2a9f6f-bc3d-472a-b820-51118827c3b6-apiservice-cert\") pod \"loki-operator-controller-manager-55ff878876-td264\" (UID: \"ce2a9f6f-bc3d-472a-b820-51118827c3b6\") " pod="openshift-operators-redhat/loki-operator-controller-manager-55ff878876-td264" Dec 10 15:35:27 crc kubenswrapper[4755]: I1210 15:35:27.745618 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hggs4\" (UniqueName: \"kubernetes.io/projected/ce2a9f6f-bc3d-472a-b820-51118827c3b6-kube-api-access-hggs4\") pod \"loki-operator-controller-manager-55ff878876-td264\" (UID: \"ce2a9f6f-bc3d-472a-b820-51118827c3b6\") " pod="openshift-operators-redhat/loki-operator-controller-manager-55ff878876-td264" Dec 10 15:35:27 crc kubenswrapper[4755]: I1210 15:35:27.846963 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ce2a9f6f-bc3d-472a-b820-51118827c3b6-webhook-cert\") pod \"loki-operator-controller-manager-55ff878876-td264\" (UID: \"ce2a9f6f-bc3d-472a-b820-51118827c3b6\") " pod="openshift-operators-redhat/loki-operator-controller-manager-55ff878876-td264" Dec 10 15:35:27 crc kubenswrapper[4755]: I1210 15:35:27.847012 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ce2a9f6f-bc3d-472a-b820-51118827c3b6-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-55ff878876-td264\" (UID: \"ce2a9f6f-bc3d-472a-b820-51118827c3b6\") " pod="openshift-operators-redhat/loki-operator-controller-manager-55ff878876-td264" Dec 10 15:35:27 crc kubenswrapper[4755]: I1210 15:35:27.847050 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/ce2a9f6f-bc3d-472a-b820-51118827c3b6-manager-config\") pod \"loki-operator-controller-manager-55ff878876-td264\" (UID: \"ce2a9f6f-bc3d-472a-b820-51118827c3b6\") " pod="openshift-operators-redhat/loki-operator-controller-manager-55ff878876-td264" Dec 10 15:35:27 crc kubenswrapper[4755]: I1210 15:35:27.847093 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ce2a9f6f-bc3d-472a-b820-51118827c3b6-apiservice-cert\") pod \"loki-operator-controller-manager-55ff878876-td264\" (UID: \"ce2a9f6f-bc3d-472a-b820-51118827c3b6\") " pod="openshift-operators-redhat/loki-operator-controller-manager-55ff878876-td264" Dec 10 15:35:27 crc kubenswrapper[4755]: I1210 15:35:27.847123 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hggs4\" (UniqueName: \"kubernetes.io/projected/ce2a9f6f-bc3d-472a-b820-51118827c3b6-kube-api-access-hggs4\") pod \"loki-operator-controller-manager-55ff878876-td264\" (UID: \"ce2a9f6f-bc3d-472a-b820-51118827c3b6\") " pod="openshift-operators-redhat/loki-operator-controller-manager-55ff878876-td264" Dec 10 15:35:27 crc kubenswrapper[4755]: I1210 15:35:27.849038 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/ce2a9f6f-bc3d-472a-b820-51118827c3b6-manager-config\") pod \"loki-operator-controller-manager-55ff878876-td264\" (UID: \"ce2a9f6f-bc3d-472a-b820-51118827c3b6\") " 
pod="openshift-operators-redhat/loki-operator-controller-manager-55ff878876-td264" Dec 10 15:35:27 crc kubenswrapper[4755]: I1210 15:35:27.853365 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ce2a9f6f-bc3d-472a-b820-51118827c3b6-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-55ff878876-td264\" (UID: \"ce2a9f6f-bc3d-472a-b820-51118827c3b6\") " pod="openshift-operators-redhat/loki-operator-controller-manager-55ff878876-td264" Dec 10 15:35:27 crc kubenswrapper[4755]: I1210 15:35:27.854258 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ce2a9f6f-bc3d-472a-b820-51118827c3b6-webhook-cert\") pod \"loki-operator-controller-manager-55ff878876-td264\" (UID: \"ce2a9f6f-bc3d-472a-b820-51118827c3b6\") " pod="openshift-operators-redhat/loki-operator-controller-manager-55ff878876-td264" Dec 10 15:35:27 crc kubenswrapper[4755]: I1210 15:35:27.856026 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ce2a9f6f-bc3d-472a-b820-51118827c3b6-apiservice-cert\") pod \"loki-operator-controller-manager-55ff878876-td264\" (UID: \"ce2a9f6f-bc3d-472a-b820-51118827c3b6\") " pod="openshift-operators-redhat/loki-operator-controller-manager-55ff878876-td264" Dec 10 15:35:27 crc kubenswrapper[4755]: I1210 15:35:27.885288 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hggs4\" (UniqueName: \"kubernetes.io/projected/ce2a9f6f-bc3d-472a-b820-51118827c3b6-kube-api-access-hggs4\") pod \"loki-operator-controller-manager-55ff878876-td264\" (UID: \"ce2a9f6f-bc3d-472a-b820-51118827c3b6\") " pod="openshift-operators-redhat/loki-operator-controller-manager-55ff878876-td264" Dec 10 15:35:28 crc kubenswrapper[4755]: I1210 15:35:28.020516 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-55ff878876-td264" Dec 10 15:35:28 crc kubenswrapper[4755]: I1210 15:35:28.222647 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-55ff878876-td264"] Dec 10 15:35:28 crc kubenswrapper[4755]: I1210 15:35:28.823677 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-55ff878876-td264" event={"ID":"ce2a9f6f-bc3d-472a-b820-51118827c3b6","Type":"ContainerStarted","Data":"ad74b737401375b4f7f3b7846199699f7b117d3af7691673e4042df7c49e5d13"} Dec 10 15:35:32 crc kubenswrapper[4755]: I1210 15:35:32.855009 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-55ff878876-td264" event={"ID":"ce2a9f6f-bc3d-472a-b820-51118827c3b6","Type":"ContainerStarted","Data":"4163142720a374e0fa97f3badc22b4c7558f15335f8c2be94d8f9b64e5404a0e"} Dec 10 15:35:39 crc kubenswrapper[4755]: I1210 15:35:39.903747 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-55ff878876-td264" event={"ID":"ce2a9f6f-bc3d-472a-b820-51118827c3b6","Type":"ContainerStarted","Data":"8697664e7ecc536e3f3e4681f6d5d6d0c52ddaf3cf7f512805a95f14b1dfe796"} Dec 10 15:35:39 crc kubenswrapper[4755]: I1210 15:35:39.904223 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-55ff878876-td264" Dec 10 15:35:39 crc kubenswrapper[4755]: I1210 15:35:39.907716 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-55ff878876-td264" Dec 10 15:35:39 crc kubenswrapper[4755]: I1210 15:35:39.934183 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-55ff878876-td264" podStartSLOduration=2.34129895 podStartE2EDuration="12.934157895s" podCreationTimestamp="2025-12-10 15:35:27 +0000 UTC" firstStartedPulling="2025-12-10 15:35:28.239360315 +0000 UTC m=+724.840243947" lastFinishedPulling="2025-12-10 15:35:38.83221926 +0000 UTC m=+735.433102892" observedRunningTime="2025-12-10 15:35:39.924003823 +0000 UTC m=+736.524887455" watchObservedRunningTime="2025-12-10 15:35:39.934157895 +0000 UTC m=+736.535041547" Dec 10 15:35:40 crc kubenswrapper[4755]: I1210 15:35:40.359991 4755 patch_prober.go:28] interesting pod/machine-config-daemon-ggt8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:35:40 crc kubenswrapper[4755]: I1210 15:35:40.360079 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:35:53 crc kubenswrapper[4755]: I1210 15:35:53.145882 4755 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 10 15:36:10 crc kubenswrapper[4755]: I1210 15:36:10.359103 4755 patch_prober.go:28] interesting pod/machine-config-daemon-ggt8v 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:36:10 crc kubenswrapper[4755]: I1210 15:36:10.359674 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:36:13 crc kubenswrapper[4755]: I1210 15:36:13.419460 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8rvt9"] Dec 10 15:36:13 crc kubenswrapper[4755]: I1210 15:36:13.421290 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8rvt9" Dec 10 15:36:13 crc kubenswrapper[4755]: I1210 15:36:13.422956 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 10 15:36:13 crc kubenswrapper[4755]: I1210 15:36:13.433521 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8rvt9"] Dec 10 15:36:13 crc kubenswrapper[4755]: I1210 15:36:13.558410 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52tv9\" (UniqueName: \"kubernetes.io/projected/1be44708-dd96-4718-b835-4b4a8b9e5b9f-kube-api-access-52tv9\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8rvt9\" (UID: \"1be44708-dd96-4718-b835-4b4a8b9e5b9f\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8rvt9" Dec 10 15:36:13 crc kubenswrapper[4755]: I1210 15:36:13.558528 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1be44708-dd96-4718-b835-4b4a8b9e5b9f-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8rvt9\" (UID: \"1be44708-dd96-4718-b835-4b4a8b9e5b9f\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8rvt9" Dec 10 15:36:13 crc kubenswrapper[4755]: I1210 15:36:13.558648 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1be44708-dd96-4718-b835-4b4a8b9e5b9f-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8rvt9\" (UID: \"1be44708-dd96-4718-b835-4b4a8b9e5b9f\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8rvt9" Dec 10 15:36:13 crc kubenswrapper[4755]: I1210 15:36:13.659483 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1be44708-dd96-4718-b835-4b4a8b9e5b9f-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8rvt9\" (UID: \"1be44708-dd96-4718-b835-4b4a8b9e5b9f\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8rvt9" Dec 10 15:36:13 crc kubenswrapper[4755]: I1210 15:36:13.659582 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52tv9\" (UniqueName: 
\"kubernetes.io/projected/1be44708-dd96-4718-b835-4b4a8b9e5b9f-kube-api-access-52tv9\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8rvt9\" (UID: \"1be44708-dd96-4718-b835-4b4a8b9e5b9f\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8rvt9" Dec 10 15:36:13 crc kubenswrapper[4755]: I1210 15:36:13.659613 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1be44708-dd96-4718-b835-4b4a8b9e5b9f-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8rvt9\" (UID: \"1be44708-dd96-4718-b835-4b4a8b9e5b9f\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8rvt9" Dec 10 15:36:13 crc kubenswrapper[4755]: I1210 15:36:13.660091 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1be44708-dd96-4718-b835-4b4a8b9e5b9f-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8rvt9\" (UID: \"1be44708-dd96-4718-b835-4b4a8b9e5b9f\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8rvt9" Dec 10 15:36:13 crc kubenswrapper[4755]: I1210 15:36:13.660148 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1be44708-dd96-4718-b835-4b4a8b9e5b9f-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8rvt9\" (UID: \"1be44708-dd96-4718-b835-4b4a8b9e5b9f\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8rvt9" Dec 10 15:36:13 crc kubenswrapper[4755]: I1210 15:36:13.683428 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52tv9\" (UniqueName: \"kubernetes.io/projected/1be44708-dd96-4718-b835-4b4a8b9e5b9f-kube-api-access-52tv9\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8rvt9\" (UID: \"1be44708-dd96-4718-b835-4b4a8b9e5b9f\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8rvt9" Dec 10 15:36:13 crc kubenswrapper[4755]: I1210 15:36:13.738858 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8rvt9" Dec 10 15:36:14 crc kubenswrapper[4755]: I1210 15:36:14.162311 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8rvt9"] Dec 10 15:36:15 crc kubenswrapper[4755]: I1210 15:36:15.122309 4755 generic.go:334] "Generic (PLEG): container finished" podID="1be44708-dd96-4718-b835-4b4a8b9e5b9f" containerID="335f96eb000fbd610a98d27a243a7bf7b358dcd776fba41715ea1f40bf7b613e" exitCode=0 Dec 10 15:36:15 crc kubenswrapper[4755]: I1210 15:36:15.122369 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8rvt9" event={"ID":"1be44708-dd96-4718-b835-4b4a8b9e5b9f","Type":"ContainerDied","Data":"335f96eb000fbd610a98d27a243a7bf7b358dcd776fba41715ea1f40bf7b613e"} Dec 10 15:36:15 crc kubenswrapper[4755]: I1210 15:36:15.122407 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8rvt9" event={"ID":"1be44708-dd96-4718-b835-4b4a8b9e5b9f","Type":"ContainerStarted","Data":"beb24d50fed1793433ee051cef3048f10413dd877ec36ef6d60ab1e13e8ed718"} Dec 10 15:36:15 crc kubenswrapper[4755]: I1210 15:36:15.782505 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qsprs"] Dec 10 15:36:15 crc kubenswrapper[4755]: I1210 15:36:15.783937 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qsprs" Dec 10 15:36:15 crc kubenswrapper[4755]: I1210 15:36:15.789773 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c21b61a9-4d26-4470-bd78-e96601706cf2-catalog-content\") pod \"redhat-operators-qsprs\" (UID: \"c21b61a9-4d26-4470-bd78-e96601706cf2\") " pod="openshift-marketplace/redhat-operators-qsprs" Dec 10 15:36:15 crc kubenswrapper[4755]: I1210 15:36:15.789859 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56t8d\" (UniqueName: \"kubernetes.io/projected/c21b61a9-4d26-4470-bd78-e96601706cf2-kube-api-access-56t8d\") pod \"redhat-operators-qsprs\" (UID: \"c21b61a9-4d26-4470-bd78-e96601706cf2\") " pod="openshift-marketplace/redhat-operators-qsprs" Dec 10 15:36:15 crc kubenswrapper[4755]: I1210 15:36:15.789983 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c21b61a9-4d26-4470-bd78-e96601706cf2-utilities\") pod \"redhat-operators-qsprs\" (UID: \"c21b61a9-4d26-4470-bd78-e96601706cf2\") " pod="openshift-marketplace/redhat-operators-qsprs" Dec 10 15:36:15 crc kubenswrapper[4755]: I1210 15:36:15.795774 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qsprs"] Dec 10 15:36:15 crc kubenswrapper[4755]: I1210 15:36:15.891630 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c21b61a9-4d26-4470-bd78-e96601706cf2-catalog-content\") pod \"redhat-operators-qsprs\" (UID: \"c21b61a9-4d26-4470-bd78-e96601706cf2\") " pod="openshift-marketplace/redhat-operators-qsprs" Dec 10 15:36:15 crc kubenswrapper[4755]: I1210 15:36:15.891679 4755 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-56t8d\" (UniqueName: \"kubernetes.io/projected/c21b61a9-4d26-4470-bd78-e96601706cf2-kube-api-access-56t8d\") pod \"redhat-operators-qsprs\" (UID: \"c21b61a9-4d26-4470-bd78-e96601706cf2\") " pod="openshift-marketplace/redhat-operators-qsprs" Dec 10 15:36:15 crc kubenswrapper[4755]: I1210 15:36:15.891717 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c21b61a9-4d26-4470-bd78-e96601706cf2-utilities\") pod \"redhat-operators-qsprs\" (UID: \"c21b61a9-4d26-4470-bd78-e96601706cf2\") " pod="openshift-marketplace/redhat-operators-qsprs" Dec 10 15:36:15 crc kubenswrapper[4755]: I1210 15:36:15.892187 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c21b61a9-4d26-4470-bd78-e96601706cf2-catalog-content\") pod \"redhat-operators-qsprs\" (UID: \"c21b61a9-4d26-4470-bd78-e96601706cf2\") " pod="openshift-marketplace/redhat-operators-qsprs" Dec 10 15:36:15 crc kubenswrapper[4755]: I1210 15:36:15.892246 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c21b61a9-4d26-4470-bd78-e96601706cf2-utilities\") pod \"redhat-operators-qsprs\" (UID: \"c21b61a9-4d26-4470-bd78-e96601706cf2\") " pod="openshift-marketplace/redhat-operators-qsprs" Dec 10 15:36:15 crc kubenswrapper[4755]: I1210 15:36:15.914709 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56t8d\" (UniqueName: \"kubernetes.io/projected/c21b61a9-4d26-4470-bd78-e96601706cf2-kube-api-access-56t8d\") pod \"redhat-operators-qsprs\" (UID: \"c21b61a9-4d26-4470-bd78-e96601706cf2\") " pod="openshift-marketplace/redhat-operators-qsprs" Dec 10 15:36:16 crc kubenswrapper[4755]: I1210 15:36:16.108247 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qsprs" Dec 10 15:36:16 crc kubenswrapper[4755]: I1210 15:36:16.377347 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qsprs"] Dec 10 15:36:16 crc kubenswrapper[4755]: W1210 15:36:16.382009 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc21b61a9_4d26_4470_bd78_e96601706cf2.slice/crio-866e23ae7f024b49b3ab303e280c1668a6383eadb590eccc3709c1216264a293 WatchSource:0}: Error finding container 866e23ae7f024b49b3ab303e280c1668a6383eadb590eccc3709c1216264a293: Status 404 returned error can't find the container with id 866e23ae7f024b49b3ab303e280c1668a6383eadb590eccc3709c1216264a293 Dec 10 15:36:17 crc kubenswrapper[4755]: I1210 15:36:17.134357 4755 generic.go:334] "Generic (PLEG): container finished" podID="c21b61a9-4d26-4470-bd78-e96601706cf2" containerID="ae89ef090b857a0c5fe137736b3f9ec1e227777e3ab1bec9c62278a56c8bb3b9" exitCode=0 Dec 10 15:36:17 crc kubenswrapper[4755]: I1210 15:36:17.134400 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qsprs" event={"ID":"c21b61a9-4d26-4470-bd78-e96601706cf2","Type":"ContainerDied","Data":"ae89ef090b857a0c5fe137736b3f9ec1e227777e3ab1bec9c62278a56c8bb3b9"} Dec 10 15:36:17 crc kubenswrapper[4755]: I1210 15:36:17.134425 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qsprs" event={"ID":"c21b61a9-4d26-4470-bd78-e96601706cf2","Type":"ContainerStarted","Data":"866e23ae7f024b49b3ab303e280c1668a6383eadb590eccc3709c1216264a293"} Dec 10 15:36:20 crc kubenswrapper[4755]: I1210 15:36:20.169735 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qsprs" event={"ID":"c21b61a9-4d26-4470-bd78-e96601706cf2","Type":"ContainerStarted","Data":"8b0716be4b1770eabdf360350d29772a132e55be0fcafc76ab8182dd9ee03ff2"} Dec 10 15:36:20 crc kubenswrapper[4755]: I1210 15:36:20.174545 4755 generic.go:334] "Generic (PLEG): container finished" podID="1be44708-dd96-4718-b835-4b4a8b9e5b9f" containerID="1bce86fb54246c0f8b7f21d14dafb148cc2c738aa5532000de148a599c85f6c3" exitCode=0 Dec 10 15:36:20 crc kubenswrapper[4755]: I1210 15:36:20.174599 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8rvt9" event={"ID":"1be44708-dd96-4718-b835-4b4a8b9e5b9f","Type":"ContainerDied","Data":"1bce86fb54246c0f8b7f21d14dafb148cc2c738aa5532000de148a599c85f6c3"} Dec 10 15:36:21 crc kubenswrapper[4755]: I1210 15:36:21.183384 4755 generic.go:334] "Generic (PLEG): container finished" podID="c21b61a9-4d26-4470-bd78-e96601706cf2" containerID="8b0716be4b1770eabdf360350d29772a132e55be0fcafc76ab8182dd9ee03ff2" exitCode=0 Dec 10 15:36:21 crc kubenswrapper[4755]: I1210 15:36:21.183443 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qsprs" event={"ID":"c21b61a9-4d26-4470-bd78-e96601706cf2","Type":"ContainerDied","Data":"8b0716be4b1770eabdf360350d29772a132e55be0fcafc76ab8182dd9ee03ff2"} Dec 10 15:36:21 crc kubenswrapper[4755]: I1210 15:36:21.185549 4755 generic.go:334] "Generic (PLEG): container finished" podID="1be44708-dd96-4718-b835-4b4a8b9e5b9f" containerID="26bb446bd9a5320accbaf4e64b2dcbf89f9463790f9fce50bb99e4500089db1c" exitCode=0 Dec 10 15:36:21 crc kubenswrapper[4755]: I1210 15:36:21.185577 4755 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8rvt9" event={"ID":"1be44708-dd96-4718-b835-4b4a8b9e5b9f","Type":"ContainerDied","Data":"26bb446bd9a5320accbaf4e64b2dcbf89f9463790f9fce50bb99e4500089db1c"} Dec 10 15:36:22 crc kubenswrapper[4755]: I1210 15:36:22.192375 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qsprs" event={"ID":"c21b61a9-4d26-4470-bd78-e96601706cf2","Type":"ContainerStarted","Data":"52c79af40edbc1ba0e2de040cbfda3e6756364cbc0beac3d67a9c578b96eb08c"} Dec 10 15:36:22 crc kubenswrapper[4755]: I1210 15:36:22.214363 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qsprs" podStartSLOduration=2.746617628 podStartE2EDuration="7.214339226s" podCreationTimestamp="2025-12-10 15:36:15 +0000 UTC" firstStartedPulling="2025-12-10 15:36:17.135719242 +0000 UTC m=+773.736602874" lastFinishedPulling="2025-12-10 15:36:21.60344084 +0000 UTC m=+778.204324472" observedRunningTime="2025-12-10 15:36:22.210115229 +0000 UTC m=+778.810998881" watchObservedRunningTime="2025-12-10 15:36:22.214339226 +0000 UTC m=+778.815222848" Dec 10 15:36:22 crc kubenswrapper[4755]: I1210 15:36:22.418027 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8rvt9" Dec 10 15:36:22 crc kubenswrapper[4755]: I1210 15:36:22.479344 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1be44708-dd96-4718-b835-4b4a8b9e5b9f-bundle\") pod \"1be44708-dd96-4718-b835-4b4a8b9e5b9f\" (UID: \"1be44708-dd96-4718-b835-4b4a8b9e5b9f\") " Dec 10 15:36:22 crc kubenswrapper[4755]: I1210 15:36:22.479508 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52tv9\" (UniqueName: \"kubernetes.io/projected/1be44708-dd96-4718-b835-4b4a8b9e5b9f-kube-api-access-52tv9\") pod \"1be44708-dd96-4718-b835-4b4a8b9e5b9f\" (UID: \"1be44708-dd96-4718-b835-4b4a8b9e5b9f\") " Dec 10 15:36:22 crc kubenswrapper[4755]: I1210 15:36:22.479652 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1be44708-dd96-4718-b835-4b4a8b9e5b9f-util\") pod \"1be44708-dd96-4718-b835-4b4a8b9e5b9f\" (UID: \"1be44708-dd96-4718-b835-4b4a8b9e5b9f\") " Dec 10 15:36:22 crc kubenswrapper[4755]: I1210 15:36:22.481179 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1be44708-dd96-4718-b835-4b4a8b9e5b9f-bundle" (OuterVolumeSpecName: "bundle") pod "1be44708-dd96-4718-b835-4b4a8b9e5b9f" (UID: "1be44708-dd96-4718-b835-4b4a8b9e5b9f"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:36:22 crc kubenswrapper[4755]: I1210 15:36:22.486218 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1be44708-dd96-4718-b835-4b4a8b9e5b9f-kube-api-access-52tv9" (OuterVolumeSpecName: "kube-api-access-52tv9") pod "1be44708-dd96-4718-b835-4b4a8b9e5b9f" (UID: "1be44708-dd96-4718-b835-4b4a8b9e5b9f"). InnerVolumeSpecName "kube-api-access-52tv9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:36:22 crc kubenswrapper[4755]: I1210 15:36:22.489515 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1be44708-dd96-4718-b835-4b4a8b9e5b9f-util" (OuterVolumeSpecName: "util") pod "1be44708-dd96-4718-b835-4b4a8b9e5b9f" (UID: "1be44708-dd96-4718-b835-4b4a8b9e5b9f"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:36:22 crc kubenswrapper[4755]: I1210 15:36:22.580779 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52tv9\" (UniqueName: \"kubernetes.io/projected/1be44708-dd96-4718-b835-4b4a8b9e5b9f-kube-api-access-52tv9\") on node \"crc\" DevicePath \"\"" Dec 10 15:36:22 crc kubenswrapper[4755]: I1210 15:36:22.581005 4755 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1be44708-dd96-4718-b835-4b4a8b9e5b9f-util\") on node \"crc\" DevicePath \"\"" Dec 10 15:36:22 crc kubenswrapper[4755]: I1210 15:36:22.581084 4755 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1be44708-dd96-4718-b835-4b4a8b9e5b9f-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:36:23 crc kubenswrapper[4755]: I1210 15:36:23.201788 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8rvt9" event={"ID":"1be44708-dd96-4718-b835-4b4a8b9e5b9f","Type":"ContainerDied","Data":"beb24d50fed1793433ee051cef3048f10413dd877ec36ef6d60ab1e13e8ed718"} Dec 10 15:36:23 crc kubenswrapper[4755]: I1210 15:36:23.201845 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="beb24d50fed1793433ee051cef3048f10413dd877ec36ef6d60ab1e13e8ed718" Dec 10 15:36:23 crc kubenswrapper[4755]: I1210 15:36:23.201815 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8rvt9" Dec 10 15:36:26 crc kubenswrapper[4755]: I1210 15:36:26.108782 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qsprs" Dec 10 15:36:26 crc kubenswrapper[4755]: I1210 15:36:26.109079 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qsprs" Dec 10 15:36:27 crc kubenswrapper[4755]: I1210 15:36:27.170840 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qsprs" podUID="c21b61a9-4d26-4470-bd78-e96601706cf2" containerName="registry-server" probeResult="failure" output=< Dec 10 15:36:27 crc kubenswrapper[4755]: timeout: failed to connect service ":50051" within 1s Dec 10 15:36:27 crc kubenswrapper[4755]: > Dec 10 15:36:30 crc kubenswrapper[4755]: I1210 15:36:30.189812 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-v9zk6"] Dec 10 15:36:30 crc kubenswrapper[4755]: E1210 15:36:30.190364 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1be44708-dd96-4718-b835-4b4a8b9e5b9f" containerName="pull" Dec 10 15:36:30 crc kubenswrapper[4755]: I1210 15:36:30.190380 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="1be44708-dd96-4718-b835-4b4a8b9e5b9f" containerName="pull" Dec 10 15:36:30 crc kubenswrapper[4755]: E1210 15:36:30.190395 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1be44708-dd96-4718-b835-4b4a8b9e5b9f" containerName="util" Dec 10 15:36:30 crc kubenswrapper[4755]: I1210 15:36:30.190402 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="1be44708-dd96-4718-b835-4b4a8b9e5b9f" containerName="util" Dec 10 15:36:30 crc kubenswrapper[4755]: E1210 15:36:30.190412 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1be44708-dd96-4718-b835-4b4a8b9e5b9f" containerName="extract" Dec 10 15:36:30 crc kubenswrapper[4755]: I1210 15:36:30.190420 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="1be44708-dd96-4718-b835-4b4a8b9e5b9f" containerName="extract" Dec 10 15:36:30 crc kubenswrapper[4755]: I1210 15:36:30.190566 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="1be44708-dd96-4718-b835-4b4a8b9e5b9f" containerName="extract" Dec 10 15:36:30 crc kubenswrapper[4755]: I1210 15:36:30.191077 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-v9zk6" Dec 10 15:36:30 crc kubenswrapper[4755]: I1210 15:36:30.193519 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Dec 10 15:36:30 crc kubenswrapper[4755]: I1210 15:36:30.193607 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Dec 10 15:36:30 crc kubenswrapper[4755]: I1210 15:36:30.193715 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-f7x6f" Dec 10 15:36:30 crc kubenswrapper[4755]: I1210 15:36:30.200133 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-v9zk6"] Dec 10 15:36:30 crc kubenswrapper[4755]: I1210 15:36:30.393244 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jhqp\" (UniqueName: \"kubernetes.io/projected/a0cfbb00-d9e1-46c2-a3b7-f6f5fc8c95c2-kube-api-access-6jhqp\") pod \"nmstate-operator-5b5b58f5c8-v9zk6\" (UID: \"a0cfbb00-d9e1-46c2-a3b7-f6f5fc8c95c2\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-v9zk6" Dec 10 15:36:30 crc kubenswrapper[4755]: I1210 15:36:30.494526 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jhqp\" (UniqueName: \"kubernetes.io/projected/a0cfbb00-d9e1-46c2-a3b7-f6f5fc8c95c2-kube-api-access-6jhqp\") pod \"nmstate-operator-5b5b58f5c8-v9zk6\" (UID: \"a0cfbb00-d9e1-46c2-a3b7-f6f5fc8c95c2\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-v9zk6" Dec 10 15:36:30 crc kubenswrapper[4755]: I1210 15:36:30.517724 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jhqp\" (UniqueName: \"kubernetes.io/projected/a0cfbb00-d9e1-46c2-a3b7-f6f5fc8c95c2-kube-api-access-6jhqp\") pod \"nmstate-operator-5b5b58f5c8-v9zk6\" (UID: \"a0cfbb00-d9e1-46c2-a3b7-f6f5fc8c95c2\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-v9zk6" Dec 10 15:36:30 crc kubenswrapper[4755]: I1210 15:36:30.810447 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-v9zk6" Dec 10 15:36:31 crc kubenswrapper[4755]: I1210 15:36:31.295713 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-v9zk6"] Dec 10 15:36:32 crc kubenswrapper[4755]: I1210 15:36:32.254804 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-v9zk6" event={"ID":"a0cfbb00-d9e1-46c2-a3b7-f6f5fc8c95c2","Type":"ContainerStarted","Data":"9cb066093f0da36e275ace551dde574649717e5c10fa0b71039243eb7bdf0ed5"} Dec 10 15:36:34 crc kubenswrapper[4755]: I1210 15:36:34.268984 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-v9zk6" event={"ID":"a0cfbb00-d9e1-46c2-a3b7-f6f5fc8c95c2","Type":"ContainerStarted","Data":"45e66344735c75883b7dad11971f79a9ba58dad03a50277933e841aaac0b4899"} Dec 10 15:36:34 crc kubenswrapper[4755]: I1210 15:36:34.283579 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-v9zk6" podStartSLOduration=1.9124840920000001 podStartE2EDuration="4.283560313s" podCreationTimestamp="2025-12-10 15:36:30 +0000 UTC" firstStartedPulling="2025-12-10 15:36:31.309666148 +0000 UTC m=+787.910549780" lastFinishedPulling="2025-12-10 15:36:33.680742369 +0000 UTC m=+790.281626001" observedRunningTime="2025-12-10 15:36:34.281312881 +0000 UTC m=+790.882196513" watchObservedRunningTime="2025-12-10 15:36:34.283560313 +0000 UTC m=+790.884443945" Dec 10 15:36:35 crc kubenswrapper[4755]: I1210 15:36:35.259567 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-2lfwl"] Dec 10 15:36:35 crc kubenswrapper[4755]: I1210 15:36:35.260737 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-2lfwl" Dec 10 15:36:35 crc kubenswrapper[4755]: I1210 15:36:35.264009 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-gqn5b" Dec 10 15:36:35 crc kubenswrapper[4755]: I1210 15:36:35.269383 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-8mfv6"] Dec 10 15:36:35 crc kubenswrapper[4755]: I1210 15:36:35.270341 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-8mfv6" Dec 10 15:36:35 crc kubenswrapper[4755]: I1210 15:36:35.292142 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Dec 10 15:36:35 crc kubenswrapper[4755]: I1210 15:36:35.300378 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-2lfwl"] Dec 10 15:36:35 crc kubenswrapper[4755]: I1210 15:36:35.344626 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-8mfv6"] Dec 10 15:36:35 crc kubenswrapper[4755]: I1210 15:36:35.372639 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-jjt7x"] Dec 10 15:36:35 crc kubenswrapper[4755]: I1210 15:36:35.373996 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-jjt7x" Dec 10 15:36:35 crc kubenswrapper[4755]: I1210 15:36:35.414966 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-kb7wz"] Dec 10 15:36:35 crc kubenswrapper[4755]: I1210 15:36:35.415882 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-kb7wz" Dec 10 15:36:35 crc kubenswrapper[4755]: I1210 15:36:35.418404 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Dec 10 15:36:35 crc kubenswrapper[4755]: I1210 15:36:35.418728 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-rqtsz" Dec 10 15:36:35 crc kubenswrapper[4755]: I1210 15:36:35.419234 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Dec 10 15:36:35 crc kubenswrapper[4755]: I1210 15:36:35.438858 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-kb7wz"] Dec 10 15:36:35 crc kubenswrapper[4755]: I1210 15:36:35.461197 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxpn6\" (UniqueName: \"kubernetes.io/projected/03a03d51-0d67-4cf5-b102-74f7d787298e-kube-api-access-lxpn6\") pod \"nmstate-metrics-7f946cbc9-2lfwl\" (UID: \"03a03d51-0d67-4cf5-b102-74f7d787298e\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-2lfwl" Dec 10 15:36:35 crc kubenswrapper[4755]: I1210 15:36:35.461255 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-772l4\" (UniqueName: \"kubernetes.io/projected/a1d36f33-a6fd-4c6c-9739-fb7bfc94ca98-kube-api-access-772l4\") pod \"nmstate-webhook-5f6d4c5ccb-8mfv6\" (UID: \"a1d36f33-a6fd-4c6c-9739-fb7bfc94ca98\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-8mfv6" Dec 10 15:36:35 crc kubenswrapper[4755]: I1210 15:36:35.461276 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/a1d36f33-a6fd-4c6c-9739-fb7bfc94ca98-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-8mfv6\" (UID: \"a1d36f33-a6fd-4c6c-9739-fb7bfc94ca98\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-8mfv6" Dec 10 15:36:35 crc kubenswrapper[4755]: I1210 15:36:35.562421 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/75ec7fc8-4770-41e7-9d6a-d9a43d832125-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-kb7wz\" (UID: \"75ec7fc8-4770-41e7-9d6a-d9a43d832125\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-kb7wz" Dec 10 15:36:35 crc kubenswrapper[4755]: I1210 15:36:35.562478 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/31a7ad79-a502-42d0-ab81-6e22092f7c9e-dbus-socket\") pod \"nmstate-handler-jjt7x\" (UID: \"31a7ad79-a502-42d0-ab81-6e22092f7c9e\") " pod="openshift-nmstate/nmstate-handler-jjt7x" Dec 10 15:36:35 crc kubenswrapper[4755]: I1210 15:36:35.563041 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/31a7ad79-a502-42d0-ab81-6e22092f7c9e-ovs-socket\") pod 
\"nmstate-handler-jjt7x\" (UID: \"31a7ad79-a502-42d0-ab81-6e22092f7c9e\") " pod="openshift-nmstate/nmstate-handler-jjt7x" Dec 10 15:36:35 crc kubenswrapper[4755]: I1210 15:36:35.563079 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvp44\" (UniqueName: \"kubernetes.io/projected/31a7ad79-a502-42d0-ab81-6e22092f7c9e-kube-api-access-qvp44\") pod \"nmstate-handler-jjt7x\" (UID: \"31a7ad79-a502-42d0-ab81-6e22092f7c9e\") " pod="openshift-nmstate/nmstate-handler-jjt7x" Dec 10 15:36:35 crc kubenswrapper[4755]: I1210 15:36:35.563136 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/75ec7fc8-4770-41e7-9d6a-d9a43d832125-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-kb7wz\" (UID: \"75ec7fc8-4770-41e7-9d6a-d9a43d832125\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-kb7wz" Dec 10 15:36:35 crc kubenswrapper[4755]: I1210 15:36:35.563162 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9mpp\" (UniqueName: \"kubernetes.io/projected/75ec7fc8-4770-41e7-9d6a-d9a43d832125-kube-api-access-k9mpp\") pod \"nmstate-console-plugin-7fbb5f6569-kb7wz\" (UID: \"75ec7fc8-4770-41e7-9d6a-d9a43d832125\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-kb7wz" Dec 10 15:36:35 crc kubenswrapper[4755]: I1210 15:36:35.563193 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxpn6\" (UniqueName: \"kubernetes.io/projected/03a03d51-0d67-4cf5-b102-74f7d787298e-kube-api-access-lxpn6\") pod \"nmstate-metrics-7f946cbc9-2lfwl\" (UID: \"03a03d51-0d67-4cf5-b102-74f7d787298e\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-2lfwl" Dec 10 15:36:35 crc kubenswrapper[4755]: I1210 15:36:35.563233 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-772l4\" (UniqueName: \"kubernetes.io/projected/a1d36f33-a6fd-4c6c-9739-fb7bfc94ca98-kube-api-access-772l4\") pod \"nmstate-webhook-5f6d4c5ccb-8mfv6\" (UID: \"a1d36f33-a6fd-4c6c-9739-fb7bfc94ca98\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-8mfv6" Dec 10 15:36:35 crc kubenswrapper[4755]: I1210 15:36:35.563262 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/a1d36f33-a6fd-4c6c-9739-fb7bfc94ca98-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-8mfv6\" (UID: \"a1d36f33-a6fd-4c6c-9739-fb7bfc94ca98\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-8mfv6" Dec 10 15:36:35 crc kubenswrapper[4755]: I1210 15:36:35.563318 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/31a7ad79-a502-42d0-ab81-6e22092f7c9e-nmstate-lock\") pod \"nmstate-handler-jjt7x\" (UID: \"31a7ad79-a502-42d0-ab81-6e22092f7c9e\") " pod="openshift-nmstate/nmstate-handler-jjt7x" Dec 10 15:36:35 crc kubenswrapper[4755]: E1210 15:36:35.563514 4755 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Dec 10 15:36:35 crc kubenswrapper[4755]: E1210 15:36:35.563569 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a1d36f33-a6fd-4c6c-9739-fb7bfc94ca98-tls-key-pair podName:a1d36f33-a6fd-4c6c-9739-fb7bfc94ca98 nodeName:}" failed. 
No retries permitted until 2025-12-10 15:36:36.063552907 +0000 UTC m=+792.664436539 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/a1d36f33-a6fd-4c6c-9739-fb7bfc94ca98-tls-key-pair") pod "nmstate-webhook-5f6d4c5ccb-8mfv6" (UID: "a1d36f33-a6fd-4c6c-9739-fb7bfc94ca98") : secret "openshift-nmstate-webhook" not found Dec 10 15:36:35 crc kubenswrapper[4755]: I1210 15:36:35.584873 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxpn6\" (UniqueName: \"kubernetes.io/projected/03a03d51-0d67-4cf5-b102-74f7d787298e-kube-api-access-lxpn6\") pod \"nmstate-metrics-7f946cbc9-2lfwl\" (UID: \"03a03d51-0d67-4cf5-b102-74f7d787298e\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-2lfwl" Dec 10 15:36:35 crc kubenswrapper[4755]: I1210 15:36:35.585363 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-772l4\" (UniqueName: \"kubernetes.io/projected/a1d36f33-a6fd-4c6c-9739-fb7bfc94ca98-kube-api-access-772l4\") pod \"nmstate-webhook-5f6d4c5ccb-8mfv6\" (UID: \"a1d36f33-a6fd-4c6c-9739-fb7bfc94ca98\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-8mfv6" Dec 10 15:36:35 crc kubenswrapper[4755]: I1210 15:36:35.625220 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-b47d6c489-jr8kz"] Dec 10 15:36:35 crc kubenswrapper[4755]: I1210 15:36:35.626102 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-b47d6c489-jr8kz" Dec 10 15:36:35 crc kubenswrapper[4755]: I1210 15:36:35.639814 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-2lfwl" Dec 10 15:36:35 crc kubenswrapper[4755]: I1210 15:36:35.640300 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-b47d6c489-jr8kz"] Dec 10 15:36:35 crc kubenswrapper[4755]: I1210 15:36:35.664507 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2r6qj\" (UniqueName: \"kubernetes.io/projected/7a2ce695-3b6e-431a-9222-b621f0f6412b-kube-api-access-2r6qj\") pod \"console-b47d6c489-jr8kz\" (UID: \"7a2ce695-3b6e-431a-9222-b621f0f6412b\") " pod="openshift-console/console-b47d6c489-jr8kz" Dec 10 15:36:35 crc kubenswrapper[4755]: I1210 15:36:35.664569 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/31a7ad79-a502-42d0-ab81-6e22092f7c9e-ovs-socket\") pod \"nmstate-handler-jjt7x\" (UID: \"31a7ad79-a502-42d0-ab81-6e22092f7c9e\") " pod="openshift-nmstate/nmstate-handler-jjt7x" Dec 10 15:36:35 crc kubenswrapper[4755]: I1210 15:36:35.664593 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7a2ce695-3b6e-431a-9222-b621f0f6412b-console-oauth-config\") pod \"console-b47d6c489-jr8kz\" (UID: \"7a2ce695-3b6e-431a-9222-b621f0f6412b\") " pod="openshift-console/console-b47d6c489-jr8kz" Dec 10 15:36:35 crc kubenswrapper[4755]: I1210 15:36:35.664616 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvp44\" (UniqueName: \"kubernetes.io/projected/31a7ad79-a502-42d0-ab81-6e22092f7c9e-kube-api-access-qvp44\") pod \"nmstate-handler-jjt7x\" (UID: \"31a7ad79-a502-42d0-ab81-6e22092f7c9e\") " pod="openshift-nmstate/nmstate-handler-jjt7x" Dec 10 15:36:35 crc 
kubenswrapper[4755]: I1210 15:36:35.664653 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/75ec7fc8-4770-41e7-9d6a-d9a43d832125-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-kb7wz\" (UID: \"75ec7fc8-4770-41e7-9d6a-d9a43d832125\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-kb7wz" Dec 10 15:36:35 crc kubenswrapper[4755]: I1210 15:36:35.664676 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7a2ce695-3b6e-431a-9222-b621f0f6412b-service-ca\") pod \"console-b47d6c489-jr8kz\" (UID: \"7a2ce695-3b6e-431a-9222-b621f0f6412b\") " pod="openshift-console/console-b47d6c489-jr8kz" Dec 10 15:36:35 crc kubenswrapper[4755]: I1210 15:36:35.664703 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9mpp\" (UniqueName: \"kubernetes.io/projected/75ec7fc8-4770-41e7-9d6a-d9a43d832125-kube-api-access-k9mpp\") pod \"nmstate-console-plugin-7fbb5f6569-kb7wz\" (UID: \"75ec7fc8-4770-41e7-9d6a-d9a43d832125\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-kb7wz" Dec 10 15:36:35 crc kubenswrapper[4755]: I1210 15:36:35.664728 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a2ce695-3b6e-431a-9222-b621f0f6412b-trusted-ca-bundle\") pod \"console-b47d6c489-jr8kz\" (UID: \"7a2ce695-3b6e-431a-9222-b621f0f6412b\") " pod="openshift-console/console-b47d6c489-jr8kz" Dec 10 15:36:35 crc kubenswrapper[4755]: I1210 15:36:35.664783 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/31a7ad79-a502-42d0-ab81-6e22092f7c9e-nmstate-lock\") pod \"nmstate-handler-jjt7x\" (UID: \"31a7ad79-a502-42d0-ab81-6e22092f7c9e\") " pod="openshift-nmstate/nmstate-handler-jjt7x" Dec 10 15:36:35 crc kubenswrapper[4755]: I1210 15:36:35.664809 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/75ec7fc8-4770-41e7-9d6a-d9a43d832125-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-kb7wz\" (UID: \"75ec7fc8-4770-41e7-9d6a-d9a43d832125\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-kb7wz" Dec 10 15:36:35 crc kubenswrapper[4755]: I1210 15:36:35.664831 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/31a7ad79-a502-42d0-ab81-6e22092f7c9e-dbus-socket\") pod \"nmstate-handler-jjt7x\" (UID: \"31a7ad79-a502-42d0-ab81-6e22092f7c9e\") " pod="openshift-nmstate/nmstate-handler-jjt7x" Dec 10 15:36:35 crc kubenswrapper[4755]: I1210 15:36:35.664854 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7a2ce695-3b6e-431a-9222-b621f0f6412b-oauth-serving-cert\") pod \"console-b47d6c489-jr8kz\" (UID: \"7a2ce695-3b6e-431a-9222-b621f0f6412b\") " pod="openshift-console/console-b47d6c489-jr8kz" Dec 10 15:36:35 crc kubenswrapper[4755]: I1210 15:36:35.664901 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7a2ce695-3b6e-431a-9222-b621f0f6412b-console-config\") pod \"console-b47d6c489-jr8kz\" (UID: 
\"7a2ce695-3b6e-431a-9222-b621f0f6412b\") " pod="openshift-console/console-b47d6c489-jr8kz" Dec 10 15:36:35 crc kubenswrapper[4755]: I1210 15:36:35.664922 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7a2ce695-3b6e-431a-9222-b621f0f6412b-console-serving-cert\") pod \"console-b47d6c489-jr8kz\" (UID: \"7a2ce695-3b6e-431a-9222-b621f0f6412b\") " pod="openshift-console/console-b47d6c489-jr8kz" Dec 10 15:36:35 crc kubenswrapper[4755]: I1210 15:36:35.665007 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/31a7ad79-a502-42d0-ab81-6e22092f7c9e-ovs-socket\") pod \"nmstate-handler-jjt7x\" (UID: \"31a7ad79-a502-42d0-ab81-6e22092f7c9e\") " pod="openshift-nmstate/nmstate-handler-jjt7x" Dec 10 15:36:35 crc kubenswrapper[4755]: I1210 15:36:35.666120 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/31a7ad79-a502-42d0-ab81-6e22092f7c9e-nmstate-lock\") pod \"nmstate-handler-jjt7x\" (UID: \"31a7ad79-a502-42d0-ab81-6e22092f7c9e\") " pod="openshift-nmstate/nmstate-handler-jjt7x" Dec 10 15:36:35 crc kubenswrapper[4755]: I1210 15:36:35.666638 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/31a7ad79-a502-42d0-ab81-6e22092f7c9e-dbus-socket\") pod \"nmstate-handler-jjt7x\" (UID: \"31a7ad79-a502-42d0-ab81-6e22092f7c9e\") " pod="openshift-nmstate/nmstate-handler-jjt7x" Dec 10 15:36:35 crc kubenswrapper[4755]: I1210 15:36:35.667387 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/75ec7fc8-4770-41e7-9d6a-d9a43d832125-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-kb7wz\" (UID: \"75ec7fc8-4770-41e7-9d6a-d9a43d832125\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-kb7wz" Dec 10 15:36:35 crc kubenswrapper[4755]: I1210 15:36:35.669391 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/75ec7fc8-4770-41e7-9d6a-d9a43d832125-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-kb7wz\" (UID: \"75ec7fc8-4770-41e7-9d6a-d9a43d832125\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-kb7wz" Dec 10 15:36:35 crc kubenswrapper[4755]: I1210 15:36:35.684210 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvp44\" (UniqueName: \"kubernetes.io/projected/31a7ad79-a502-42d0-ab81-6e22092f7c9e-kube-api-access-qvp44\") pod \"nmstate-handler-jjt7x\" (UID: \"31a7ad79-a502-42d0-ab81-6e22092f7c9e\") " pod="openshift-nmstate/nmstate-handler-jjt7x" Dec 10 15:36:35 crc kubenswrapper[4755]: I1210 15:36:35.689412 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9mpp\" (UniqueName: \"kubernetes.io/projected/75ec7fc8-4770-41e7-9d6a-d9a43d832125-kube-api-access-k9mpp\") pod \"nmstate-console-plugin-7fbb5f6569-kb7wz\" (UID: \"75ec7fc8-4770-41e7-9d6a-d9a43d832125\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-kb7wz" Dec 10 15:36:35 crc kubenswrapper[4755]: I1210 15:36:35.691278 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-jjt7x" Dec 10 15:36:35 crc kubenswrapper[4755]: I1210 15:36:35.738877 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-kb7wz" Dec 10 15:36:35 crc kubenswrapper[4755]: I1210 15:36:35.765962 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7a2ce695-3b6e-431a-9222-b621f0f6412b-console-config\") pod \"console-b47d6c489-jr8kz\" (UID: \"7a2ce695-3b6e-431a-9222-b621f0f6412b\") " pod="openshift-console/console-b47d6c489-jr8kz" Dec 10 15:36:35 crc kubenswrapper[4755]: I1210 15:36:35.766020 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7a2ce695-3b6e-431a-9222-b621f0f6412b-console-serving-cert\") pod \"console-b47d6c489-jr8kz\" (UID: \"7a2ce695-3b6e-431a-9222-b621f0f6412b\") " pod="openshift-console/console-b47d6c489-jr8kz" Dec 10 15:36:35 crc kubenswrapper[4755]: I1210 15:36:35.766045 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2r6qj\" (UniqueName: \"kubernetes.io/projected/7a2ce695-3b6e-431a-9222-b621f0f6412b-kube-api-access-2r6qj\") pod \"console-b47d6c489-jr8kz\" (UID: \"7a2ce695-3b6e-431a-9222-b621f0f6412b\") " pod="openshift-console/console-b47d6c489-jr8kz" Dec 10 15:36:35 crc kubenswrapper[4755]: I1210 15:36:35.766091 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7a2ce695-3b6e-431a-9222-b621f0f6412b-console-oauth-config\") pod \"console-b47d6c489-jr8kz\" (UID: \"7a2ce695-3b6e-431a-9222-b621f0f6412b\") " pod="openshift-console/console-b47d6c489-jr8kz" Dec 10 15:36:35 crc kubenswrapper[4755]: I1210 15:36:35.766149 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7a2ce695-3b6e-431a-9222-b621f0f6412b-service-ca\") pod \"console-b47d6c489-jr8kz\" (UID: \"7a2ce695-3b6e-431a-9222-b621f0f6412b\") " pod="openshift-console/console-b47d6c489-jr8kz" Dec 10 15:36:35 crc kubenswrapper[4755]: I1210 15:36:35.766176 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a2ce695-3b6e-431a-9222-b621f0f6412b-trusted-ca-bundle\") pod \"console-b47d6c489-jr8kz\" (UID: \"7a2ce695-3b6e-431a-9222-b621f0f6412b\") " pod="openshift-console/console-b47d6c489-jr8kz" Dec 10 15:36:35 crc kubenswrapper[4755]: I1210 15:36:35.766250 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7a2ce695-3b6e-431a-9222-b621f0f6412b-oauth-serving-cert\") pod \"console-b47d6c489-jr8kz\" (UID: \"7a2ce695-3b6e-431a-9222-b621f0f6412b\") " pod="openshift-console/console-b47d6c489-jr8kz" Dec 10 15:36:35 crc kubenswrapper[4755]: I1210 15:36:35.768019 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a2ce695-3b6e-431a-9222-b621f0f6412b-trusted-ca-bundle\") pod \"console-b47d6c489-jr8kz\" (UID: \"7a2ce695-3b6e-431a-9222-b621f0f6412b\") " pod="openshift-console/console-b47d6c489-jr8kz" Dec 10 15:36:35 crc kubenswrapper[4755]: I1210 15:36:35.769267 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7a2ce695-3b6e-431a-9222-b621f0f6412b-service-ca\") pod \"console-b47d6c489-jr8kz\" (UID: \"7a2ce695-3b6e-431a-9222-b621f0f6412b\") " 
pod="openshift-console/console-b47d6c489-jr8kz" Dec 10 15:36:35 crc kubenswrapper[4755]: I1210 15:36:35.769371 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7a2ce695-3b6e-431a-9222-b621f0f6412b-oauth-serving-cert\") pod \"console-b47d6c489-jr8kz\" (UID: \"7a2ce695-3b6e-431a-9222-b621f0f6412b\") " pod="openshift-console/console-b47d6c489-jr8kz" Dec 10 15:36:35 crc kubenswrapper[4755]: I1210 15:36:35.770002 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7a2ce695-3b6e-431a-9222-b621f0f6412b-console-config\") pod \"console-b47d6c489-jr8kz\" (UID: \"7a2ce695-3b6e-431a-9222-b621f0f6412b\") " pod="openshift-console/console-b47d6c489-jr8kz" Dec 10 15:36:35 crc kubenswrapper[4755]: I1210 15:36:35.770240 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7a2ce695-3b6e-431a-9222-b621f0f6412b-console-serving-cert\") pod \"console-b47d6c489-jr8kz\" (UID: \"7a2ce695-3b6e-431a-9222-b621f0f6412b\") " pod="openshift-console/console-b47d6c489-jr8kz" Dec 10 15:36:35 crc kubenswrapper[4755]: I1210 15:36:35.771511 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7a2ce695-3b6e-431a-9222-b621f0f6412b-console-oauth-config\") pod \"console-b47d6c489-jr8kz\" (UID: \"7a2ce695-3b6e-431a-9222-b621f0f6412b\") " pod="openshift-console/console-b47d6c489-jr8kz" Dec 10 15:36:35 crc kubenswrapper[4755]: I1210 15:36:35.786295 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2r6qj\" (UniqueName: \"kubernetes.io/projected/7a2ce695-3b6e-431a-9222-b621f0f6412b-kube-api-access-2r6qj\") pod \"console-b47d6c489-jr8kz\" (UID: \"7a2ce695-3b6e-431a-9222-b621f0f6412b\") " pod="openshift-console/console-b47d6c489-jr8kz" Dec 10 15:36:35 crc kubenswrapper[4755]: I1210 15:36:35.889483 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-2lfwl"] Dec 10 15:36:35 crc kubenswrapper[4755]: I1210 15:36:35.944503 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-kb7wz"] Dec 10 15:36:35 crc kubenswrapper[4755]: I1210 15:36:35.951235 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-b47d6c489-jr8kz" Dec 10 15:36:36 crc kubenswrapper[4755]: I1210 15:36:36.075012 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/a1d36f33-a6fd-4c6c-9739-fb7bfc94ca98-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-8mfv6\" (UID: \"a1d36f33-a6fd-4c6c-9739-fb7bfc94ca98\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-8mfv6" Dec 10 15:36:36 crc kubenswrapper[4755]: I1210 15:36:36.079887 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/a1d36f33-a6fd-4c6c-9739-fb7bfc94ca98-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-8mfv6\" (UID: \"a1d36f33-a6fd-4c6c-9739-fb7bfc94ca98\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-8mfv6" Dec 10 15:36:36 crc kubenswrapper[4755]: I1210 15:36:36.151026 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-b47d6c489-jr8kz"] Dec 10 15:36:36 crc kubenswrapper[4755]: W1210 15:36:36.153304 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a2ce695_3b6e_431a_9222_b621f0f6412b.slice/crio-7f986bee94e048cff62dd03c45f1832e79ee96d0a562c41b43c5e729f6a8cdda WatchSource:0}: Error finding container 7f986bee94e048cff62dd03c45f1832e79ee96d0a562c41b43c5e729f6a8cdda: Status 404 returned error can't find the container with id 7f986bee94e048cff62dd03c45f1832e79ee96d0a562c41b43c5e729f6a8cdda Dec 10 15:36:36 crc kubenswrapper[4755]: I1210 15:36:36.163154 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qsprs" Dec 10 15:36:36 crc kubenswrapper[4755]: I1210 15:36:36.217236 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qsprs" Dec 10 15:36:36 crc kubenswrapper[4755]: I1210 15:36:36.250690 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-8mfv6" Dec 10 15:36:36 crc kubenswrapper[4755]: I1210 15:36:36.298020 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-kb7wz" event={"ID":"75ec7fc8-4770-41e7-9d6a-d9a43d832125","Type":"ContainerStarted","Data":"cb14b4a5dfdddafb3de81fd06facee156ede9d0183a8197b554d7a19fd567880"} Dec 10 15:36:36 crc kubenswrapper[4755]: I1210 15:36:36.302594 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-jjt7x" event={"ID":"31a7ad79-a502-42d0-ab81-6e22092f7c9e","Type":"ContainerStarted","Data":"9a3bc6e155404685b450deaa4f901197d91308153ab4dc4109a79e17aa215872"} Dec 10 15:36:36 crc kubenswrapper[4755]: I1210 15:36:36.304023 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-b47d6c489-jr8kz" event={"ID":"7a2ce695-3b6e-431a-9222-b621f0f6412b","Type":"ContainerStarted","Data":"7f986bee94e048cff62dd03c45f1832e79ee96d0a562c41b43c5e729f6a8cdda"} Dec 10 15:36:36 crc kubenswrapper[4755]: I1210 15:36:36.305624 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-2lfwl" event={"ID":"03a03d51-0d67-4cf5-b102-74f7d787298e","Type":"ContainerStarted","Data":"6fc66113f1645ad629a7473d6421268cbf68663eed42376decbd9b05f52b4484"} Dec 10 15:36:36 crc kubenswrapper[4755]: I1210 15:36:36.440541 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-8mfv6"] Dec 10 15:36:36 crc kubenswrapper[4755]: W1210 15:36:36.447899 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1d36f33_a6fd_4c6c_9739_fb7bfc94ca98.slice/crio-8716e0d19d2516b82ae590791dccc497199516d13ce9d9d39ba40cba7ec07f8c WatchSource:0}: Error finding container 8716e0d19d2516b82ae590791dccc497199516d13ce9d9d39ba40cba7ec07f8c: Status 404 returned error can't find the container with id 8716e0d19d2516b82ae590791dccc497199516d13ce9d9d39ba40cba7ec07f8c Dec 10 15:36:37 crc kubenswrapper[4755]: I1210 15:36:37.312459 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-8mfv6" event={"ID":"a1d36f33-a6fd-4c6c-9739-fb7bfc94ca98","Type":"ContainerStarted","Data":"8716e0d19d2516b82ae590791dccc497199516d13ce9d9d39ba40cba7ec07f8c"} Dec 10 15:36:37 crc kubenswrapper[4755]: I1210 15:36:37.316194 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-b47d6c489-jr8kz" event={"ID":"7a2ce695-3b6e-431a-9222-b621f0f6412b","Type":"ContainerStarted","Data":"aa43f8a4fa28aa0936e300aefb9aac574c0c3708a218060901a31858e8d55d95"} Dec 10 15:36:37 crc kubenswrapper[4755]: I1210 15:36:37.336054 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-b47d6c489-jr8kz" podStartSLOduration=2.33603321 podStartE2EDuration="2.33603321s" podCreationTimestamp="2025-12-10 15:36:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:36:37.330093816 +0000 UTC m=+793.930977468" watchObservedRunningTime="2025-12-10 15:36:37.33603321 +0000 UTC m=+793.936916852" Dec 10 15:36:38 crc kubenswrapper[4755]: I1210 15:36:38.336774 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qsprs"] Dec 10 15:36:38 crc kubenswrapper[4755]: I1210 15:36:38.337133 4755 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qsprs" podUID="c21b61a9-4d26-4470-bd78-e96601706cf2" containerName="registry-server" containerID="cri-o://52c79af40edbc1ba0e2de040cbfda3e6756364cbc0beac3d67a9c578b96eb08c" gracePeriod=2 Dec 10 15:36:38 crc kubenswrapper[4755]: I1210 15:36:38.686853 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qsprs" Dec 10 15:36:38 crc kubenswrapper[4755]: I1210 15:36:38.710099 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c21b61a9-4d26-4470-bd78-e96601706cf2-catalog-content\") pod \"c21b61a9-4d26-4470-bd78-e96601706cf2\" (UID: \"c21b61a9-4d26-4470-bd78-e96601706cf2\") " Dec 10 15:36:38 crc kubenswrapper[4755]: I1210 15:36:38.710163 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c21b61a9-4d26-4470-bd78-e96601706cf2-utilities\") pod \"c21b61a9-4d26-4470-bd78-e96601706cf2\" (UID: \"c21b61a9-4d26-4470-bd78-e96601706cf2\") " Dec 10 15:36:38 crc kubenswrapper[4755]: I1210 15:36:38.710204 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56t8d\" (UniqueName: \"kubernetes.io/projected/c21b61a9-4d26-4470-bd78-e96601706cf2-kube-api-access-56t8d\") pod \"c21b61a9-4d26-4470-bd78-e96601706cf2\" (UID: \"c21b61a9-4d26-4470-bd78-e96601706cf2\") " Dec 10 15:36:38 crc kubenswrapper[4755]: I1210 15:36:38.711040 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c21b61a9-4d26-4470-bd78-e96601706cf2-utilities" (OuterVolumeSpecName: "utilities") pod "c21b61a9-4d26-4470-bd78-e96601706cf2" (UID: "c21b61a9-4d26-4470-bd78-e96601706cf2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:36:38 crc kubenswrapper[4755]: I1210 15:36:38.714835 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c21b61a9-4d26-4470-bd78-e96601706cf2-kube-api-access-56t8d" (OuterVolumeSpecName: "kube-api-access-56t8d") pod "c21b61a9-4d26-4470-bd78-e96601706cf2" (UID: "c21b61a9-4d26-4470-bd78-e96601706cf2"). InnerVolumeSpecName "kube-api-access-56t8d". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:36:38 crc kubenswrapper[4755]: I1210 15:36:38.811387 4755 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c21b61a9-4d26-4470-bd78-e96601706cf2-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 15:36:38 crc kubenswrapper[4755]: I1210 15:36:38.811425 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56t8d\" (UniqueName: \"kubernetes.io/projected/c21b61a9-4d26-4470-bd78-e96601706cf2-kube-api-access-56t8d\") on node \"crc\" DevicePath \"\"" Dec 10 15:36:38 crc kubenswrapper[4755]: I1210 15:36:38.827606 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c21b61a9-4d26-4470-bd78-e96601706cf2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c21b61a9-4d26-4470-bd78-e96601706cf2" (UID: "c21b61a9-4d26-4470-bd78-e96601706cf2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:36:38 crc kubenswrapper[4755]: I1210 15:36:38.913262 4755 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c21b61a9-4d26-4470-bd78-e96601706cf2-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 15:36:39 crc kubenswrapper[4755]: I1210 15:36:39.328971 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-8mfv6" event={"ID":"a1d36f33-a6fd-4c6c-9739-fb7bfc94ca98","Type":"ContainerStarted","Data":"0dea2e01ff73948cd44265b9addcb12298f07513b8dfd9d886ec3a70913b31e9"} Dec 10 15:36:39 crc kubenswrapper[4755]: I1210 15:36:39.329257 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-8mfv6" Dec 10 15:36:39 crc kubenswrapper[4755]: I1210 15:36:39.330836 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-2lfwl" event={"ID":"03a03d51-0d67-4cf5-b102-74f7d787298e","Type":"ContainerStarted","Data":"89ceafe45cd806bc584bec494d2dc5b5363fa1647859813e34a06a4f63652a70"} Dec 10 15:36:39 crc kubenswrapper[4755]: I1210 15:36:39.333577 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-kb7wz" event={"ID":"75ec7fc8-4770-41e7-9d6a-d9a43d832125","Type":"ContainerStarted","Data":"a610445516dbe92b940c3557a369e457c267d29d16bcfab42a4a8522ee032731"} Dec 10 15:36:39 crc kubenswrapper[4755]: I1210 15:36:39.336429 4755 generic.go:334] "Generic (PLEG): container finished" podID="c21b61a9-4d26-4470-bd78-e96601706cf2" containerID="52c79af40edbc1ba0e2de040cbfda3e6756364cbc0beac3d67a9c578b96eb08c" exitCode=0 Dec 10 15:36:39 crc kubenswrapper[4755]: I1210 15:36:39.336499 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qsprs" Dec 10 15:36:39 crc kubenswrapper[4755]: I1210 15:36:39.336519 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qsprs" event={"ID":"c21b61a9-4d26-4470-bd78-e96601706cf2","Type":"ContainerDied","Data":"52c79af40edbc1ba0e2de040cbfda3e6756364cbc0beac3d67a9c578b96eb08c"} Dec 10 15:36:39 crc kubenswrapper[4755]: I1210 15:36:39.336556 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qsprs" event={"ID":"c21b61a9-4d26-4470-bd78-e96601706cf2","Type":"ContainerDied","Data":"866e23ae7f024b49b3ab303e280c1668a6383eadb590eccc3709c1216264a293"} Dec 10 15:36:39 crc kubenswrapper[4755]: I1210 15:36:39.336575 4755 scope.go:117] "RemoveContainer" containerID="52c79af40edbc1ba0e2de040cbfda3e6756364cbc0beac3d67a9c578b96eb08c" Dec 10 15:36:39 crc kubenswrapper[4755]: I1210 15:36:39.339614 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-jjt7x" event={"ID":"31a7ad79-a502-42d0-ab81-6e22092f7c9e","Type":"ContainerStarted","Data":"f1a3fc4272eb8bd12b6008af779189852442937a3d7533f6a618cf790e00c918"} Dec 10 15:36:39 crc kubenswrapper[4755]: I1210 15:36:39.341379 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-jjt7x" Dec 10 15:36:39 crc kubenswrapper[4755]: I1210 15:36:39.356593 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-8mfv6" podStartSLOduration=2.376869892 podStartE2EDuration="4.356558986s" podCreationTimestamp="2025-12-10 15:36:35 +0000 UTC" firstStartedPulling="2025-12-10 15:36:36.449854419 +0000 UTC m=+793.050738061" lastFinishedPulling="2025-12-10 15:36:38.429543523 +0000 UTC m=+795.030427155" observedRunningTime="2025-12-10 15:36:39.352403272 +0000 UTC m=+795.953286904" watchObservedRunningTime="2025-12-10 15:36:39.356558986 +0000 UTC m=+795.957442648" Dec 10 15:36:39 crc kubenswrapper[4755]: I1210 15:36:39.360340 4755 scope.go:117] "RemoveContainer" containerID="8b0716be4b1770eabdf360350d29772a132e55be0fcafc76ab8182dd9ee03ff2" Dec 10 15:36:39 crc kubenswrapper[4755]: I1210 15:36:39.381680 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-jjt7x" podStartSLOduration=1.686352076 podStartE2EDuration="4.381656717s" podCreationTimestamp="2025-12-10 15:36:35 +0000 UTC" firstStartedPulling="2025-12-10 15:36:35.729820501 +0000 UTC m=+792.330704133" lastFinishedPulling="2025-12-10 15:36:38.425125142 +0000 UTC m=+795.026008774" observedRunningTime="2025-12-10 15:36:39.378015587 +0000 UTC m=+795.978899239" watchObservedRunningTime="2025-12-10 15:36:39.381656717 +0000 UTC m=+795.982540359" Dec 10 15:36:39 crc kubenswrapper[4755]: I1210 15:36:39.384241 4755 scope.go:117] "RemoveContainer" containerID="ae89ef090b857a0c5fe137736b3f9ec1e227777e3ab1bec9c62278a56c8bb3b9" Dec 10 15:36:39 crc kubenswrapper[4755]: I1210 15:36:39.396600 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-kb7wz" podStartSLOduration=1.9294737849999999 podStartE2EDuration="4.396575477s" podCreationTimestamp="2025-12-10 15:36:35 +0000 UTC" firstStartedPulling="2025-12-10 15:36:35.958809251 +0000 UTC m=+792.559692883" lastFinishedPulling="2025-12-10 15:36:38.425910943 +0000 UTC m=+795.026794575" observedRunningTime="2025-12-10 15:36:39.392168206 +0000 UTC 
m=+795.993051878" watchObservedRunningTime="2025-12-10 15:36:39.396575477 +0000 UTC m=+795.997459149" Dec 10 15:36:39 crc kubenswrapper[4755]: I1210 15:36:39.415198 4755 scope.go:117] "RemoveContainer" containerID="52c79af40edbc1ba0e2de040cbfda3e6756364cbc0beac3d67a9c578b96eb08c" Dec 10 15:36:39 crc kubenswrapper[4755]: E1210 15:36:39.415737 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52c79af40edbc1ba0e2de040cbfda3e6756364cbc0beac3d67a9c578b96eb08c\": container with ID starting with 52c79af40edbc1ba0e2de040cbfda3e6756364cbc0beac3d67a9c578b96eb08c not found: ID does not exist" containerID="52c79af40edbc1ba0e2de040cbfda3e6756364cbc0beac3d67a9c578b96eb08c" Dec 10 15:36:39 crc kubenswrapper[4755]: I1210 15:36:39.415791 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52c79af40edbc1ba0e2de040cbfda3e6756364cbc0beac3d67a9c578b96eb08c"} err="failed to get container status \"52c79af40edbc1ba0e2de040cbfda3e6756364cbc0beac3d67a9c578b96eb08c\": rpc error: code = NotFound desc = could not find container \"52c79af40edbc1ba0e2de040cbfda3e6756364cbc0beac3d67a9c578b96eb08c\": container with ID starting with 52c79af40edbc1ba0e2de040cbfda3e6756364cbc0beac3d67a9c578b96eb08c not found: ID does not exist" Dec 10 15:36:39 crc kubenswrapper[4755]: I1210 15:36:39.415825 4755 scope.go:117] "RemoveContainer" containerID="8b0716be4b1770eabdf360350d29772a132e55be0fcafc76ab8182dd9ee03ff2" Dec 10 15:36:39 crc kubenswrapper[4755]: E1210 15:36:39.416119 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b0716be4b1770eabdf360350d29772a132e55be0fcafc76ab8182dd9ee03ff2\": container with ID starting with 8b0716be4b1770eabdf360350d29772a132e55be0fcafc76ab8182dd9ee03ff2 not found: ID does not exist" containerID="8b0716be4b1770eabdf360350d29772a132e55be0fcafc76ab8182dd9ee03ff2" Dec 10 15:36:39 crc kubenswrapper[4755]: I1210 15:36:39.416156 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b0716be4b1770eabdf360350d29772a132e55be0fcafc76ab8182dd9ee03ff2"} err="failed to get container status \"8b0716be4b1770eabdf360350d29772a132e55be0fcafc76ab8182dd9ee03ff2\": rpc error: code = NotFound desc = could not find container \"8b0716be4b1770eabdf360350d29772a132e55be0fcafc76ab8182dd9ee03ff2\": container with ID starting with 8b0716be4b1770eabdf360350d29772a132e55be0fcafc76ab8182dd9ee03ff2 not found: ID does not exist" Dec 10 15:36:39 crc kubenswrapper[4755]: I1210 15:36:39.416175 4755 scope.go:117] "RemoveContainer" containerID="ae89ef090b857a0c5fe137736b3f9ec1e227777e3ab1bec9c62278a56c8bb3b9" Dec 10 15:36:39 crc kubenswrapper[4755]: E1210 15:36:39.416419 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae89ef090b857a0c5fe137736b3f9ec1e227777e3ab1bec9c62278a56c8bb3b9\": container with ID starting with ae89ef090b857a0c5fe137736b3f9ec1e227777e3ab1bec9c62278a56c8bb3b9 not found: ID does not exist" containerID="ae89ef090b857a0c5fe137736b3f9ec1e227777e3ab1bec9c62278a56c8bb3b9" Dec 10 15:36:39 crc kubenswrapper[4755]: I1210 15:36:39.416452 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae89ef090b857a0c5fe137736b3f9ec1e227777e3ab1bec9c62278a56c8bb3b9"} err="failed to get container status 
\"ae89ef090b857a0c5fe137736b3f9ec1e227777e3ab1bec9c62278a56c8bb3b9\": rpc error: code = NotFound desc = could not find container \"ae89ef090b857a0c5fe137736b3f9ec1e227777e3ab1bec9c62278a56c8bb3b9\": container with ID starting with ae89ef090b857a0c5fe137736b3f9ec1e227777e3ab1bec9c62278a56c8bb3b9 not found: ID does not exist" Dec 10 15:36:39 crc kubenswrapper[4755]: I1210 15:36:39.422532 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qsprs"] Dec 10 15:36:39 crc kubenswrapper[4755]: I1210 15:36:39.428030 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qsprs"] Dec 10 15:36:39 crc kubenswrapper[4755]: I1210 15:36:39.768395 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c21b61a9-4d26-4470-bd78-e96601706cf2" path="/var/lib/kubelet/pods/c21b61a9-4d26-4470-bd78-e96601706cf2/volumes" Dec 10 15:36:40 crc kubenswrapper[4755]: I1210 15:36:40.359012 4755 patch_prober.go:28] interesting pod/machine-config-daemon-ggt8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:36:40 crc kubenswrapper[4755]: I1210 15:36:40.359075 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:36:40 crc kubenswrapper[4755]: I1210 15:36:40.359110 4755 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" Dec 10 15:36:40 crc kubenswrapper[4755]: I1210 15:36:40.359480 4755 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a3bec46d814cc9fbc9935f1242adb126dce3912edb10a563b43df294190d9363"} pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 10 15:36:40 crc kubenswrapper[4755]: I1210 15:36:40.359528 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" containerName="machine-config-daemon" containerID="cri-o://a3bec46d814cc9fbc9935f1242adb126dce3912edb10a563b43df294190d9363" gracePeriod=600 Dec 10 15:36:41 crc kubenswrapper[4755]: I1210 15:36:41.357701 4755 generic.go:334] "Generic (PLEG): container finished" podID="b132a8b9-1c99-414d-8773-229bf36b305d" containerID="a3bec46d814cc9fbc9935f1242adb126dce3912edb10a563b43df294190d9363" exitCode=0 Dec 10 15:36:41 crc kubenswrapper[4755]: I1210 15:36:41.357938 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" event={"ID":"b132a8b9-1c99-414d-8773-229bf36b305d","Type":"ContainerDied","Data":"a3bec46d814cc9fbc9935f1242adb126dce3912edb10a563b43df294190d9363"} Dec 10 15:36:41 crc kubenswrapper[4755]: I1210 15:36:41.358497 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" 
event={"ID":"b132a8b9-1c99-414d-8773-229bf36b305d","Type":"ContainerStarted","Data":"4c970abaaa70f01d1899eae5e78bc6f2bf1fb1ebdd24f00f3de5524057d3b3cd"} Dec 10 15:36:41 crc kubenswrapper[4755]: I1210 15:36:41.358528 4755 scope.go:117] "RemoveContainer" containerID="e0512fff55aaaeeb22a338a748ccafc0fe3e36f21ae6e952762dc39e4ce559fe" Dec 10 15:36:41 crc kubenswrapper[4755]: I1210 15:36:41.361027 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-2lfwl" event={"ID":"03a03d51-0d67-4cf5-b102-74f7d787298e","Type":"ContainerStarted","Data":"23a359a0e6d2cb1cb492759e055c84dd0033c3f2d1531a4729c8eb374e450318"} Dec 10 15:36:41 crc kubenswrapper[4755]: I1210 15:36:41.404374 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-2lfwl" podStartSLOduration=1.507799963 podStartE2EDuration="6.404351852s" podCreationTimestamp="2025-12-10 15:36:35 +0000 UTC" firstStartedPulling="2025-12-10 15:36:35.890890362 +0000 UTC m=+792.491773984" lastFinishedPulling="2025-12-10 15:36:40.787442241 +0000 UTC m=+797.388325873" observedRunningTime="2025-12-10 15:36:41.398252005 +0000 UTC m=+797.999135657" watchObservedRunningTime="2025-12-10 15:36:41.404351852 +0000 UTC m=+798.005235494" Dec 10 15:36:45 crc kubenswrapper[4755]: I1210 15:36:45.735547 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-jjt7x" Dec 10 15:36:45 crc kubenswrapper[4755]: I1210 15:36:45.952181 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-b47d6c489-jr8kz" Dec 10 15:36:45 crc kubenswrapper[4755]: I1210 15:36:45.952500 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-b47d6c489-jr8kz" Dec 10 15:36:45 crc kubenswrapper[4755]: I1210 15:36:45.960568 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-b47d6c489-jr8kz" Dec 10 15:36:46 crc kubenswrapper[4755]: I1210 15:36:46.410238 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-b47d6c489-jr8kz" Dec 10 15:36:46 crc kubenswrapper[4755]: I1210 15:36:46.484875 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-n6qb5"] Dec 10 15:36:56 crc kubenswrapper[4755]: I1210 15:36:56.259019 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-8mfv6" Dec 10 15:37:11 crc kubenswrapper[4755]: I1210 15:37:11.467696 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83q5l8d"] Dec 10 15:37:11 crc kubenswrapper[4755]: E1210 15:37:11.468447 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c21b61a9-4d26-4470-bd78-e96601706cf2" containerName="extract-content" Dec 10 15:37:11 crc kubenswrapper[4755]: I1210 15:37:11.468486 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="c21b61a9-4d26-4470-bd78-e96601706cf2" containerName="extract-content" Dec 10 15:37:11 crc kubenswrapper[4755]: E1210 15:37:11.468497 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c21b61a9-4d26-4470-bd78-e96601706cf2" containerName="extract-utilities" Dec 10 15:37:11 crc kubenswrapper[4755]: I1210 15:37:11.468503 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="c21b61a9-4d26-4470-bd78-e96601706cf2" 
containerName="extract-utilities" Dec 10 15:37:11 crc kubenswrapper[4755]: E1210 15:37:11.468525 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c21b61a9-4d26-4470-bd78-e96601706cf2" containerName="registry-server" Dec 10 15:37:11 crc kubenswrapper[4755]: I1210 15:37:11.468531 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="c21b61a9-4d26-4470-bd78-e96601706cf2" containerName="registry-server" Dec 10 15:37:11 crc kubenswrapper[4755]: I1210 15:37:11.468648 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="c21b61a9-4d26-4470-bd78-e96601706cf2" containerName="registry-server" Dec 10 15:37:11 crc kubenswrapper[4755]: I1210 15:37:11.469384 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83q5l8d" Dec 10 15:37:11 crc kubenswrapper[4755]: I1210 15:37:11.479374 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83q5l8d"] Dec 10 15:37:11 crc kubenswrapper[4755]: I1210 15:37:11.482363 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 10 15:37:11 crc kubenswrapper[4755]: I1210 15:37:11.551191 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-n6qb5" podUID="24e3bc3c-7e93-4c91-b0a2-85877004fafc" containerName="console" containerID="cri-o://779c415b86b45e88f4ad32b99d19ddf0ebcab03f99eeaf29c20ff0b22c36e94a" gracePeriod=15 Dec 10 15:37:11 crc kubenswrapper[4755]: I1210 15:37:11.561021 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e96c66c9-c2da-4ab5-95eb-3f8bbc6e8917-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83q5l8d\" (UID: \"e96c66c9-c2da-4ab5-95eb-3f8bbc6e8917\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83q5l8d" Dec 10 15:37:11 crc kubenswrapper[4755]: I1210 15:37:11.561102 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvp25\" (UniqueName: \"kubernetes.io/projected/e96c66c9-c2da-4ab5-95eb-3f8bbc6e8917-kube-api-access-gvp25\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83q5l8d\" (UID: \"e96c66c9-c2da-4ab5-95eb-3f8bbc6e8917\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83q5l8d" Dec 10 15:37:11 crc kubenswrapper[4755]: I1210 15:37:11.561133 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e96c66c9-c2da-4ab5-95eb-3f8bbc6e8917-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83q5l8d\" (UID: \"e96c66c9-c2da-4ab5-95eb-3f8bbc6e8917\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83q5l8d" Dec 10 15:37:11 crc kubenswrapper[4755]: I1210 15:37:11.662808 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e96c66c9-c2da-4ab5-95eb-3f8bbc6e8917-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83q5l8d\" (UID: \"e96c66c9-c2da-4ab5-95eb-3f8bbc6e8917\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83q5l8d" Dec 10 15:37:11 crc 
kubenswrapper[4755]: I1210 15:37:11.663521 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e96c66c9-c2da-4ab5-95eb-3f8bbc6e8917-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83q5l8d\" (UID: \"e96c66c9-c2da-4ab5-95eb-3f8bbc6e8917\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83q5l8d" Dec 10 15:37:11 crc kubenswrapper[4755]: I1210 15:37:11.663422 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e96c66c9-c2da-4ab5-95eb-3f8bbc6e8917-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83q5l8d\" (UID: \"e96c66c9-c2da-4ab5-95eb-3f8bbc6e8917\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83q5l8d" Dec 10 15:37:11 crc kubenswrapper[4755]: I1210 15:37:11.663613 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvp25\" (UniqueName: \"kubernetes.io/projected/e96c66c9-c2da-4ab5-95eb-3f8bbc6e8917-kube-api-access-gvp25\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83q5l8d\" (UID: \"e96c66c9-c2da-4ab5-95eb-3f8bbc6e8917\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83q5l8d" Dec 10 15:37:11 crc kubenswrapper[4755]: I1210 15:37:11.664011 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e96c66c9-c2da-4ab5-95eb-3f8bbc6e8917-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83q5l8d\" (UID: \"e96c66c9-c2da-4ab5-95eb-3f8bbc6e8917\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83q5l8d" Dec 10 15:37:11 crc kubenswrapper[4755]: I1210 15:37:11.688762 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvp25\" (UniqueName: \"kubernetes.io/projected/e96c66c9-c2da-4ab5-95eb-3f8bbc6e8917-kube-api-access-gvp25\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83q5l8d\" (UID: \"e96c66c9-c2da-4ab5-95eb-3f8bbc6e8917\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83q5l8d" Dec 10 15:37:11 crc kubenswrapper[4755]: I1210 15:37:11.794572 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83q5l8d" Dec 10 15:37:11 crc kubenswrapper[4755]: I1210 15:37:11.938017 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-n6qb5_24e3bc3c-7e93-4c91-b0a2-85877004fafc/console/0.log" Dec 10 15:37:11 crc kubenswrapper[4755]: I1210 15:37:11.938312 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-n6qb5" Dec 10 15:37:11 crc kubenswrapper[4755]: I1210 15:37:11.970356 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/24e3bc3c-7e93-4c91-b0a2-85877004fafc-console-config\") pod \"24e3bc3c-7e93-4c91-b0a2-85877004fafc\" (UID: \"24e3bc3c-7e93-4c91-b0a2-85877004fafc\") " Dec 10 15:37:11 crc kubenswrapper[4755]: I1210 15:37:11.970414 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/24e3bc3c-7e93-4c91-b0a2-85877004fafc-service-ca\") pod \"24e3bc3c-7e93-4c91-b0a2-85877004fafc\" (UID: \"24e3bc3c-7e93-4c91-b0a2-85877004fafc\") " Dec 10 15:37:11 crc kubenswrapper[4755]: I1210 15:37:11.970486 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9xch\" (UniqueName: \"kubernetes.io/projected/24e3bc3c-7e93-4c91-b0a2-85877004fafc-kube-api-access-l9xch\") pod \"24e3bc3c-7e93-4c91-b0a2-85877004fafc\" (UID: \"24e3bc3c-7e93-4c91-b0a2-85877004fafc\") " Dec 10 15:37:11 crc kubenswrapper[4755]: I1210 15:37:11.970566 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/24e3bc3c-7e93-4c91-b0a2-85877004fafc-console-serving-cert\") pod \"24e3bc3c-7e93-4c91-b0a2-85877004fafc\" (UID: \"24e3bc3c-7e93-4c91-b0a2-85877004fafc\") " Dec 10 15:37:11 crc kubenswrapper[4755]: I1210 15:37:11.970596 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/24e3bc3c-7e93-4c91-b0a2-85877004fafc-console-oauth-config\") pod \"24e3bc3c-7e93-4c91-b0a2-85877004fafc\" (UID: \"24e3bc3c-7e93-4c91-b0a2-85877004fafc\") " Dec 10 15:37:11 crc kubenswrapper[4755]: I1210 15:37:11.970626 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24e3bc3c-7e93-4c91-b0a2-85877004fafc-trusted-ca-bundle\") pod \"24e3bc3c-7e93-4c91-b0a2-85877004fafc\" (UID: \"24e3bc3c-7e93-4c91-b0a2-85877004fafc\") " Dec 10 15:37:11 crc kubenswrapper[4755]: I1210 15:37:11.970648 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/24e3bc3c-7e93-4c91-b0a2-85877004fafc-oauth-serving-cert\") pod \"24e3bc3c-7e93-4c91-b0a2-85877004fafc\" (UID: \"24e3bc3c-7e93-4c91-b0a2-85877004fafc\") " Dec 10 15:37:11 crc kubenswrapper[4755]: I1210 15:37:11.971447 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24e3bc3c-7e93-4c91-b0a2-85877004fafc-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "24e3bc3c-7e93-4c91-b0a2-85877004fafc" (UID: "24e3bc3c-7e93-4c91-b0a2-85877004fafc"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:37:11 crc kubenswrapper[4755]: I1210 15:37:11.971543 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24e3bc3c-7e93-4c91-b0a2-85877004fafc-console-config" (OuterVolumeSpecName: "console-config") pod "24e3bc3c-7e93-4c91-b0a2-85877004fafc" (UID: "24e3bc3c-7e93-4c91-b0a2-85877004fafc"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:37:11 crc kubenswrapper[4755]: I1210 15:37:11.971617 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24e3bc3c-7e93-4c91-b0a2-85877004fafc-service-ca" (OuterVolumeSpecName: "service-ca") pod "24e3bc3c-7e93-4c91-b0a2-85877004fafc" (UID: "24e3bc3c-7e93-4c91-b0a2-85877004fafc"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:37:11 crc kubenswrapper[4755]: I1210 15:37:11.971619 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24e3bc3c-7e93-4c91-b0a2-85877004fafc-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "24e3bc3c-7e93-4c91-b0a2-85877004fafc" (UID: "24e3bc3c-7e93-4c91-b0a2-85877004fafc"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:37:11 crc kubenswrapper[4755]: I1210 15:37:11.976842 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24e3bc3c-7e93-4c91-b0a2-85877004fafc-kube-api-access-l9xch" (OuterVolumeSpecName: "kube-api-access-l9xch") pod "24e3bc3c-7e93-4c91-b0a2-85877004fafc" (UID: "24e3bc3c-7e93-4c91-b0a2-85877004fafc"). InnerVolumeSpecName "kube-api-access-l9xch". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:37:11 crc kubenswrapper[4755]: I1210 15:37:11.977040 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24e3bc3c-7e93-4c91-b0a2-85877004fafc-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "24e3bc3c-7e93-4c91-b0a2-85877004fafc" (UID: "24e3bc3c-7e93-4c91-b0a2-85877004fafc"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:37:11 crc kubenswrapper[4755]: I1210 15:37:11.979246 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24e3bc3c-7e93-4c91-b0a2-85877004fafc-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "24e3bc3c-7e93-4c91-b0a2-85877004fafc" (UID: "24e3bc3c-7e93-4c91-b0a2-85877004fafc"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:37:12 crc kubenswrapper[4755]: I1210 15:37:12.072020 4755 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/24e3bc3c-7e93-4c91-b0a2-85877004fafc-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:37:12 crc kubenswrapper[4755]: I1210 15:37:12.072063 4755 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24e3bc3c-7e93-4c91-b0a2-85877004fafc-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:37:12 crc kubenswrapper[4755]: I1210 15:37:12.072077 4755 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/24e3bc3c-7e93-4c91-b0a2-85877004fafc-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:37:12 crc kubenswrapper[4755]: I1210 15:37:12.072088 4755 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/24e3bc3c-7e93-4c91-b0a2-85877004fafc-console-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:37:12 crc kubenswrapper[4755]: I1210 15:37:12.072099 4755 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/24e3bc3c-7e93-4c91-b0a2-85877004fafc-service-ca\") on node \"crc\" DevicePath \"\"" Dec 10 15:37:12 crc kubenswrapper[4755]: I1210 15:37:12.072109 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9xch\" (UniqueName: \"kubernetes.io/projected/24e3bc3c-7e93-4c91-b0a2-85877004fafc-kube-api-access-l9xch\") on node \"crc\" DevicePath \"\"" Dec 10 15:37:12 crc kubenswrapper[4755]: I1210 15:37:12.072120 4755 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/24e3bc3c-7e93-4c91-b0a2-85877004fafc-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:37:12 crc kubenswrapper[4755]: I1210 15:37:12.279781 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83q5l8d"] Dec 10 15:37:12 crc kubenswrapper[4755]: I1210 15:37:12.585955 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-n6qb5_24e3bc3c-7e93-4c91-b0a2-85877004fafc/console/0.log" Dec 10 15:37:12 crc kubenswrapper[4755]: I1210 15:37:12.586014 4755 generic.go:334] "Generic (PLEG): container finished" podID="24e3bc3c-7e93-4c91-b0a2-85877004fafc" containerID="779c415b86b45e88f4ad32b99d19ddf0ebcab03f99eeaf29c20ff0b22c36e94a" exitCode=2 Dec 10 15:37:12 crc kubenswrapper[4755]: I1210 15:37:12.586114 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-n6qb5" event={"ID":"24e3bc3c-7e93-4c91-b0a2-85877004fafc","Type":"ContainerDied","Data":"779c415b86b45e88f4ad32b99d19ddf0ebcab03f99eeaf29c20ff0b22c36e94a"} Dec 10 15:37:12 crc kubenswrapper[4755]: I1210 15:37:12.586161 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-n6qb5" event={"ID":"24e3bc3c-7e93-4c91-b0a2-85877004fafc","Type":"ContainerDied","Data":"fa2868a1c2578591f381f7fed742fbb61db0c2e5b7be3cee10d36161a9a77338"} Dec 10 15:37:12 crc kubenswrapper[4755]: I1210 15:37:12.586186 4755 scope.go:117] "RemoveContainer" containerID="779c415b86b45e88f4ad32b99d19ddf0ebcab03f99eeaf29c20ff0b22c36e94a" Dec 10 15:37:12 crc kubenswrapper[4755]: I1210 15:37:12.586126 4755 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-n6qb5" Dec 10 15:37:12 crc kubenswrapper[4755]: I1210 15:37:12.592663 4755 generic.go:334] "Generic (PLEG): container finished" podID="e96c66c9-c2da-4ab5-95eb-3f8bbc6e8917" containerID="5979f9ef1b9278ef757e82c019c92aa4253eab8fbffe715be4d191b56fa509bd" exitCode=0 Dec 10 15:37:12 crc kubenswrapper[4755]: I1210 15:37:12.592702 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83q5l8d" event={"ID":"e96c66c9-c2da-4ab5-95eb-3f8bbc6e8917","Type":"ContainerDied","Data":"5979f9ef1b9278ef757e82c019c92aa4253eab8fbffe715be4d191b56fa509bd"} Dec 10 15:37:12 crc kubenswrapper[4755]: I1210 15:37:12.592726 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83q5l8d" event={"ID":"e96c66c9-c2da-4ab5-95eb-3f8bbc6e8917","Type":"ContainerStarted","Data":"2f4083cb05b1fde80c608df71a4837224bea06214c4019e0493b62f7181a99a8"} Dec 10 15:37:12 crc kubenswrapper[4755]: I1210 15:37:12.606032 4755 scope.go:117] "RemoveContainer" containerID="779c415b86b45e88f4ad32b99d19ddf0ebcab03f99eeaf29c20ff0b22c36e94a" Dec 10 15:37:12 crc kubenswrapper[4755]: E1210 15:37:12.606418 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"779c415b86b45e88f4ad32b99d19ddf0ebcab03f99eeaf29c20ff0b22c36e94a\": container with ID starting with 779c415b86b45e88f4ad32b99d19ddf0ebcab03f99eeaf29c20ff0b22c36e94a not found: ID does not exist" containerID="779c415b86b45e88f4ad32b99d19ddf0ebcab03f99eeaf29c20ff0b22c36e94a" Dec 10 15:37:12 crc kubenswrapper[4755]: I1210 15:37:12.606456 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"779c415b86b45e88f4ad32b99d19ddf0ebcab03f99eeaf29c20ff0b22c36e94a"} err="failed to get container status \"779c415b86b45e88f4ad32b99d19ddf0ebcab03f99eeaf29c20ff0b22c36e94a\": rpc error: code = NotFound desc = could not find container \"779c415b86b45e88f4ad32b99d19ddf0ebcab03f99eeaf29c20ff0b22c36e94a\": container with ID starting with 779c415b86b45e88f4ad32b99d19ddf0ebcab03f99eeaf29c20ff0b22c36e94a not found: ID does not exist" Dec 10 15:37:12 crc kubenswrapper[4755]: I1210 15:37:12.641895 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-n6qb5"] Dec 10 15:37:12 crc kubenswrapper[4755]: I1210 15:37:12.646943 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-n6qb5"] Dec 10 15:37:13 crc kubenswrapper[4755]: I1210 15:37:13.766824 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24e3bc3c-7e93-4c91-b0a2-85877004fafc" path="/var/lib/kubelet/pods/24e3bc3c-7e93-4c91-b0a2-85877004fafc/volumes" Dec 10 15:37:14 crc kubenswrapper[4755]: E1210 15:37:14.335746 4755 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode96c66c9_c2da_4ab5_95eb_3f8bbc6e8917.slice/crio-conmon-ffcba59d1d9c0486489d9b675a1c97c75d59dec67f183a89fa7467f6adb1e332.scope\": RecentStats: unable to find data in memory cache]" Dec 10 15:37:14 crc kubenswrapper[4755]: I1210 15:37:14.611440 4755 generic.go:334] "Generic (PLEG): container finished" podID="e96c66c9-c2da-4ab5-95eb-3f8bbc6e8917" 
containerID="ffcba59d1d9c0486489d9b675a1c97c75d59dec67f183a89fa7467f6adb1e332" exitCode=0 Dec 10 15:37:14 crc kubenswrapper[4755]: I1210 15:37:14.611517 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83q5l8d" event={"ID":"e96c66c9-c2da-4ab5-95eb-3f8bbc6e8917","Type":"ContainerDied","Data":"ffcba59d1d9c0486489d9b675a1c97c75d59dec67f183a89fa7467f6adb1e332"} Dec 10 15:37:15 crc kubenswrapper[4755]: I1210 15:37:15.619711 4755 generic.go:334] "Generic (PLEG): container finished" podID="e96c66c9-c2da-4ab5-95eb-3f8bbc6e8917" containerID="f05cce2cd9f2c34f870a3b83bd8774fb3a3767728aa33aa96bca4138bdec2c47" exitCode=0 Dec 10 15:37:15 crc kubenswrapper[4755]: I1210 15:37:15.619752 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83q5l8d" event={"ID":"e96c66c9-c2da-4ab5-95eb-3f8bbc6e8917","Type":"ContainerDied","Data":"f05cce2cd9f2c34f870a3b83bd8774fb3a3767728aa33aa96bca4138bdec2c47"} Dec 10 15:37:16 crc kubenswrapper[4755]: I1210 15:37:16.906583 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83q5l8d" Dec 10 15:37:16 crc kubenswrapper[4755]: I1210 15:37:16.934593 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e96c66c9-c2da-4ab5-95eb-3f8bbc6e8917-bundle\") pod \"e96c66c9-c2da-4ab5-95eb-3f8bbc6e8917\" (UID: \"e96c66c9-c2da-4ab5-95eb-3f8bbc6e8917\") " Dec 10 15:37:16 crc kubenswrapper[4755]: I1210 15:37:16.934653 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e96c66c9-c2da-4ab5-95eb-3f8bbc6e8917-util\") pod \"e96c66c9-c2da-4ab5-95eb-3f8bbc6e8917\" (UID: \"e96c66c9-c2da-4ab5-95eb-3f8bbc6e8917\") " Dec 10 15:37:16 crc kubenswrapper[4755]: I1210 15:37:16.934712 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvp25\" (UniqueName: \"kubernetes.io/projected/e96c66c9-c2da-4ab5-95eb-3f8bbc6e8917-kube-api-access-gvp25\") pod \"e96c66c9-c2da-4ab5-95eb-3f8bbc6e8917\" (UID: \"e96c66c9-c2da-4ab5-95eb-3f8bbc6e8917\") " Dec 10 15:37:16 crc kubenswrapper[4755]: I1210 15:37:16.948078 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e96c66c9-c2da-4ab5-95eb-3f8bbc6e8917-util" (OuterVolumeSpecName: "util") pod "e96c66c9-c2da-4ab5-95eb-3f8bbc6e8917" (UID: "e96c66c9-c2da-4ab5-95eb-3f8bbc6e8917"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:37:16 crc kubenswrapper[4755]: I1210 15:37:16.948705 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e96c66c9-c2da-4ab5-95eb-3f8bbc6e8917-bundle" (OuterVolumeSpecName: "bundle") pod "e96c66c9-c2da-4ab5-95eb-3f8bbc6e8917" (UID: "e96c66c9-c2da-4ab5-95eb-3f8bbc6e8917"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:37:16 crc kubenswrapper[4755]: I1210 15:37:16.949297 4755 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e96c66c9-c2da-4ab5-95eb-3f8bbc6e8917-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:37:16 crc kubenswrapper[4755]: I1210 15:37:16.949312 4755 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e96c66c9-c2da-4ab5-95eb-3f8bbc6e8917-util\") on node \"crc\" DevicePath \"\"" Dec 10 15:37:16 crc kubenswrapper[4755]: I1210 15:37:16.975170 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e96c66c9-c2da-4ab5-95eb-3f8bbc6e8917-kube-api-access-gvp25" (OuterVolumeSpecName: "kube-api-access-gvp25") pod "e96c66c9-c2da-4ab5-95eb-3f8bbc6e8917" (UID: "e96c66c9-c2da-4ab5-95eb-3f8bbc6e8917"). InnerVolumeSpecName "kube-api-access-gvp25". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:37:17 crc kubenswrapper[4755]: I1210 15:37:17.050838 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gvp25\" (UniqueName: \"kubernetes.io/projected/e96c66c9-c2da-4ab5-95eb-3f8bbc6e8917-kube-api-access-gvp25\") on node \"crc\" DevicePath \"\"" Dec 10 15:37:17 crc kubenswrapper[4755]: I1210 15:37:17.634906 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83q5l8d" event={"ID":"e96c66c9-c2da-4ab5-95eb-3f8bbc6e8917","Type":"ContainerDied","Data":"2f4083cb05b1fde80c608df71a4837224bea06214c4019e0493b62f7181a99a8"} Dec 10 15:37:17 crc kubenswrapper[4755]: I1210 15:37:17.635229 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f4083cb05b1fde80c608df71a4837224bea06214c4019e0493b62f7181a99a8" Dec 10 15:37:17 crc kubenswrapper[4755]: I1210 15:37:17.634952 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83q5l8d" Dec 10 15:37:24 crc kubenswrapper[4755]: I1210 15:37:24.885697 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-74cdb46bb8-59fxs"] Dec 10 15:37:24 crc kubenswrapper[4755]: E1210 15:37:24.887433 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24e3bc3c-7e93-4c91-b0a2-85877004fafc" containerName="console" Dec 10 15:37:24 crc kubenswrapper[4755]: I1210 15:37:24.887541 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="24e3bc3c-7e93-4c91-b0a2-85877004fafc" containerName="console" Dec 10 15:37:24 crc kubenswrapper[4755]: E1210 15:37:24.887616 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e96c66c9-c2da-4ab5-95eb-3f8bbc6e8917" containerName="extract" Dec 10 15:37:24 crc kubenswrapper[4755]: I1210 15:37:24.887686 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="e96c66c9-c2da-4ab5-95eb-3f8bbc6e8917" containerName="extract" Dec 10 15:37:24 crc kubenswrapper[4755]: E1210 15:37:24.887798 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e96c66c9-c2da-4ab5-95eb-3f8bbc6e8917" containerName="util" Dec 10 15:37:24 crc kubenswrapper[4755]: I1210 15:37:24.887879 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="e96c66c9-c2da-4ab5-95eb-3f8bbc6e8917" containerName="util" Dec 10 15:37:24 crc kubenswrapper[4755]: E1210 15:37:24.887972 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e96c66c9-c2da-4ab5-95eb-3f8bbc6e8917" containerName="pull" Dec 10 15:37:24 crc kubenswrapper[4755]: I1210 15:37:24.888046 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="e96c66c9-c2da-4ab5-95eb-3f8bbc6e8917" containerName="pull" Dec 10 15:37:24 crc kubenswrapper[4755]: I1210 15:37:24.888262 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="24e3bc3c-7e93-4c91-b0a2-85877004fafc" containerName="console" Dec 10 15:37:24 crc kubenswrapper[4755]: I1210 15:37:24.888349 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="e96c66c9-c2da-4ab5-95eb-3f8bbc6e8917" containerName="extract" Dec 10 15:37:24 crc kubenswrapper[4755]: I1210 15:37:24.888915 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-74cdb46bb8-59fxs" Dec 10 15:37:24 crc kubenswrapper[4755]: I1210 15:37:24.891397 4755 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Dec 10 15:37:24 crc kubenswrapper[4755]: I1210 15:37:24.891479 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Dec 10 15:37:24 crc kubenswrapper[4755]: I1210 15:37:24.891576 4755 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Dec 10 15:37:24 crc kubenswrapper[4755]: I1210 15:37:24.891665 4755 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-xdlrx" Dec 10 15:37:24 crc kubenswrapper[4755]: I1210 15:37:24.891841 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Dec 10 15:37:24 crc kubenswrapper[4755]: I1210 15:37:24.910824 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-74cdb46bb8-59fxs"] Dec 10 15:37:24 crc kubenswrapper[4755]: I1210 15:37:24.956068 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0f530a80-69ef-4c16-abb6-befe3285a8fd-apiservice-cert\") pod \"metallb-operator-controller-manager-74cdb46bb8-59fxs\" (UID: \"0f530a80-69ef-4c16-abb6-befe3285a8fd\") " pod="metallb-system/metallb-operator-controller-manager-74cdb46bb8-59fxs" Dec 10 15:37:24 crc kubenswrapper[4755]: I1210 15:37:24.956333 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnmdx\" (UniqueName: \"kubernetes.io/projected/0f530a80-69ef-4c16-abb6-befe3285a8fd-kube-api-access-fnmdx\") pod \"metallb-operator-controller-manager-74cdb46bb8-59fxs\" (UID: \"0f530a80-69ef-4c16-abb6-befe3285a8fd\") " pod="metallb-system/metallb-operator-controller-manager-74cdb46bb8-59fxs" Dec 10 15:37:24 crc kubenswrapper[4755]: I1210 15:37:24.956526 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0f530a80-69ef-4c16-abb6-befe3285a8fd-webhook-cert\") pod \"metallb-operator-controller-manager-74cdb46bb8-59fxs\" (UID: \"0f530a80-69ef-4c16-abb6-befe3285a8fd\") " pod="metallb-system/metallb-operator-controller-manager-74cdb46bb8-59fxs" Dec 10 15:37:25 crc kubenswrapper[4755]: I1210 15:37:25.058214 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0f530a80-69ef-4c16-abb6-befe3285a8fd-webhook-cert\") pod \"metallb-operator-controller-manager-74cdb46bb8-59fxs\" (UID: \"0f530a80-69ef-4c16-abb6-befe3285a8fd\") " pod="metallb-system/metallb-operator-controller-manager-74cdb46bb8-59fxs" Dec 10 15:37:25 crc kubenswrapper[4755]: I1210 15:37:25.058446 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0f530a80-69ef-4c16-abb6-befe3285a8fd-apiservice-cert\") pod \"metallb-operator-controller-manager-74cdb46bb8-59fxs\" (UID: \"0f530a80-69ef-4c16-abb6-befe3285a8fd\") " pod="metallb-system/metallb-operator-controller-manager-74cdb46bb8-59fxs" Dec 10 15:37:25 crc kubenswrapper[4755]: I1210 15:37:25.058575 4755 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnmdx\" (UniqueName: \"kubernetes.io/projected/0f530a80-69ef-4c16-abb6-befe3285a8fd-kube-api-access-fnmdx\") pod \"metallb-operator-controller-manager-74cdb46bb8-59fxs\" (UID: \"0f530a80-69ef-4c16-abb6-befe3285a8fd\") " pod="metallb-system/metallb-operator-controller-manager-74cdb46bb8-59fxs" Dec 10 15:37:25 crc kubenswrapper[4755]: I1210 15:37:25.074240 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0f530a80-69ef-4c16-abb6-befe3285a8fd-apiservice-cert\") pod \"metallb-operator-controller-manager-74cdb46bb8-59fxs\" (UID: \"0f530a80-69ef-4c16-abb6-befe3285a8fd\") " pod="metallb-system/metallb-operator-controller-manager-74cdb46bb8-59fxs" Dec 10 15:37:25 crc kubenswrapper[4755]: I1210 15:37:25.074310 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0f530a80-69ef-4c16-abb6-befe3285a8fd-webhook-cert\") pod \"metallb-operator-controller-manager-74cdb46bb8-59fxs\" (UID: \"0f530a80-69ef-4c16-abb6-befe3285a8fd\") " pod="metallb-system/metallb-operator-controller-manager-74cdb46bb8-59fxs" Dec 10 15:37:25 crc kubenswrapper[4755]: I1210 15:37:25.078905 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnmdx\" (UniqueName: \"kubernetes.io/projected/0f530a80-69ef-4c16-abb6-befe3285a8fd-kube-api-access-fnmdx\") pod \"metallb-operator-controller-manager-74cdb46bb8-59fxs\" (UID: \"0f530a80-69ef-4c16-abb6-befe3285a8fd\") " pod="metallb-system/metallb-operator-controller-manager-74cdb46bb8-59fxs" Dec 10 15:37:25 crc kubenswrapper[4755]: I1210 15:37:25.135920 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-69bc58cc7-kjrfc"] Dec 10 15:37:25 crc kubenswrapper[4755]: I1210 15:37:25.137066 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-69bc58cc7-kjrfc" Dec 10 15:37:25 crc kubenswrapper[4755]: I1210 15:37:25.141641 4755 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-684kp" Dec 10 15:37:25 crc kubenswrapper[4755]: I1210 15:37:25.142067 4755 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Dec 10 15:37:25 crc kubenswrapper[4755]: I1210 15:37:25.144901 4755 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Dec 10 15:37:25 crc kubenswrapper[4755]: I1210 15:37:25.159212 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-69bc58cc7-kjrfc"] Dec 10 15:37:25 crc kubenswrapper[4755]: I1210 15:37:25.159556 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9497c2dd-9f7a-4cf0-ab2b-5447fde7fc2a-webhook-cert\") pod \"metallb-operator-webhook-server-69bc58cc7-kjrfc\" (UID: \"9497c2dd-9f7a-4cf0-ab2b-5447fde7fc2a\") " pod="metallb-system/metallb-operator-webhook-server-69bc58cc7-kjrfc" Dec 10 15:37:25 crc kubenswrapper[4755]: I1210 15:37:25.159682 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldgmx\" (UniqueName: \"kubernetes.io/projected/9497c2dd-9f7a-4cf0-ab2b-5447fde7fc2a-kube-api-access-ldgmx\") pod \"metallb-operator-webhook-server-69bc58cc7-kjrfc\" (UID: \"9497c2dd-9f7a-4cf0-ab2b-5447fde7fc2a\") " pod="metallb-system/metallb-operator-webhook-server-69bc58cc7-kjrfc" Dec 10 15:37:25 crc kubenswrapper[4755]: I1210 15:37:25.159814 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9497c2dd-9f7a-4cf0-ab2b-5447fde7fc2a-apiservice-cert\") pod \"metallb-operator-webhook-server-69bc58cc7-kjrfc\" (UID: \"9497c2dd-9f7a-4cf0-ab2b-5447fde7fc2a\") " pod="metallb-system/metallb-operator-webhook-server-69bc58cc7-kjrfc" Dec 10 15:37:25 crc kubenswrapper[4755]: I1210 15:37:25.209337 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-74cdb46bb8-59fxs" Dec 10 15:37:25 crc kubenswrapper[4755]: I1210 15:37:25.260963 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9497c2dd-9f7a-4cf0-ab2b-5447fde7fc2a-apiservice-cert\") pod \"metallb-operator-webhook-server-69bc58cc7-kjrfc\" (UID: \"9497c2dd-9f7a-4cf0-ab2b-5447fde7fc2a\") " pod="metallb-system/metallb-operator-webhook-server-69bc58cc7-kjrfc" Dec 10 15:37:25 crc kubenswrapper[4755]: I1210 15:37:25.261073 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9497c2dd-9f7a-4cf0-ab2b-5447fde7fc2a-webhook-cert\") pod \"metallb-operator-webhook-server-69bc58cc7-kjrfc\" (UID: \"9497c2dd-9f7a-4cf0-ab2b-5447fde7fc2a\") " pod="metallb-system/metallb-operator-webhook-server-69bc58cc7-kjrfc" Dec 10 15:37:25 crc kubenswrapper[4755]: I1210 15:37:25.261098 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldgmx\" (UniqueName: \"kubernetes.io/projected/9497c2dd-9f7a-4cf0-ab2b-5447fde7fc2a-kube-api-access-ldgmx\") pod \"metallb-operator-webhook-server-69bc58cc7-kjrfc\" (UID: \"9497c2dd-9f7a-4cf0-ab2b-5447fde7fc2a\") " pod="metallb-system/metallb-operator-webhook-server-69bc58cc7-kjrfc" Dec 10 15:37:25 crc kubenswrapper[4755]: I1210 15:37:25.265723 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9497c2dd-9f7a-4cf0-ab2b-5447fde7fc2a-apiservice-cert\") pod \"metallb-operator-webhook-server-69bc58cc7-kjrfc\" (UID: \"9497c2dd-9f7a-4cf0-ab2b-5447fde7fc2a\") " pod="metallb-system/metallb-operator-webhook-server-69bc58cc7-kjrfc" Dec 10 15:37:25 crc kubenswrapper[4755]: I1210 15:37:25.268161 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9497c2dd-9f7a-4cf0-ab2b-5447fde7fc2a-webhook-cert\") pod \"metallb-operator-webhook-server-69bc58cc7-kjrfc\" (UID: \"9497c2dd-9f7a-4cf0-ab2b-5447fde7fc2a\") " pod="metallb-system/metallb-operator-webhook-server-69bc58cc7-kjrfc" Dec 10 15:37:25 crc kubenswrapper[4755]: I1210 15:37:25.283035 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldgmx\" (UniqueName: \"kubernetes.io/projected/9497c2dd-9f7a-4cf0-ab2b-5447fde7fc2a-kube-api-access-ldgmx\") pod \"metallb-operator-webhook-server-69bc58cc7-kjrfc\" (UID: \"9497c2dd-9f7a-4cf0-ab2b-5447fde7fc2a\") " pod="metallb-system/metallb-operator-webhook-server-69bc58cc7-kjrfc" Dec 10 15:37:25 crc kubenswrapper[4755]: I1210 15:37:25.453127 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-69bc58cc7-kjrfc" Dec 10 15:37:25 crc kubenswrapper[4755]: I1210 15:37:25.453322 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-74cdb46bb8-59fxs"] Dec 10 15:37:25 crc kubenswrapper[4755]: W1210 15:37:25.493673 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f530a80_69ef_4c16_abb6_befe3285a8fd.slice/crio-d341bc48e3b558781d0fe1e81eaf2687c63b5ca9cbb35deee4d58831834e7b93 WatchSource:0}: Error finding container d341bc48e3b558781d0fe1e81eaf2687c63b5ca9cbb35deee4d58831834e7b93: Status 404 returned error can't find the container with id d341bc48e3b558781d0fe1e81eaf2687c63b5ca9cbb35deee4d58831834e7b93 Dec 10 15:37:25 crc kubenswrapper[4755]: I1210 15:37:25.685938 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-69bc58cc7-kjrfc"] Dec 10 15:37:25 crc kubenswrapper[4755]: I1210 15:37:25.687923 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-74cdb46bb8-59fxs" event={"ID":"0f530a80-69ef-4c16-abb6-befe3285a8fd","Type":"ContainerStarted","Data":"d341bc48e3b558781d0fe1e81eaf2687c63b5ca9cbb35deee4d58831834e7b93"} Dec 10 15:37:25 crc kubenswrapper[4755]: W1210 15:37:25.695358 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9497c2dd_9f7a_4cf0_ab2b_5447fde7fc2a.slice/crio-30c92c66eddfc3eab2340b303455b86b526fe48beb472039fd55935683a6e37c WatchSource:0}: Error finding container 30c92c66eddfc3eab2340b303455b86b526fe48beb472039fd55935683a6e37c: Status 404 returned error can't find the container with id 30c92c66eddfc3eab2340b303455b86b526fe48beb472039fd55935683a6e37c Dec 10 15:37:26 crc kubenswrapper[4755]: I1210 15:37:26.695843 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-69bc58cc7-kjrfc" event={"ID":"9497c2dd-9f7a-4cf0-ab2b-5447fde7fc2a","Type":"ContainerStarted","Data":"30c92c66eddfc3eab2340b303455b86b526fe48beb472039fd55935683a6e37c"} Dec 10 15:37:30 crc kubenswrapper[4755]: I1210 15:37:30.721435 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-69bc58cc7-kjrfc" event={"ID":"9497c2dd-9f7a-4cf0-ab2b-5447fde7fc2a","Type":"ContainerStarted","Data":"ab4b2c7e42dac9c27905188165e3798d7207ea832bb24c1fae427d71ea721d43"} Dec 10 15:37:30 crc kubenswrapper[4755]: I1210 15:37:30.722170 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-69bc58cc7-kjrfc" Dec 10 15:37:30 crc kubenswrapper[4755]: I1210 15:37:30.723263 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-74cdb46bb8-59fxs" event={"ID":"0f530a80-69ef-4c16-abb6-befe3285a8fd","Type":"ContainerStarted","Data":"2d1482aa7a76933c822bf97a98575da6f1e02574af37740cb1b57e4e4df58320"} Dec 10 15:37:30 crc kubenswrapper[4755]: I1210 15:37:30.723711 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-74cdb46bb8-59fxs" Dec 10 15:37:30 crc kubenswrapper[4755]: I1210 15:37:30.743818 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-69bc58cc7-kjrfc" 
podStartSLOduration=1.131752872 podStartE2EDuration="5.743782567s" podCreationTimestamp="2025-12-10 15:37:25 +0000 UTC" firstStartedPulling="2025-12-10 15:37:25.704493254 +0000 UTC m=+842.305376886" lastFinishedPulling="2025-12-10 15:37:30.316522949 +0000 UTC m=+846.917406581" observedRunningTime="2025-12-10 15:37:30.73802578 +0000 UTC m=+847.338909412" watchObservedRunningTime="2025-12-10 15:37:30.743782567 +0000 UTC m=+847.344666189" Dec 10 15:37:30 crc kubenswrapper[4755]: I1210 15:37:30.773516 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-74cdb46bb8-59fxs" podStartSLOduration=1.974116341 podStartE2EDuration="6.773496008s" podCreationTimestamp="2025-12-10 15:37:24 +0000 UTC" firstStartedPulling="2025-12-10 15:37:25.498338338 +0000 UTC m=+842.099221960" lastFinishedPulling="2025-12-10 15:37:30.297717995 +0000 UTC m=+846.898601627" observedRunningTime="2025-12-10 15:37:30.769132959 +0000 UTC m=+847.370016621" watchObservedRunningTime="2025-12-10 15:37:30.773496008 +0000 UTC m=+847.374379650" Dec 10 15:37:45 crc kubenswrapper[4755]: I1210 15:37:45.466521 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-69bc58cc7-kjrfc" Dec 10 15:38:05 crc kubenswrapper[4755]: I1210 15:38:05.212893 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-74cdb46bb8-59fxs" Dec 10 15:38:06 crc kubenswrapper[4755]: I1210 15:38:06.046768 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-xvxb9"] Dec 10 15:38:06 crc kubenswrapper[4755]: I1210 15:38:06.048093 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-xvxb9" Dec 10 15:38:06 crc kubenswrapper[4755]: I1210 15:38:06.051116 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-bddzj"] Dec 10 15:38:06 crc kubenswrapper[4755]: I1210 15:38:06.053486 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-bddzj" Dec 10 15:38:06 crc kubenswrapper[4755]: I1210 15:38:06.053981 4755 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Dec 10 15:38:06 crc kubenswrapper[4755]: I1210 15:38:06.054394 4755 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-fgsst" Dec 10 15:38:06 crc kubenswrapper[4755]: I1210 15:38:06.056122 4755 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Dec 10 15:38:06 crc kubenswrapper[4755]: I1210 15:38:06.056840 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Dec 10 15:38:06 crc kubenswrapper[4755]: I1210 15:38:06.068582 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-xvxb9"] Dec 10 15:38:06 crc kubenswrapper[4755]: I1210 15:38:06.106329 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrc49\" (UniqueName: \"kubernetes.io/projected/68449e5b-980d-40dc-b54f-d1263755f703-kube-api-access-xrc49\") pod \"frr-k8s-bddzj\" (UID: \"68449e5b-980d-40dc-b54f-d1263755f703\") " pod="metallb-system/frr-k8s-bddzj" Dec 10 15:38:06 crc kubenswrapper[4755]: I1210 15:38:06.106412 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/68449e5b-980d-40dc-b54f-d1263755f703-metrics-certs\") pod \"frr-k8s-bddzj\" (UID: \"68449e5b-980d-40dc-b54f-d1263755f703\") " pod="metallb-system/frr-k8s-bddzj" Dec 10 15:38:06 crc kubenswrapper[4755]: I1210 15:38:06.106440 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d04b8edb-ca78-4d5d-9de7-11935b847af1-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-xvxb9\" (UID: \"d04b8edb-ca78-4d5d-9de7-11935b847af1\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-xvxb9" Dec 10 15:38:06 crc kubenswrapper[4755]: I1210 15:38:06.106455 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/68449e5b-980d-40dc-b54f-d1263755f703-frr-conf\") pod \"frr-k8s-bddzj\" (UID: \"68449e5b-980d-40dc-b54f-d1263755f703\") " pod="metallb-system/frr-k8s-bddzj" Dec 10 15:38:06 crc kubenswrapper[4755]: I1210 15:38:06.106490 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/68449e5b-980d-40dc-b54f-d1263755f703-frr-startup\") pod \"frr-k8s-bddzj\" (UID: \"68449e5b-980d-40dc-b54f-d1263755f703\") " pod="metallb-system/frr-k8s-bddzj" Dec 10 15:38:06 crc kubenswrapper[4755]: I1210 15:38:06.106510 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/68449e5b-980d-40dc-b54f-d1263755f703-frr-sockets\") pod \"frr-k8s-bddzj\" (UID: \"68449e5b-980d-40dc-b54f-d1263755f703\") " pod="metallb-system/frr-k8s-bddzj" Dec 10 15:38:06 crc kubenswrapper[4755]: I1210 15:38:06.106539 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/68449e5b-980d-40dc-b54f-d1263755f703-metrics\") pod \"frr-k8s-bddzj\" (UID: 
\"68449e5b-980d-40dc-b54f-d1263755f703\") " pod="metallb-system/frr-k8s-bddzj" Dec 10 15:38:06 crc kubenswrapper[4755]: I1210 15:38:06.106562 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7d6t\" (UniqueName: \"kubernetes.io/projected/d04b8edb-ca78-4d5d-9de7-11935b847af1-kube-api-access-b7d6t\") pod \"frr-k8s-webhook-server-7fcb986d4-xvxb9\" (UID: \"d04b8edb-ca78-4d5d-9de7-11935b847af1\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-xvxb9" Dec 10 15:38:06 crc kubenswrapper[4755]: I1210 15:38:06.106583 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/68449e5b-980d-40dc-b54f-d1263755f703-reloader\") pod \"frr-k8s-bddzj\" (UID: \"68449e5b-980d-40dc-b54f-d1263755f703\") " pod="metallb-system/frr-k8s-bddzj" Dec 10 15:38:06 crc kubenswrapper[4755]: I1210 15:38:06.131969 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-86dvw"] Dec 10 15:38:06 crc kubenswrapper[4755]: I1210 15:38:06.136809 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-86dvw" Dec 10 15:38:06 crc kubenswrapper[4755]: I1210 15:38:06.141730 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Dec 10 15:38:06 crc kubenswrapper[4755]: I1210 15:38:06.141933 4755 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Dec 10 15:38:06 crc kubenswrapper[4755]: I1210 15:38:06.141958 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-f8648f98b-qsjcn"] Dec 10 15:38:06 crc kubenswrapper[4755]: I1210 15:38:06.142258 4755 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-zrhr8" Dec 10 15:38:06 crc kubenswrapper[4755]: I1210 15:38:06.142299 4755 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Dec 10 15:38:06 crc kubenswrapper[4755]: I1210 15:38:06.143163 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-f8648f98b-qsjcn" Dec 10 15:38:06 crc kubenswrapper[4755]: I1210 15:38:06.148680 4755 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Dec 10 15:38:06 crc kubenswrapper[4755]: I1210 15:38:06.154449 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-qsjcn"] Dec 10 15:38:06 crc kubenswrapper[4755]: I1210 15:38:06.207908 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/68449e5b-980d-40dc-b54f-d1263755f703-metrics-certs\") pod \"frr-k8s-bddzj\" (UID: \"68449e5b-980d-40dc-b54f-d1263755f703\") " pod="metallb-system/frr-k8s-bddzj" Dec 10 15:38:06 crc kubenswrapper[4755]: E1210 15:38:06.208807 4755 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Dec 10 15:38:06 crc kubenswrapper[4755]: E1210 15:38:06.223796 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/68449e5b-980d-40dc-b54f-d1263755f703-metrics-certs podName:68449e5b-980d-40dc-b54f-d1263755f703 nodeName:}" failed. No retries permitted until 2025-12-10 15:38:06.723766177 +0000 UTC m=+883.324649819 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/68449e5b-980d-40dc-b54f-d1263755f703-metrics-certs") pod "frr-k8s-bddzj" (UID: "68449e5b-980d-40dc-b54f-d1263755f703") : secret "frr-k8s-certs-secret" not found Dec 10 15:38:06 crc kubenswrapper[4755]: I1210 15:38:06.224950 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d04b8edb-ca78-4d5d-9de7-11935b847af1-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-xvxb9\" (UID: \"d04b8edb-ca78-4d5d-9de7-11935b847af1\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-xvxb9" Dec 10 15:38:06 crc kubenswrapper[4755]: I1210 15:38:06.225092 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/68449e5b-980d-40dc-b54f-d1263755f703-frr-conf\") pod \"frr-k8s-bddzj\" (UID: \"68449e5b-980d-40dc-b54f-d1263755f703\") " pod="metallb-system/frr-k8s-bddzj" Dec 10 15:38:06 crc kubenswrapper[4755]: E1210 15:38:06.225158 4755 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Dec 10 15:38:06 crc kubenswrapper[4755]: E1210 15:38:06.225263 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d04b8edb-ca78-4d5d-9de7-11935b847af1-cert podName:d04b8edb-ca78-4d5d-9de7-11935b847af1 nodeName:}" failed. No retries permitted until 2025-12-10 15:38:06.725237457 +0000 UTC m=+883.326121089 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d04b8edb-ca78-4d5d-9de7-11935b847af1-cert") pod "frr-k8s-webhook-server-7fcb986d4-xvxb9" (UID: "d04b8edb-ca78-4d5d-9de7-11935b847af1") : secret "frr-k8s-webhook-server-cert" not found Dec 10 15:38:06 crc kubenswrapper[4755]: I1210 15:38:06.225395 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/68449e5b-980d-40dc-b54f-d1263755f703-frr-startup\") pod \"frr-k8s-bddzj\" (UID: \"68449e5b-980d-40dc-b54f-d1263755f703\") " pod="metallb-system/frr-k8s-bddzj" Dec 10 15:38:06 crc kubenswrapper[4755]: I1210 15:38:06.225561 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/68449e5b-980d-40dc-b54f-d1263755f703-frr-sockets\") pod \"frr-k8s-bddzj\" (UID: \"68449e5b-980d-40dc-b54f-d1263755f703\") " pod="metallb-system/frr-k8s-bddzj" Dec 10 15:38:06 crc kubenswrapper[4755]: I1210 15:38:06.225577 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/68449e5b-980d-40dc-b54f-d1263755f703-frr-conf\") pod \"frr-k8s-bddzj\" (UID: \"68449e5b-980d-40dc-b54f-d1263755f703\") " pod="metallb-system/frr-k8s-bddzj" Dec 10 15:38:06 crc kubenswrapper[4755]: I1210 15:38:06.225726 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/68449e5b-980d-40dc-b54f-d1263755f703-metrics\") pod \"frr-k8s-bddzj\" (UID: \"68449e5b-980d-40dc-b54f-d1263755f703\") " pod="metallb-system/frr-k8s-bddzj" Dec 10 15:38:06 crc kubenswrapper[4755]: I1210 15:38:06.225850 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7d6t\" (UniqueName: \"kubernetes.io/projected/d04b8edb-ca78-4d5d-9de7-11935b847af1-kube-api-access-b7d6t\") pod 
\"frr-k8s-webhook-server-7fcb986d4-xvxb9\" (UID: \"d04b8edb-ca78-4d5d-9de7-11935b847af1\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-xvxb9" Dec 10 15:38:06 crc kubenswrapper[4755]: I1210 15:38:06.225905 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/68449e5b-980d-40dc-b54f-d1263755f703-reloader\") pod \"frr-k8s-bddzj\" (UID: \"68449e5b-980d-40dc-b54f-d1263755f703\") " pod="metallb-system/frr-k8s-bddzj" Dec 10 15:38:06 crc kubenswrapper[4755]: I1210 15:38:06.225969 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/68449e5b-980d-40dc-b54f-d1263755f703-frr-sockets\") pod \"frr-k8s-bddzj\" (UID: \"68449e5b-980d-40dc-b54f-d1263755f703\") " pod="metallb-system/frr-k8s-bddzj" Dec 10 15:38:06 crc kubenswrapper[4755]: I1210 15:38:06.225983 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrc49\" (UniqueName: \"kubernetes.io/projected/68449e5b-980d-40dc-b54f-d1263755f703-kube-api-access-xrc49\") pod \"frr-k8s-bddzj\" (UID: \"68449e5b-980d-40dc-b54f-d1263755f703\") " pod="metallb-system/frr-k8s-bddzj" Dec 10 15:38:06 crc kubenswrapper[4755]: I1210 15:38:06.226343 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/68449e5b-980d-40dc-b54f-d1263755f703-frr-startup\") pod \"frr-k8s-bddzj\" (UID: \"68449e5b-980d-40dc-b54f-d1263755f703\") " pod="metallb-system/frr-k8s-bddzj" Dec 10 15:38:06 crc kubenswrapper[4755]: I1210 15:38:06.226549 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/68449e5b-980d-40dc-b54f-d1263755f703-reloader\") pod \"frr-k8s-bddzj\" (UID: \"68449e5b-980d-40dc-b54f-d1263755f703\") " pod="metallb-system/frr-k8s-bddzj" Dec 10 15:38:06 crc kubenswrapper[4755]: I1210 15:38:06.227005 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/68449e5b-980d-40dc-b54f-d1263755f703-metrics\") pod \"frr-k8s-bddzj\" (UID: \"68449e5b-980d-40dc-b54f-d1263755f703\") " pod="metallb-system/frr-k8s-bddzj" Dec 10 15:38:06 crc kubenswrapper[4755]: I1210 15:38:06.247887 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrc49\" (UniqueName: \"kubernetes.io/projected/68449e5b-980d-40dc-b54f-d1263755f703-kube-api-access-xrc49\") pod \"frr-k8s-bddzj\" (UID: \"68449e5b-980d-40dc-b54f-d1263755f703\") " pod="metallb-system/frr-k8s-bddzj" Dec 10 15:38:06 crc kubenswrapper[4755]: I1210 15:38:06.248211 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7d6t\" (UniqueName: \"kubernetes.io/projected/d04b8edb-ca78-4d5d-9de7-11935b847af1-kube-api-access-b7d6t\") pod \"frr-k8s-webhook-server-7fcb986d4-xvxb9\" (UID: \"d04b8edb-ca78-4d5d-9de7-11935b847af1\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-xvxb9" Dec 10 15:38:06 crc kubenswrapper[4755]: I1210 15:38:06.327770 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/72472af7-45ad-4637-b6ea-7c39bd98cfbf-memberlist\") pod \"speaker-86dvw\" (UID: \"72472af7-45ad-4637-b6ea-7c39bd98cfbf\") " pod="metallb-system/speaker-86dvw" Dec 10 15:38:06 crc kubenswrapper[4755]: I1210 15:38:06.327842 4755 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0463accf-2a6b-41bb-a91c-7609e8ff9a00-metrics-certs\") pod \"controller-f8648f98b-qsjcn\" (UID: \"0463accf-2a6b-41bb-a91c-7609e8ff9a00\") " pod="metallb-system/controller-f8648f98b-qsjcn" Dec 10 15:38:06 crc kubenswrapper[4755]: I1210 15:38:06.327884 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/72472af7-45ad-4637-b6ea-7c39bd98cfbf-metallb-excludel2\") pod \"speaker-86dvw\" (UID: \"72472af7-45ad-4637-b6ea-7c39bd98cfbf\") " pod="metallb-system/speaker-86dvw" Dec 10 15:38:06 crc kubenswrapper[4755]: I1210 15:38:06.327902 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6llk5\" (UniqueName: \"kubernetes.io/projected/0463accf-2a6b-41bb-a91c-7609e8ff9a00-kube-api-access-6llk5\") pod \"controller-f8648f98b-qsjcn\" (UID: \"0463accf-2a6b-41bb-a91c-7609e8ff9a00\") " pod="metallb-system/controller-f8648f98b-qsjcn" Dec 10 15:38:06 crc kubenswrapper[4755]: I1210 15:38:06.327921 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lv74q\" (UniqueName: \"kubernetes.io/projected/72472af7-45ad-4637-b6ea-7c39bd98cfbf-kube-api-access-lv74q\") pod \"speaker-86dvw\" (UID: \"72472af7-45ad-4637-b6ea-7c39bd98cfbf\") " pod="metallb-system/speaker-86dvw" Dec 10 15:38:06 crc kubenswrapper[4755]: I1210 15:38:06.327943 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0463accf-2a6b-41bb-a91c-7609e8ff9a00-cert\") pod \"controller-f8648f98b-qsjcn\" (UID: \"0463accf-2a6b-41bb-a91c-7609e8ff9a00\") " pod="metallb-system/controller-f8648f98b-qsjcn" Dec 10 15:38:06 crc kubenswrapper[4755]: I1210 15:38:06.327962 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/72472af7-45ad-4637-b6ea-7c39bd98cfbf-metrics-certs\") pod \"speaker-86dvw\" (UID: \"72472af7-45ad-4637-b6ea-7c39bd98cfbf\") " pod="metallb-system/speaker-86dvw" Dec 10 15:38:06 crc kubenswrapper[4755]: I1210 15:38:06.429357 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lv74q\" (UniqueName: \"kubernetes.io/projected/72472af7-45ad-4637-b6ea-7c39bd98cfbf-kube-api-access-lv74q\") pod \"speaker-86dvw\" (UID: \"72472af7-45ad-4637-b6ea-7c39bd98cfbf\") " pod="metallb-system/speaker-86dvw" Dec 10 15:38:06 crc kubenswrapper[4755]: I1210 15:38:06.429728 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0463accf-2a6b-41bb-a91c-7609e8ff9a00-cert\") pod \"controller-f8648f98b-qsjcn\" (UID: \"0463accf-2a6b-41bb-a91c-7609e8ff9a00\") " pod="metallb-system/controller-f8648f98b-qsjcn" Dec 10 15:38:06 crc kubenswrapper[4755]: I1210 15:38:06.429804 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/72472af7-45ad-4637-b6ea-7c39bd98cfbf-metrics-certs\") pod \"speaker-86dvw\" (UID: \"72472af7-45ad-4637-b6ea-7c39bd98cfbf\") " pod="metallb-system/speaker-86dvw" Dec 10 15:38:06 crc kubenswrapper[4755]: I1210 15:38:06.429986 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" 
(UniqueName: \"kubernetes.io/secret/72472af7-45ad-4637-b6ea-7c39bd98cfbf-memberlist\") pod \"speaker-86dvw\" (UID: \"72472af7-45ad-4637-b6ea-7c39bd98cfbf\") " pod="metallb-system/speaker-86dvw" Dec 10 15:38:06 crc kubenswrapper[4755]: I1210 15:38:06.430116 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0463accf-2a6b-41bb-a91c-7609e8ff9a00-metrics-certs\") pod \"controller-f8648f98b-qsjcn\" (UID: \"0463accf-2a6b-41bb-a91c-7609e8ff9a00\") " pod="metallb-system/controller-f8648f98b-qsjcn" Dec 10 15:38:06 crc kubenswrapper[4755]: E1210 15:38:06.430162 4755 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 10 15:38:06 crc kubenswrapper[4755]: E1210 15:38:06.430693 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/72472af7-45ad-4637-b6ea-7c39bd98cfbf-memberlist podName:72472af7-45ad-4637-b6ea-7c39bd98cfbf nodeName:}" failed. No retries permitted until 2025-12-10 15:38:06.930662763 +0000 UTC m=+883.531546445 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/72472af7-45ad-4637-b6ea-7c39bd98cfbf-memberlist") pod "speaker-86dvw" (UID: "72472af7-45ad-4637-b6ea-7c39bd98cfbf") : secret "metallb-memberlist" not found Dec 10 15:38:06 crc kubenswrapper[4755]: I1210 15:38:06.430616 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/72472af7-45ad-4637-b6ea-7c39bd98cfbf-metallb-excludel2\") pod \"speaker-86dvw\" (UID: \"72472af7-45ad-4637-b6ea-7c39bd98cfbf\") " pod="metallb-system/speaker-86dvw" Dec 10 15:38:06 crc kubenswrapper[4755]: I1210 15:38:06.430955 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6llk5\" (UniqueName: \"kubernetes.io/projected/0463accf-2a6b-41bb-a91c-7609e8ff9a00-kube-api-access-6llk5\") pod \"controller-f8648f98b-qsjcn\" (UID: \"0463accf-2a6b-41bb-a91c-7609e8ff9a00\") " pod="metallb-system/controller-f8648f98b-qsjcn" Dec 10 15:38:06 crc kubenswrapper[4755]: I1210 15:38:06.431530 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/72472af7-45ad-4637-b6ea-7c39bd98cfbf-metallb-excludel2\") pod \"speaker-86dvw\" (UID: \"72472af7-45ad-4637-b6ea-7c39bd98cfbf\") " pod="metallb-system/speaker-86dvw" Dec 10 15:38:06 crc kubenswrapper[4755]: I1210 15:38:06.432918 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0463accf-2a6b-41bb-a91c-7609e8ff9a00-cert\") pod \"controller-f8648f98b-qsjcn\" (UID: \"0463accf-2a6b-41bb-a91c-7609e8ff9a00\") " pod="metallb-system/controller-f8648f98b-qsjcn" Dec 10 15:38:06 crc kubenswrapper[4755]: I1210 15:38:06.433639 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0463accf-2a6b-41bb-a91c-7609e8ff9a00-metrics-certs\") pod \"controller-f8648f98b-qsjcn\" (UID: \"0463accf-2a6b-41bb-a91c-7609e8ff9a00\") " pod="metallb-system/controller-f8648f98b-qsjcn" Dec 10 15:38:06 crc kubenswrapper[4755]: I1210 15:38:06.433810 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/72472af7-45ad-4637-b6ea-7c39bd98cfbf-metrics-certs\") pod \"speaker-86dvw\" (UID: 
\"72472af7-45ad-4637-b6ea-7c39bd98cfbf\") " pod="metallb-system/speaker-86dvw" Dec 10 15:38:06 crc kubenswrapper[4755]: I1210 15:38:06.445661 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6llk5\" (UniqueName: \"kubernetes.io/projected/0463accf-2a6b-41bb-a91c-7609e8ff9a00-kube-api-access-6llk5\") pod \"controller-f8648f98b-qsjcn\" (UID: \"0463accf-2a6b-41bb-a91c-7609e8ff9a00\") " pod="metallb-system/controller-f8648f98b-qsjcn" Dec 10 15:38:06 crc kubenswrapper[4755]: I1210 15:38:06.446180 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lv74q\" (UniqueName: \"kubernetes.io/projected/72472af7-45ad-4637-b6ea-7c39bd98cfbf-kube-api-access-lv74q\") pod \"speaker-86dvw\" (UID: \"72472af7-45ad-4637-b6ea-7c39bd98cfbf\") " pod="metallb-system/speaker-86dvw" Dec 10 15:38:06 crc kubenswrapper[4755]: I1210 15:38:06.469137 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-f8648f98b-qsjcn" Dec 10 15:38:06 crc kubenswrapper[4755]: I1210 15:38:06.653362 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-qsjcn"] Dec 10 15:38:06 crc kubenswrapper[4755]: I1210 15:38:06.738240 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d04b8edb-ca78-4d5d-9de7-11935b847af1-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-xvxb9\" (UID: \"d04b8edb-ca78-4d5d-9de7-11935b847af1\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-xvxb9" Dec 10 15:38:06 crc kubenswrapper[4755]: I1210 15:38:06.738400 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/68449e5b-980d-40dc-b54f-d1263755f703-metrics-certs\") pod \"frr-k8s-bddzj\" (UID: \"68449e5b-980d-40dc-b54f-d1263755f703\") " pod="metallb-system/frr-k8s-bddzj" Dec 10 15:38:06 crc kubenswrapper[4755]: I1210 15:38:06.743413 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d04b8edb-ca78-4d5d-9de7-11935b847af1-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-xvxb9\" (UID: \"d04b8edb-ca78-4d5d-9de7-11935b847af1\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-xvxb9" Dec 10 15:38:06 crc kubenswrapper[4755]: I1210 15:38:06.750885 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/68449e5b-980d-40dc-b54f-d1263755f703-metrics-certs\") pod \"frr-k8s-bddzj\" (UID: \"68449e5b-980d-40dc-b54f-d1263755f703\") " pod="metallb-system/frr-k8s-bddzj" Dec 10 15:38:06 crc kubenswrapper[4755]: I1210 15:38:06.940987 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/72472af7-45ad-4637-b6ea-7c39bd98cfbf-memberlist\") pod \"speaker-86dvw\" (UID: \"72472af7-45ad-4637-b6ea-7c39bd98cfbf\") " pod="metallb-system/speaker-86dvw" Dec 10 15:38:06 crc kubenswrapper[4755]: E1210 15:38:06.941241 4755 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 10 15:38:06 crc kubenswrapper[4755]: E1210 15:38:06.941390 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/72472af7-45ad-4637-b6ea-7c39bd98cfbf-memberlist podName:72472af7-45ad-4637-b6ea-7c39bd98cfbf nodeName:}" failed. 
No retries permitted until 2025-12-10 15:38:07.941372738 +0000 UTC m=+884.542256370 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/72472af7-45ad-4637-b6ea-7c39bd98cfbf-memberlist") pod "speaker-86dvw" (UID: "72472af7-45ad-4637-b6ea-7c39bd98cfbf") : secret "metallb-memberlist" not found Dec 10 15:38:06 crc kubenswrapper[4755]: I1210 15:38:06.956534 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-qsjcn" event={"ID":"0463accf-2a6b-41bb-a91c-7609e8ff9a00","Type":"ContainerStarted","Data":"73a7f567878bdf97fd283cb2e47265f50d77ccb4b31c1fce43e1a28a0e85fb24"} Dec 10 15:38:06 crc kubenswrapper[4755]: I1210 15:38:06.956579 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-qsjcn" event={"ID":"0463accf-2a6b-41bb-a91c-7609e8ff9a00","Type":"ContainerStarted","Data":"80cfd3162b94a1fb761b6daf7e8de71dfcb5d4dbdacea490fdb609bedacdb06c"} Dec 10 15:38:06 crc kubenswrapper[4755]: I1210 15:38:06.971664 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-xvxb9" Dec 10 15:38:06 crc kubenswrapper[4755]: I1210 15:38:06.988951 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-bddzj" Dec 10 15:38:07 crc kubenswrapper[4755]: I1210 15:38:07.189949 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-xvxb9"] Dec 10 15:38:07 crc kubenswrapper[4755]: W1210 15:38:07.192258 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd04b8edb_ca78_4d5d_9de7_11935b847af1.slice/crio-a5fe29fdd31c5e6d719ab02dd2c441c886fe18aa876c4526ed13e1ea17555c2f WatchSource:0}: Error finding container a5fe29fdd31c5e6d719ab02dd2c441c886fe18aa876c4526ed13e1ea17555c2f: Status 404 returned error can't find the container with id a5fe29fdd31c5e6d719ab02dd2c441c886fe18aa876c4526ed13e1ea17555c2f Dec 10 15:38:07 crc kubenswrapper[4755]: I1210 15:38:07.953157 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/72472af7-45ad-4637-b6ea-7c39bd98cfbf-memberlist\") pod \"speaker-86dvw\" (UID: \"72472af7-45ad-4637-b6ea-7c39bd98cfbf\") " pod="metallb-system/speaker-86dvw" Dec 10 15:38:07 crc kubenswrapper[4755]: I1210 15:38:07.966162 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bddzj" event={"ID":"68449e5b-980d-40dc-b54f-d1263755f703","Type":"ContainerStarted","Data":"d2b2ae57d668b30b91be76891529fb0c67e61e797146766d462d825f04de88bb"} Dec 10 15:38:07 crc kubenswrapper[4755]: I1210 15:38:07.971163 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-qsjcn" event={"ID":"0463accf-2a6b-41bb-a91c-7609e8ff9a00","Type":"ContainerStarted","Data":"aa7a5df1b2b0380e1dbed1d7a69a6570405ed1cdcc56e9d737f6992cde7a61f0"} Dec 10 15:38:07 crc kubenswrapper[4755]: I1210 15:38:07.971451 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-f8648f98b-qsjcn" Dec 10 15:38:07 crc kubenswrapper[4755]: I1210 15:38:07.973349 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-xvxb9" 
event={"ID":"d04b8edb-ca78-4d5d-9de7-11935b847af1","Type":"ContainerStarted","Data":"a5fe29fdd31c5e6d719ab02dd2c441c886fe18aa876c4526ed13e1ea17555c2f"} Dec 10 15:38:07 crc kubenswrapper[4755]: I1210 15:38:07.977214 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/72472af7-45ad-4637-b6ea-7c39bd98cfbf-memberlist\") pod \"speaker-86dvw\" (UID: \"72472af7-45ad-4637-b6ea-7c39bd98cfbf\") " pod="metallb-system/speaker-86dvw" Dec 10 15:38:08 crc kubenswrapper[4755]: I1210 15:38:08.004726 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-f8648f98b-qsjcn" podStartSLOduration=2.004705512 podStartE2EDuration="2.004705512s" podCreationTimestamp="2025-12-10 15:38:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:38:07.995644045 +0000 UTC m=+884.596527697" watchObservedRunningTime="2025-12-10 15:38:08.004705512 +0000 UTC m=+884.605589144" Dec 10 15:38:08 crc kubenswrapper[4755]: I1210 15:38:08.256414 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-86dvw" Dec 10 15:38:08 crc kubenswrapper[4755]: W1210 15:38:08.277101 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod72472af7_45ad_4637_b6ea_7c39bd98cfbf.slice/crio-90e9a3fe2250a8bc8ce7de6536553657bd2f1cd571390d1616d79a9a7018762b WatchSource:0}: Error finding container 90e9a3fe2250a8bc8ce7de6536553657bd2f1cd571390d1616d79a9a7018762b: Status 404 returned error can't find the container with id 90e9a3fe2250a8bc8ce7de6536553657bd2f1cd571390d1616d79a9a7018762b Dec 10 15:38:08 crc kubenswrapper[4755]: I1210 15:38:08.991119 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-86dvw" event={"ID":"72472af7-45ad-4637-b6ea-7c39bd98cfbf","Type":"ContainerStarted","Data":"562163e25e7d39714587bf4290a76143d3bbdca008904cf06ba26ea563c1355c"} Dec 10 15:38:08 crc kubenswrapper[4755]: I1210 15:38:08.991485 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-86dvw" event={"ID":"72472af7-45ad-4637-b6ea-7c39bd98cfbf","Type":"ContainerStarted","Data":"e93c880b48702e43772d181675ff667c18bcb6899cfd9dec7e80fc9fb4f7d013"} Dec 10 15:38:08 crc kubenswrapper[4755]: I1210 15:38:08.991497 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-86dvw" event={"ID":"72472af7-45ad-4637-b6ea-7c39bd98cfbf","Type":"ContainerStarted","Data":"90e9a3fe2250a8bc8ce7de6536553657bd2f1cd571390d1616d79a9a7018762b"} Dec 10 15:38:08 crc kubenswrapper[4755]: I1210 15:38:08.991724 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-86dvw" Dec 10 15:38:09 crc kubenswrapper[4755]: I1210 15:38:09.208359 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-86dvw" podStartSLOduration=3.208340455 podStartE2EDuration="3.208340455s" podCreationTimestamp="2025-12-10 15:38:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:38:09.206965938 +0000 UTC m=+885.807849570" watchObservedRunningTime="2025-12-10 15:38:09.208340455 +0000 UTC m=+885.809224087" Dec 10 15:38:16 crc kubenswrapper[4755]: I1210 15:38:16.073133 4755 generic.go:334] "Generic (PLEG): container finished" 
podID="68449e5b-980d-40dc-b54f-d1263755f703" containerID="6430a95d093fde050cad274e24644ef64b0be27124af407ab1ff90644dece442" exitCode=0 Dec 10 15:38:16 crc kubenswrapper[4755]: I1210 15:38:16.073211 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bddzj" event={"ID":"68449e5b-980d-40dc-b54f-d1263755f703","Type":"ContainerDied","Data":"6430a95d093fde050cad274e24644ef64b0be27124af407ab1ff90644dece442"} Dec 10 15:38:16 crc kubenswrapper[4755]: I1210 15:38:16.075826 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-xvxb9" event={"ID":"d04b8edb-ca78-4d5d-9de7-11935b847af1","Type":"ContainerStarted","Data":"6cdc416df3b74e56681c045e4f574fa488835bb2ecb562687dcbbc7fd2bfd297"} Dec 10 15:38:16 crc kubenswrapper[4755]: I1210 15:38:16.076041 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-xvxb9" Dec 10 15:38:16 crc kubenswrapper[4755]: I1210 15:38:16.146330 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-xvxb9" podStartSLOduration=2.078071104 podStartE2EDuration="10.146307277s" podCreationTimestamp="2025-12-10 15:38:06 +0000 UTC" firstStartedPulling="2025-12-10 15:38:07.19434155 +0000 UTC m=+883.795225182" lastFinishedPulling="2025-12-10 15:38:15.262577733 +0000 UTC m=+891.863461355" observedRunningTime="2025-12-10 15:38:16.138246347 +0000 UTC m=+892.739129969" watchObservedRunningTime="2025-12-10 15:38:16.146307277 +0000 UTC m=+892.747190929" Dec 10 15:38:16 crc kubenswrapper[4755]: I1210 15:38:16.474335 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-f8648f98b-qsjcn" Dec 10 15:38:17 crc kubenswrapper[4755]: I1210 15:38:17.082656 4755 generic.go:334] "Generic (PLEG): container finished" podID="68449e5b-980d-40dc-b54f-d1263755f703" containerID="c26fab5941efb4fe82b263853714f1740e52bce101baafc51670294fafa361ed" exitCode=0 Dec 10 15:38:17 crc kubenswrapper[4755]: I1210 15:38:17.082716 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bddzj" event={"ID":"68449e5b-980d-40dc-b54f-d1263755f703","Type":"ContainerDied","Data":"c26fab5941efb4fe82b263853714f1740e52bce101baafc51670294fafa361ed"} Dec 10 15:38:18 crc kubenswrapper[4755]: I1210 15:38:18.091519 4755 generic.go:334] "Generic (PLEG): container finished" podID="68449e5b-980d-40dc-b54f-d1263755f703" containerID="5d5e52afe20bd9821a53e7edc5cf17db3ab89a58c3a02f9d094f59d5b2a22f7e" exitCode=0 Dec 10 15:38:18 crc kubenswrapper[4755]: I1210 15:38:18.091704 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bddzj" event={"ID":"68449e5b-980d-40dc-b54f-d1263755f703","Type":"ContainerDied","Data":"5d5e52afe20bd9821a53e7edc5cf17db3ab89a58c3a02f9d094f59d5b2a22f7e"} Dec 10 15:38:18 crc kubenswrapper[4755]: I1210 15:38:18.260768 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-86dvw" Dec 10 15:38:19 crc kubenswrapper[4755]: I1210 15:38:19.105086 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bddzj" event={"ID":"68449e5b-980d-40dc-b54f-d1263755f703","Type":"ContainerStarted","Data":"b434eb56f3b75d7b5d624ed9ad568f48b027b6db46fc52390521fade96e96549"} Dec 10 15:38:19 crc kubenswrapper[4755]: I1210 15:38:19.105399 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bddzj" 
event={"ID":"68449e5b-980d-40dc-b54f-d1263755f703","Type":"ContainerStarted","Data":"6c9b6effea0a165b10c2e366c979323de3da7f5e6920abd7dbf12cd1f3f1f84a"} Dec 10 15:38:19 crc kubenswrapper[4755]: I1210 15:38:19.105416 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bddzj" event={"ID":"68449e5b-980d-40dc-b54f-d1263755f703","Type":"ContainerStarted","Data":"742b0331bda8d01168f44818fe522ee670f78818acc2baa06364792ea587cef0"} Dec 10 15:38:20 crc kubenswrapper[4755]: I1210 15:38:20.115837 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bddzj" event={"ID":"68449e5b-980d-40dc-b54f-d1263755f703","Type":"ContainerStarted","Data":"d5b9cf33dff17fe3b9c87bb52e3298e7cb677f840e4248ef7f562f11b3672067"} Dec 10 15:38:20 crc kubenswrapper[4755]: I1210 15:38:20.115886 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bddzj" event={"ID":"68449e5b-980d-40dc-b54f-d1263755f703","Type":"ContainerStarted","Data":"aa8385e650b633ca4df6eb74bb3bf56752ae130fcd1386caa83509c4c1f12e36"} Dec 10 15:38:20 crc kubenswrapper[4755]: I1210 15:38:20.115899 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bddzj" event={"ID":"68449e5b-980d-40dc-b54f-d1263755f703","Type":"ContainerStarted","Data":"769cf1abf6c53e76601c9f086eb52c16d117091d1d8197823ef2d6235b1df2c3"} Dec 10 15:38:20 crc kubenswrapper[4755]: I1210 15:38:20.116043 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-bddzj" Dec 10 15:38:20 crc kubenswrapper[4755]: I1210 15:38:20.139447 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-bddzj" podStartSLOduration=5.996687889 podStartE2EDuration="14.139430335s" podCreationTimestamp="2025-12-10 15:38:06 +0000 UTC" firstStartedPulling="2025-12-10 15:38:07.098516446 +0000 UTC m=+883.699400078" lastFinishedPulling="2025-12-10 15:38:15.241258892 +0000 UTC m=+891.842142524" observedRunningTime="2025-12-10 15:38:20.137271256 +0000 UTC m=+896.738154908" watchObservedRunningTime="2025-12-10 15:38:20.139430335 +0000 UTC m=+896.740313967" Dec 10 15:38:21 crc kubenswrapper[4755]: I1210 15:38:21.247920 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-h4nx8"] Dec 10 15:38:21 crc kubenswrapper[4755]: I1210 15:38:21.249316 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-h4nx8" Dec 10 15:38:21 crc kubenswrapper[4755]: I1210 15:38:21.252535 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Dec 10 15:38:21 crc kubenswrapper[4755]: I1210 15:38:21.252757 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Dec 10 15:38:21 crc kubenswrapper[4755]: I1210 15:38:21.252970 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-cxggh" Dec 10 15:38:21 crc kubenswrapper[4755]: I1210 15:38:21.274336 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-h4nx8"] Dec 10 15:38:21 crc kubenswrapper[4755]: I1210 15:38:21.344665 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftc65\" (UniqueName: \"kubernetes.io/projected/54692b5a-fc59-4626-b312-d05a407c6bf7-kube-api-access-ftc65\") pod \"openstack-operator-index-h4nx8\" (UID: \"54692b5a-fc59-4626-b312-d05a407c6bf7\") " pod="openstack-operators/openstack-operator-index-h4nx8" Dec 10 15:38:21 crc kubenswrapper[4755]: I1210 15:38:21.446353 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftc65\" (UniqueName: \"kubernetes.io/projected/54692b5a-fc59-4626-b312-d05a407c6bf7-kube-api-access-ftc65\") pod \"openstack-operator-index-h4nx8\" (UID: \"54692b5a-fc59-4626-b312-d05a407c6bf7\") " pod="openstack-operators/openstack-operator-index-h4nx8" Dec 10 15:38:21 crc kubenswrapper[4755]: I1210 15:38:21.467043 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftc65\" (UniqueName: \"kubernetes.io/projected/54692b5a-fc59-4626-b312-d05a407c6bf7-kube-api-access-ftc65\") pod \"openstack-operator-index-h4nx8\" (UID: \"54692b5a-fc59-4626-b312-d05a407c6bf7\") " pod="openstack-operators/openstack-operator-index-h4nx8" Dec 10 15:38:21 crc kubenswrapper[4755]: I1210 15:38:21.570007 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-h4nx8" Dec 10 15:38:21 crc kubenswrapper[4755]: W1210 15:38:21.938843 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod54692b5a_fc59_4626_b312_d05a407c6bf7.slice/crio-c37d7c79038bf3f350099ad7b678323fef0868effcfe220c3ce593b670b477bc WatchSource:0}: Error finding container c37d7c79038bf3f350099ad7b678323fef0868effcfe220c3ce593b670b477bc: Status 404 returned error can't find the container with id c37d7c79038bf3f350099ad7b678323fef0868effcfe220c3ce593b670b477bc Dec 10 15:38:21 crc kubenswrapper[4755]: I1210 15:38:21.940357 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-h4nx8"] Dec 10 15:38:21 crc kubenswrapper[4755]: I1210 15:38:21.990206 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-bddzj" Dec 10 15:38:22 crc kubenswrapper[4755]: I1210 15:38:22.043201 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-bddzj" Dec 10 15:38:22 crc kubenswrapper[4755]: I1210 15:38:22.135375 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-h4nx8" event={"ID":"54692b5a-fc59-4626-b312-d05a407c6bf7","Type":"ContainerStarted","Data":"c37d7c79038bf3f350099ad7b678323fef0868effcfe220c3ce593b670b477bc"} Dec 10 15:38:24 crc kubenswrapper[4755]: I1210 15:38:24.624797 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-h4nx8"] Dec 10 15:38:25 crc kubenswrapper[4755]: I1210 15:38:25.241997 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-5gf5f"] Dec 10 15:38:25 crc kubenswrapper[4755]: I1210 15:38:25.242922 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-5gf5f" Dec 10 15:38:25 crc kubenswrapper[4755]: I1210 15:38:25.248977 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-5gf5f"] Dec 10 15:38:25 crc kubenswrapper[4755]: I1210 15:38:25.436975 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gk2r\" (UniqueName: \"kubernetes.io/projected/be7f5dba-16d0-45c6-a8df-f978e3042232-kube-api-access-6gk2r\") pod \"openstack-operator-index-5gf5f\" (UID: \"be7f5dba-16d0-45c6-a8df-f978e3042232\") " pod="openstack-operators/openstack-operator-index-5gf5f" Dec 10 15:38:25 crc kubenswrapper[4755]: I1210 15:38:25.539416 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gk2r\" (UniqueName: \"kubernetes.io/projected/be7f5dba-16d0-45c6-a8df-f978e3042232-kube-api-access-6gk2r\") pod \"openstack-operator-index-5gf5f\" (UID: \"be7f5dba-16d0-45c6-a8df-f978e3042232\") " pod="openstack-operators/openstack-operator-index-5gf5f" Dec 10 15:38:25 crc kubenswrapper[4755]: I1210 15:38:25.574447 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gk2r\" (UniqueName: \"kubernetes.io/projected/be7f5dba-16d0-45c6-a8df-f978e3042232-kube-api-access-6gk2r\") pod \"openstack-operator-index-5gf5f\" (UID: \"be7f5dba-16d0-45c6-a8df-f978e3042232\") " pod="openstack-operators/openstack-operator-index-5gf5f" Dec 10 15:38:25 crc kubenswrapper[4755]: I1210 15:38:25.873423 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-5gf5f" Dec 10 15:38:26 crc kubenswrapper[4755]: I1210 15:38:26.710971 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-5gf5f"] Dec 10 15:38:26 crc kubenswrapper[4755]: I1210 15:38:26.977426 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-xvxb9" Dec 10 15:38:27 crc kubenswrapper[4755]: I1210 15:38:27.170496 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-h4nx8" event={"ID":"54692b5a-fc59-4626-b312-d05a407c6bf7","Type":"ContainerStarted","Data":"9067d7264396f5542396ab10adc6d1a30aeef3a8dff52224ec92fb976ade3b68"} Dec 10 15:38:27 crc kubenswrapper[4755]: I1210 15:38:27.170616 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-h4nx8" podUID="54692b5a-fc59-4626-b312-d05a407c6bf7" containerName="registry-server" containerID="cri-o://9067d7264396f5542396ab10adc6d1a30aeef3a8dff52224ec92fb976ade3b68" gracePeriod=2 Dec 10 15:38:27 crc kubenswrapper[4755]: I1210 15:38:27.171991 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-5gf5f" event={"ID":"be7f5dba-16d0-45c6-a8df-f978e3042232","Type":"ContainerStarted","Data":"7c19b24bfb95cefe2d0b9f80536f0ed1084032dc2b07105ff1f39239c4b51482"} Dec 10 15:38:27 crc kubenswrapper[4755]: I1210 15:38:27.172097 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-5gf5f" event={"ID":"be7f5dba-16d0-45c6-a8df-f978e3042232","Type":"ContainerStarted","Data":"fdefe8165aa47997cde223cd884253dfdef2326a05b92f67fccc47e4d1defebc"} Dec 10 15:38:27 crc kubenswrapper[4755]: I1210 15:38:27.190712 4755 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack-operators/openstack-operator-index-h4nx8" podStartSLOduration=1.8210393759999999 podStartE2EDuration="6.190691728s" podCreationTimestamp="2025-12-10 15:38:21 +0000 UTC" firstStartedPulling="2025-12-10 15:38:21.946396621 +0000 UTC m=+898.547280253" lastFinishedPulling="2025-12-10 15:38:26.316048963 +0000 UTC m=+902.916932605" observedRunningTime="2025-12-10 15:38:27.183280266 +0000 UTC m=+903.784163908" watchObservedRunningTime="2025-12-10 15:38:27.190691728 +0000 UTC m=+903.791575370" Dec 10 15:38:27 crc kubenswrapper[4755]: I1210 15:38:27.196257 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-5gf5f" podStartSLOduration=2.055100958 podStartE2EDuration="2.196233489s" podCreationTimestamp="2025-12-10 15:38:25 +0000 UTC" firstStartedPulling="2025-12-10 15:38:26.72055158 +0000 UTC m=+903.321435222" lastFinishedPulling="2025-12-10 15:38:26.861684111 +0000 UTC m=+903.462567753" observedRunningTime="2025-12-10 15:38:27.194941475 +0000 UTC m=+903.795825107" watchObservedRunningTime="2025-12-10 15:38:27.196233489 +0000 UTC m=+903.797117121" Dec 10 15:38:27 crc kubenswrapper[4755]: I1210 15:38:27.790973 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-h4nx8" Dec 10 15:38:27 crc kubenswrapper[4755]: I1210 15:38:27.972577 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftc65\" (UniqueName: \"kubernetes.io/projected/54692b5a-fc59-4626-b312-d05a407c6bf7-kube-api-access-ftc65\") pod \"54692b5a-fc59-4626-b312-d05a407c6bf7\" (UID: \"54692b5a-fc59-4626-b312-d05a407c6bf7\") " Dec 10 15:38:27 crc kubenswrapper[4755]: I1210 15:38:27.982454 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54692b5a-fc59-4626-b312-d05a407c6bf7-kube-api-access-ftc65" (OuterVolumeSpecName: "kube-api-access-ftc65") pod "54692b5a-fc59-4626-b312-d05a407c6bf7" (UID: "54692b5a-fc59-4626-b312-d05a407c6bf7"). InnerVolumeSpecName "kube-api-access-ftc65". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:38:28 crc kubenswrapper[4755]: I1210 15:38:28.074023 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ftc65\" (UniqueName: \"kubernetes.io/projected/54692b5a-fc59-4626-b312-d05a407c6bf7-kube-api-access-ftc65\") on node \"crc\" DevicePath \"\"" Dec 10 15:38:28 crc kubenswrapper[4755]: I1210 15:38:28.180143 4755 generic.go:334] "Generic (PLEG): container finished" podID="54692b5a-fc59-4626-b312-d05a407c6bf7" containerID="9067d7264396f5542396ab10adc6d1a30aeef3a8dff52224ec92fb976ade3b68" exitCode=0 Dec 10 15:38:28 crc kubenswrapper[4755]: I1210 15:38:28.180209 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-h4nx8" Dec 10 15:38:28 crc kubenswrapper[4755]: I1210 15:38:28.180197 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-h4nx8" event={"ID":"54692b5a-fc59-4626-b312-d05a407c6bf7","Type":"ContainerDied","Data":"9067d7264396f5542396ab10adc6d1a30aeef3a8dff52224ec92fb976ade3b68"} Dec 10 15:38:28 crc kubenswrapper[4755]: I1210 15:38:28.180266 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-h4nx8" event={"ID":"54692b5a-fc59-4626-b312-d05a407c6bf7","Type":"ContainerDied","Data":"c37d7c79038bf3f350099ad7b678323fef0868effcfe220c3ce593b670b477bc"} Dec 10 15:38:28 crc kubenswrapper[4755]: I1210 15:38:28.180286 4755 scope.go:117] "RemoveContainer" containerID="9067d7264396f5542396ab10adc6d1a30aeef3a8dff52224ec92fb976ade3b68" Dec 10 15:38:28 crc kubenswrapper[4755]: I1210 15:38:28.195271 4755 scope.go:117] "RemoveContainer" containerID="9067d7264396f5542396ab10adc6d1a30aeef3a8dff52224ec92fb976ade3b68" Dec 10 15:38:28 crc kubenswrapper[4755]: E1210 15:38:28.195718 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9067d7264396f5542396ab10adc6d1a30aeef3a8dff52224ec92fb976ade3b68\": container with ID starting with 9067d7264396f5542396ab10adc6d1a30aeef3a8dff52224ec92fb976ade3b68 not found: ID does not exist" containerID="9067d7264396f5542396ab10adc6d1a30aeef3a8dff52224ec92fb976ade3b68" Dec 10 15:38:28 crc kubenswrapper[4755]: I1210 15:38:28.195749 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9067d7264396f5542396ab10adc6d1a30aeef3a8dff52224ec92fb976ade3b68"} err="failed to get container status \"9067d7264396f5542396ab10adc6d1a30aeef3a8dff52224ec92fb976ade3b68\": rpc error: code = NotFound desc = could not find container \"9067d7264396f5542396ab10adc6d1a30aeef3a8dff52224ec92fb976ade3b68\": container with ID starting with 9067d7264396f5542396ab10adc6d1a30aeef3a8dff52224ec92fb976ade3b68 not found: ID does not exist" Dec 10 15:38:28 crc kubenswrapper[4755]: I1210 15:38:28.211691 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-h4nx8"] Dec 10 15:38:28 crc kubenswrapper[4755]: I1210 15:38:28.216597 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-h4nx8"] Dec 10 15:38:29 crc kubenswrapper[4755]: I1210 15:38:29.776539 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54692b5a-fc59-4626-b312-d05a407c6bf7" path="/var/lib/kubelet/pods/54692b5a-fc59-4626-b312-d05a407c6bf7/volumes" Dec 10 15:38:35 crc kubenswrapper[4755]: I1210 15:38:35.873950 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-5gf5f" Dec 10 15:38:35 crc kubenswrapper[4755]: I1210 15:38:35.874310 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-5gf5f" Dec 10 15:38:35 crc kubenswrapper[4755]: I1210 15:38:35.927640 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-5gf5f" Dec 10 15:38:36 crc kubenswrapper[4755]: I1210 15:38:36.289932 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-5gf5f" Dec 10 15:38:36 crc 
kubenswrapper[4755]: I1210 15:38:36.992192 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-bddzj" Dec 10 15:38:40 crc kubenswrapper[4755]: I1210 15:38:40.358886 4755 patch_prober.go:28] interesting pod/machine-config-daemon-ggt8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:38:40 crc kubenswrapper[4755]: I1210 15:38:40.359207 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:38:41 crc kubenswrapper[4755]: I1210 15:38:41.868075 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/147d6c35b26de94843aae2cc16def28bc6b9292bfcf7a2079ec0c049657wkgl"] Dec 10 15:38:41 crc kubenswrapper[4755]: E1210 15:38:41.868665 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54692b5a-fc59-4626-b312-d05a407c6bf7" containerName="registry-server" Dec 10 15:38:41 crc kubenswrapper[4755]: I1210 15:38:41.868681 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="54692b5a-fc59-4626-b312-d05a407c6bf7" containerName="registry-server" Dec 10 15:38:41 crc kubenswrapper[4755]: I1210 15:38:41.868822 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="54692b5a-fc59-4626-b312-d05a407c6bf7" containerName="registry-server" Dec 10 15:38:41 crc kubenswrapper[4755]: I1210 15:38:41.869864 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/147d6c35b26de94843aae2cc16def28bc6b9292bfcf7a2079ec0c049657wkgl" Dec 10 15:38:41 crc kubenswrapper[4755]: I1210 15:38:41.872242 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-c9zvq" Dec 10 15:38:41 crc kubenswrapper[4755]: I1210 15:38:41.876817 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/147d6c35b26de94843aae2cc16def28bc6b9292bfcf7a2079ec0c049657wkgl"] Dec 10 15:38:41 crc kubenswrapper[4755]: I1210 15:38:41.971136 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwggg\" (UniqueName: \"kubernetes.io/projected/42436916-b7c2-4531-ada8-a5590d158fe9-kube-api-access-nwggg\") pod \"147d6c35b26de94843aae2cc16def28bc6b9292bfcf7a2079ec0c049657wkgl\" (UID: \"42436916-b7c2-4531-ada8-a5590d158fe9\") " pod="openstack-operators/147d6c35b26de94843aae2cc16def28bc6b9292bfcf7a2079ec0c049657wkgl" Dec 10 15:38:41 crc kubenswrapper[4755]: I1210 15:38:41.971237 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/42436916-b7c2-4531-ada8-a5590d158fe9-util\") pod \"147d6c35b26de94843aae2cc16def28bc6b9292bfcf7a2079ec0c049657wkgl\" (UID: \"42436916-b7c2-4531-ada8-a5590d158fe9\") " pod="openstack-operators/147d6c35b26de94843aae2cc16def28bc6b9292bfcf7a2079ec0c049657wkgl" Dec 10 15:38:41 crc kubenswrapper[4755]: I1210 15:38:41.971260 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/42436916-b7c2-4531-ada8-a5590d158fe9-bundle\") pod \"147d6c35b26de94843aae2cc16def28bc6b9292bfcf7a2079ec0c049657wkgl\" (UID: \"42436916-b7c2-4531-ada8-a5590d158fe9\") " pod="openstack-operators/147d6c35b26de94843aae2cc16def28bc6b9292bfcf7a2079ec0c049657wkgl" Dec 10 15:38:42 crc kubenswrapper[4755]: I1210 15:38:42.072336 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/42436916-b7c2-4531-ada8-a5590d158fe9-util\") pod \"147d6c35b26de94843aae2cc16def28bc6b9292bfcf7a2079ec0c049657wkgl\" (UID: \"42436916-b7c2-4531-ada8-a5590d158fe9\") " pod="openstack-operators/147d6c35b26de94843aae2cc16def28bc6b9292bfcf7a2079ec0c049657wkgl" Dec 10 15:38:42 crc kubenswrapper[4755]: I1210 15:38:42.072385 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/42436916-b7c2-4531-ada8-a5590d158fe9-bundle\") pod \"147d6c35b26de94843aae2cc16def28bc6b9292bfcf7a2079ec0c049657wkgl\" (UID: \"42436916-b7c2-4531-ada8-a5590d158fe9\") " pod="openstack-operators/147d6c35b26de94843aae2cc16def28bc6b9292bfcf7a2079ec0c049657wkgl" Dec 10 15:38:42 crc kubenswrapper[4755]: I1210 15:38:42.072440 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwggg\" (UniqueName: \"kubernetes.io/projected/42436916-b7c2-4531-ada8-a5590d158fe9-kube-api-access-nwggg\") pod \"147d6c35b26de94843aae2cc16def28bc6b9292bfcf7a2079ec0c049657wkgl\" (UID: \"42436916-b7c2-4531-ada8-a5590d158fe9\") " pod="openstack-operators/147d6c35b26de94843aae2cc16def28bc6b9292bfcf7a2079ec0c049657wkgl" Dec 10 15:38:42 crc kubenswrapper[4755]: I1210 15:38:42.073331 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/42436916-b7c2-4531-ada8-a5590d158fe9-util\") pod \"147d6c35b26de94843aae2cc16def28bc6b9292bfcf7a2079ec0c049657wkgl\" (UID: \"42436916-b7c2-4531-ada8-a5590d158fe9\") " pod="openstack-operators/147d6c35b26de94843aae2cc16def28bc6b9292bfcf7a2079ec0c049657wkgl" Dec 10 15:38:42 crc kubenswrapper[4755]: I1210 15:38:42.073537 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/42436916-b7c2-4531-ada8-a5590d158fe9-bundle\") pod \"147d6c35b26de94843aae2cc16def28bc6b9292bfcf7a2079ec0c049657wkgl\" (UID: \"42436916-b7c2-4531-ada8-a5590d158fe9\") " pod="openstack-operators/147d6c35b26de94843aae2cc16def28bc6b9292bfcf7a2079ec0c049657wkgl" Dec 10 15:38:42 crc kubenswrapper[4755]: I1210 15:38:42.097623 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwggg\" (UniqueName: \"kubernetes.io/projected/42436916-b7c2-4531-ada8-a5590d158fe9-kube-api-access-nwggg\") pod \"147d6c35b26de94843aae2cc16def28bc6b9292bfcf7a2079ec0c049657wkgl\" (UID: \"42436916-b7c2-4531-ada8-a5590d158fe9\") " pod="openstack-operators/147d6c35b26de94843aae2cc16def28bc6b9292bfcf7a2079ec0c049657wkgl" Dec 10 15:38:42 crc kubenswrapper[4755]: I1210 15:38:42.194900 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/147d6c35b26de94843aae2cc16def28bc6b9292bfcf7a2079ec0c049657wkgl" Dec 10 15:38:42 crc kubenswrapper[4755]: I1210 15:38:42.506055 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/147d6c35b26de94843aae2cc16def28bc6b9292bfcf7a2079ec0c049657wkgl"] Dec 10 15:38:43 crc kubenswrapper[4755]: I1210 15:38:43.309191 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/147d6c35b26de94843aae2cc16def28bc6b9292bfcf7a2079ec0c049657wkgl" event={"ID":"42436916-b7c2-4531-ada8-a5590d158fe9","Type":"ContainerStarted","Data":"23ba15e67b12ed78da3d350b8014991e8fdaadce8d8acfbd03087eb95bdd8c2f"} Dec 10 15:38:45 crc kubenswrapper[4755]: I1210 15:38:45.322495 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/147d6c35b26de94843aae2cc16def28bc6b9292bfcf7a2079ec0c049657wkgl" event={"ID":"42436916-b7c2-4531-ada8-a5590d158fe9","Type":"ContainerStarted","Data":"c2feb95b21c80c84015636ad1c9970073240fb787d5c2e27ea23fb8f72ce42ac"} Dec 10 15:38:45 crc kubenswrapper[4755]: E1210 15:38:45.641734 4755 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42436916_b7c2_4531_ada8_a5590d158fe9.slice/crio-conmon-c2feb95b21c80c84015636ad1c9970073240fb787d5c2e27ea23fb8f72ce42ac.scope\": RecentStats: unable to find data in memory cache]" Dec 10 15:38:46 crc kubenswrapper[4755]: I1210 15:38:46.330390 4755 generic.go:334] "Generic (PLEG): container finished" podID="42436916-b7c2-4531-ada8-a5590d158fe9" containerID="c2feb95b21c80c84015636ad1c9970073240fb787d5c2e27ea23fb8f72ce42ac" exitCode=0 Dec 10 15:38:46 crc kubenswrapper[4755]: I1210 15:38:46.330448 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/147d6c35b26de94843aae2cc16def28bc6b9292bfcf7a2079ec0c049657wkgl" event={"ID":"42436916-b7c2-4531-ada8-a5590d158fe9","Type":"ContainerDied","Data":"c2feb95b21c80c84015636ad1c9970073240fb787d5c2e27ea23fb8f72ce42ac"} Dec 10 15:38:46 crc kubenswrapper[4755]: I1210 15:38:46.332019 4755 provider.go:102] Refreshing cache for provider: 
*credentialprovider.defaultDockerConfigProvider Dec 10 15:38:48 crc kubenswrapper[4755]: I1210 15:38:48.344050 4755 generic.go:334] "Generic (PLEG): container finished" podID="42436916-b7c2-4531-ada8-a5590d158fe9" containerID="2ac81670780693194ab4cc2ad5f567ba63d09c7fec675e61fe550dae24154e6f" exitCode=0 Dec 10 15:38:48 crc kubenswrapper[4755]: I1210 15:38:48.344596 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/147d6c35b26de94843aae2cc16def28bc6b9292bfcf7a2079ec0c049657wkgl" event={"ID":"42436916-b7c2-4531-ada8-a5590d158fe9","Type":"ContainerDied","Data":"2ac81670780693194ab4cc2ad5f567ba63d09c7fec675e61fe550dae24154e6f"} Dec 10 15:38:49 crc kubenswrapper[4755]: I1210 15:38:49.364229 4755 generic.go:334] "Generic (PLEG): container finished" podID="42436916-b7c2-4531-ada8-a5590d158fe9" containerID="f49162f4cc3e0fab9ce37f5aa00fbfc73e97124c0dcc78098410a3e584911ba4" exitCode=0 Dec 10 15:38:49 crc kubenswrapper[4755]: I1210 15:38:49.364326 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/147d6c35b26de94843aae2cc16def28bc6b9292bfcf7a2079ec0c049657wkgl" event={"ID":"42436916-b7c2-4531-ada8-a5590d158fe9","Type":"ContainerDied","Data":"f49162f4cc3e0fab9ce37f5aa00fbfc73e97124c0dcc78098410a3e584911ba4"} Dec 10 15:38:50 crc kubenswrapper[4755]: I1210 15:38:50.673768 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/147d6c35b26de94843aae2cc16def28bc6b9292bfcf7a2079ec0c049657wkgl" Dec 10 15:38:50 crc kubenswrapper[4755]: I1210 15:38:50.805908 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/42436916-b7c2-4531-ada8-a5590d158fe9-util\") pod \"42436916-b7c2-4531-ada8-a5590d158fe9\" (UID: \"42436916-b7c2-4531-ada8-a5590d158fe9\") " Dec 10 15:38:50 crc kubenswrapper[4755]: I1210 15:38:50.805959 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/42436916-b7c2-4531-ada8-a5590d158fe9-bundle\") pod \"42436916-b7c2-4531-ada8-a5590d158fe9\" (UID: \"42436916-b7c2-4531-ada8-a5590d158fe9\") " Dec 10 15:38:50 crc kubenswrapper[4755]: I1210 15:38:50.806005 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwggg\" (UniqueName: \"kubernetes.io/projected/42436916-b7c2-4531-ada8-a5590d158fe9-kube-api-access-nwggg\") pod \"42436916-b7c2-4531-ada8-a5590d158fe9\" (UID: \"42436916-b7c2-4531-ada8-a5590d158fe9\") " Dec 10 15:38:50 crc kubenswrapper[4755]: I1210 15:38:50.807162 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42436916-b7c2-4531-ada8-a5590d158fe9-bundle" (OuterVolumeSpecName: "bundle") pod "42436916-b7c2-4531-ada8-a5590d158fe9" (UID: "42436916-b7c2-4531-ada8-a5590d158fe9"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:38:50 crc kubenswrapper[4755]: I1210 15:38:50.812669 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42436916-b7c2-4531-ada8-a5590d158fe9-kube-api-access-nwggg" (OuterVolumeSpecName: "kube-api-access-nwggg") pod "42436916-b7c2-4531-ada8-a5590d158fe9" (UID: "42436916-b7c2-4531-ada8-a5590d158fe9"). InnerVolumeSpecName "kube-api-access-nwggg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:38:50 crc kubenswrapper[4755]: I1210 15:38:50.816742 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42436916-b7c2-4531-ada8-a5590d158fe9-util" (OuterVolumeSpecName: "util") pod "42436916-b7c2-4531-ada8-a5590d158fe9" (UID: "42436916-b7c2-4531-ada8-a5590d158fe9"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:38:50 crc kubenswrapper[4755]: I1210 15:38:50.907663 4755 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/42436916-b7c2-4531-ada8-a5590d158fe9-util\") on node \"crc\" DevicePath \"\"" Dec 10 15:38:50 crc kubenswrapper[4755]: I1210 15:38:50.907709 4755 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/42436916-b7c2-4531-ada8-a5590d158fe9-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:38:50 crc kubenswrapper[4755]: I1210 15:38:50.907722 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwggg\" (UniqueName: \"kubernetes.io/projected/42436916-b7c2-4531-ada8-a5590d158fe9-kube-api-access-nwggg\") on node \"crc\" DevicePath \"\"" Dec 10 15:38:51 crc kubenswrapper[4755]: I1210 15:38:51.382143 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/147d6c35b26de94843aae2cc16def28bc6b9292bfcf7a2079ec0c049657wkgl" event={"ID":"42436916-b7c2-4531-ada8-a5590d158fe9","Type":"ContainerDied","Data":"23ba15e67b12ed78da3d350b8014991e8fdaadce8d8acfbd03087eb95bdd8c2f"} Dec 10 15:38:51 crc kubenswrapper[4755]: I1210 15:38:51.382186 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23ba15e67b12ed78da3d350b8014991e8fdaadce8d8acfbd03087eb95bdd8c2f" Dec 10 15:38:51 crc kubenswrapper[4755]: I1210 15:38:51.382264 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/147d6c35b26de94843aae2cc16def28bc6b9292bfcf7a2079ec0c049657wkgl" Dec 10 15:38:54 crc kubenswrapper[4755]: I1210 15:38:54.141700 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-6899b76-8jwvz"] Dec 10 15:38:54 crc kubenswrapper[4755]: E1210 15:38:54.142272 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42436916-b7c2-4531-ada8-a5590d158fe9" containerName="extract" Dec 10 15:38:54 crc kubenswrapper[4755]: I1210 15:38:54.142288 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="42436916-b7c2-4531-ada8-a5590d158fe9" containerName="extract" Dec 10 15:38:54 crc kubenswrapper[4755]: E1210 15:38:54.142304 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42436916-b7c2-4531-ada8-a5590d158fe9" containerName="util" Dec 10 15:38:54 crc kubenswrapper[4755]: I1210 15:38:54.142312 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="42436916-b7c2-4531-ada8-a5590d158fe9" containerName="util" Dec 10 15:38:54 crc kubenswrapper[4755]: E1210 15:38:54.142341 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42436916-b7c2-4531-ada8-a5590d158fe9" containerName="pull" Dec 10 15:38:54 crc kubenswrapper[4755]: I1210 15:38:54.142350 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="42436916-b7c2-4531-ada8-a5590d158fe9" containerName="pull" Dec 10 15:38:54 crc kubenswrapper[4755]: I1210 15:38:54.142517 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="42436916-b7c2-4531-ada8-a5590d158fe9" containerName="extract" Dec 10 15:38:54 crc kubenswrapper[4755]: I1210 15:38:54.143010 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-6899b76-8jwvz" Dec 10 15:38:54 crc kubenswrapper[4755]: I1210 15:38:54.145775 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-gqm4r" Dec 10 15:38:54 crc kubenswrapper[4755]: I1210 15:38:54.164876 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-6899b76-8jwvz"] Dec 10 15:38:54 crc kubenswrapper[4755]: I1210 15:38:54.250060 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcfqb\" (UniqueName: \"kubernetes.io/projected/4b263601-3c4c-48b8-a169-7e3caaa77be2-kube-api-access-lcfqb\") pod \"openstack-operator-controller-operator-6899b76-8jwvz\" (UID: \"4b263601-3c4c-48b8-a169-7e3caaa77be2\") " pod="openstack-operators/openstack-operator-controller-operator-6899b76-8jwvz" Dec 10 15:38:54 crc kubenswrapper[4755]: I1210 15:38:54.351910 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcfqb\" (UniqueName: \"kubernetes.io/projected/4b263601-3c4c-48b8-a169-7e3caaa77be2-kube-api-access-lcfqb\") pod \"openstack-operator-controller-operator-6899b76-8jwvz\" (UID: \"4b263601-3c4c-48b8-a169-7e3caaa77be2\") " pod="openstack-operators/openstack-operator-controller-operator-6899b76-8jwvz" Dec 10 15:38:54 crc kubenswrapper[4755]: I1210 15:38:54.376861 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcfqb\" (UniqueName: \"kubernetes.io/projected/4b263601-3c4c-48b8-a169-7e3caaa77be2-kube-api-access-lcfqb\") pod \"openstack-operator-controller-operator-6899b76-8jwvz\" (UID: 
\"4b263601-3c4c-48b8-a169-7e3caaa77be2\") " pod="openstack-operators/openstack-operator-controller-operator-6899b76-8jwvz" Dec 10 15:38:54 crc kubenswrapper[4755]: I1210 15:38:54.465856 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-6899b76-8jwvz" Dec 10 15:38:54 crc kubenswrapper[4755]: I1210 15:38:54.729980 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-6899b76-8jwvz"] Dec 10 15:38:55 crc kubenswrapper[4755]: I1210 15:38:55.417099 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-6899b76-8jwvz" event={"ID":"4b263601-3c4c-48b8-a169-7e3caaa77be2","Type":"ContainerStarted","Data":"eb735932a6ae34ee0da796f29a0e35c08f68353876f95a09796fc1465c2897ea"} Dec 10 15:38:58 crc kubenswrapper[4755]: I1210 15:38:58.673027 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rggft"] Dec 10 15:38:58 crc kubenswrapper[4755]: I1210 15:38:58.674858 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rggft" Dec 10 15:38:58 crc kubenswrapper[4755]: I1210 15:38:58.685422 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rggft"] Dec 10 15:38:58 crc kubenswrapper[4755]: I1210 15:38:58.824229 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/802c63e3-5c53-470a-8724-096638173db6-utilities\") pod \"community-operators-rggft\" (UID: \"802c63e3-5c53-470a-8724-096638173db6\") " pod="openshift-marketplace/community-operators-rggft" Dec 10 15:38:58 crc kubenswrapper[4755]: I1210 15:38:58.824306 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lww5k\" (UniqueName: \"kubernetes.io/projected/802c63e3-5c53-470a-8724-096638173db6-kube-api-access-lww5k\") pod \"community-operators-rggft\" (UID: \"802c63e3-5c53-470a-8724-096638173db6\") " pod="openshift-marketplace/community-operators-rggft" Dec 10 15:38:58 crc kubenswrapper[4755]: I1210 15:38:58.824347 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/802c63e3-5c53-470a-8724-096638173db6-catalog-content\") pod \"community-operators-rggft\" (UID: \"802c63e3-5c53-470a-8724-096638173db6\") " pod="openshift-marketplace/community-operators-rggft" Dec 10 15:38:58 crc kubenswrapper[4755]: I1210 15:38:58.926242 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/802c63e3-5c53-470a-8724-096638173db6-utilities\") pod \"community-operators-rggft\" (UID: \"802c63e3-5c53-470a-8724-096638173db6\") " pod="openshift-marketplace/community-operators-rggft" Dec 10 15:38:58 crc kubenswrapper[4755]: I1210 15:38:58.926320 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lww5k\" (UniqueName: \"kubernetes.io/projected/802c63e3-5c53-470a-8724-096638173db6-kube-api-access-lww5k\") pod \"community-operators-rggft\" (UID: \"802c63e3-5c53-470a-8724-096638173db6\") " pod="openshift-marketplace/community-operators-rggft" Dec 10 15:38:58 crc kubenswrapper[4755]: I1210 15:38:58.926352 4755 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/802c63e3-5c53-470a-8724-096638173db6-catalog-content\") pod \"community-operators-rggft\" (UID: \"802c63e3-5c53-470a-8724-096638173db6\") " pod="openshift-marketplace/community-operators-rggft" Dec 10 15:38:58 crc kubenswrapper[4755]: I1210 15:38:58.926885 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/802c63e3-5c53-470a-8724-096638173db6-utilities\") pod \"community-operators-rggft\" (UID: \"802c63e3-5c53-470a-8724-096638173db6\") " pod="openshift-marketplace/community-operators-rggft" Dec 10 15:38:58 crc kubenswrapper[4755]: I1210 15:38:58.926934 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/802c63e3-5c53-470a-8724-096638173db6-catalog-content\") pod \"community-operators-rggft\" (UID: \"802c63e3-5c53-470a-8724-096638173db6\") " pod="openshift-marketplace/community-operators-rggft" Dec 10 15:38:58 crc kubenswrapper[4755]: I1210 15:38:58.956874 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lww5k\" (UniqueName: \"kubernetes.io/projected/802c63e3-5c53-470a-8724-096638173db6-kube-api-access-lww5k\") pod \"community-operators-rggft\" (UID: \"802c63e3-5c53-470a-8724-096638173db6\") " pod="openshift-marketplace/community-operators-rggft" Dec 10 15:38:59 crc kubenswrapper[4755]: I1210 15:38:59.000432 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rggft" Dec 10 15:39:00 crc kubenswrapper[4755]: I1210 15:39:00.337341 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rggft"] Dec 10 15:39:00 crc kubenswrapper[4755]: W1210 15:39:00.344022 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod802c63e3_5c53_470a_8724_096638173db6.slice/crio-1a8483420848c617db433a2a1651815f1b090332159457429bd4a2d01d354b42 WatchSource:0}: Error finding container 1a8483420848c617db433a2a1651815f1b090332159457429bd4a2d01d354b42: Status 404 returned error can't find the container with id 1a8483420848c617db433a2a1651815f1b090332159457429bd4a2d01d354b42 Dec 10 15:39:00 crc kubenswrapper[4755]: I1210 15:39:00.452808 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rggft" event={"ID":"802c63e3-5c53-470a-8724-096638173db6","Type":"ContainerStarted","Data":"1a8483420848c617db433a2a1651815f1b090332159457429bd4a2d01d354b42"} Dec 10 15:39:00 crc kubenswrapper[4755]: I1210 15:39:00.454973 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-6899b76-8jwvz" event={"ID":"4b263601-3c4c-48b8-a169-7e3caaa77be2","Type":"ContainerStarted","Data":"b897654f11315b517b7e2eeddf096a19a6b820a383360fa8cedeee459888923d"} Dec 10 15:39:00 crc kubenswrapper[4755]: I1210 15:39:00.455184 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-6899b76-8jwvz" Dec 10 15:39:00 crc kubenswrapper[4755]: I1210 15:39:00.489677 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-6899b76-8jwvz" podStartSLOduration=1.060662837 
podStartE2EDuration="6.489659702s" podCreationTimestamp="2025-12-10 15:38:54 +0000 UTC" firstStartedPulling="2025-12-10 15:38:54.733413257 +0000 UTC m=+931.334296889" lastFinishedPulling="2025-12-10 15:39:00.162410112 +0000 UTC m=+936.763293754" observedRunningTime="2025-12-10 15:39:00.487733359 +0000 UTC m=+937.088617011" watchObservedRunningTime="2025-12-10 15:39:00.489659702 +0000 UTC m=+937.090543334" Dec 10 15:39:01 crc kubenswrapper[4755]: I1210 15:39:01.463751 4755 generic.go:334] "Generic (PLEG): container finished" podID="802c63e3-5c53-470a-8724-096638173db6" containerID="cc9cb7616713fb2b4e78ed5022e05f68e9ee2c4e250ca5403e0e50d2c30df2be" exitCode=0 Dec 10 15:39:01 crc kubenswrapper[4755]: I1210 15:39:01.463855 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rggft" event={"ID":"802c63e3-5c53-470a-8724-096638173db6","Type":"ContainerDied","Data":"cc9cb7616713fb2b4e78ed5022e05f68e9ee2c4e250ca5403e0e50d2c30df2be"} Dec 10 15:39:02 crc kubenswrapper[4755]: I1210 15:39:02.473125 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rggft" event={"ID":"802c63e3-5c53-470a-8724-096638173db6","Type":"ContainerStarted","Data":"9d8cc9308ac947d1b6de9176a562592c5224c1ceafe8bf48dc7c937d085f6527"} Dec 10 15:39:03 crc kubenswrapper[4755]: I1210 15:39:03.481965 4755 generic.go:334] "Generic (PLEG): container finished" podID="802c63e3-5c53-470a-8724-096638173db6" containerID="9d8cc9308ac947d1b6de9176a562592c5224c1ceafe8bf48dc7c937d085f6527" exitCode=0 Dec 10 15:39:03 crc kubenswrapper[4755]: I1210 15:39:03.482072 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rggft" event={"ID":"802c63e3-5c53-470a-8724-096638173db6","Type":"ContainerDied","Data":"9d8cc9308ac947d1b6de9176a562592c5224c1ceafe8bf48dc7c937d085f6527"} Dec 10 15:39:04 crc kubenswrapper[4755]: I1210 15:39:04.491500 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rggft" event={"ID":"802c63e3-5c53-470a-8724-096638173db6","Type":"ContainerStarted","Data":"308238df3390ed53c515bd65b923ebb2e994b2db591654ffa7c5528c5478ad8f"} Dec 10 15:39:04 crc kubenswrapper[4755]: I1210 15:39:04.514021 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rggft" podStartSLOduration=3.781590911 podStartE2EDuration="6.514002722s" podCreationTimestamp="2025-12-10 15:38:58 +0000 UTC" firstStartedPulling="2025-12-10 15:39:01.466724533 +0000 UTC m=+938.067608165" lastFinishedPulling="2025-12-10 15:39:04.199136344 +0000 UTC m=+940.800019976" observedRunningTime="2025-12-10 15:39:04.507314558 +0000 UTC m=+941.108198190" watchObservedRunningTime="2025-12-10 15:39:04.514002722 +0000 UTC m=+941.114886354" Dec 10 15:39:09 crc kubenswrapper[4755]: I1210 15:39:09.000937 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rggft" Dec 10 15:39:09 crc kubenswrapper[4755]: I1210 15:39:09.001522 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rggft" Dec 10 15:39:09 crc kubenswrapper[4755]: I1210 15:39:09.041762 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rggft" Dec 10 15:39:09 crc kubenswrapper[4755]: I1210 15:39:09.608383 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/community-operators-rggft" Dec 10 15:39:10 crc kubenswrapper[4755]: I1210 15:39:10.359447 4755 patch_prober.go:28] interesting pod/machine-config-daemon-ggt8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:39:10 crc kubenswrapper[4755]: I1210 15:39:10.359584 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:39:11 crc kubenswrapper[4755]: I1210 15:39:11.457840 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rggft"] Dec 10 15:39:11 crc kubenswrapper[4755]: I1210 15:39:11.544091 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rggft" podUID="802c63e3-5c53-470a-8724-096638173db6" containerName="registry-server" containerID="cri-o://308238df3390ed53c515bd65b923ebb2e994b2db591654ffa7c5528c5478ad8f" gracePeriod=2 Dec 10 15:39:13 crc kubenswrapper[4755]: I1210 15:39:13.556621 4755 generic.go:334] "Generic (PLEG): container finished" podID="802c63e3-5c53-470a-8724-096638173db6" containerID="308238df3390ed53c515bd65b923ebb2e994b2db591654ffa7c5528c5478ad8f" exitCode=0 Dec 10 15:39:13 crc kubenswrapper[4755]: I1210 15:39:13.556696 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rggft" event={"ID":"802c63e3-5c53-470a-8724-096638173db6","Type":"ContainerDied","Data":"308238df3390ed53c515bd65b923ebb2e994b2db591654ffa7c5528c5478ad8f"} Dec 10 15:39:14 crc kubenswrapper[4755]: I1210 15:39:14.469720 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-6899b76-8jwvz" Dec 10 15:39:15 crc kubenswrapper[4755]: I1210 15:39:15.002126 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rggft" Dec 10 15:39:15 crc kubenswrapper[4755]: I1210 15:39:15.093788 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pml9v"] Dec 10 15:39:15 crc kubenswrapper[4755]: E1210 15:39:15.094526 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="802c63e3-5c53-470a-8724-096638173db6" containerName="extract-content" Dec 10 15:39:15 crc kubenswrapper[4755]: I1210 15:39:15.094551 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="802c63e3-5c53-470a-8724-096638173db6" containerName="extract-content" Dec 10 15:39:15 crc kubenswrapper[4755]: E1210 15:39:15.094575 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="802c63e3-5c53-470a-8724-096638173db6" containerName="registry-server" Dec 10 15:39:15 crc kubenswrapper[4755]: I1210 15:39:15.094585 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="802c63e3-5c53-470a-8724-096638173db6" containerName="registry-server" Dec 10 15:39:15 crc kubenswrapper[4755]: E1210 15:39:15.094626 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="802c63e3-5c53-470a-8724-096638173db6" containerName="extract-utilities" Dec 10 15:39:15 crc kubenswrapper[4755]: I1210 15:39:15.094636 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="802c63e3-5c53-470a-8724-096638173db6" containerName="extract-utilities" Dec 10 15:39:15 crc kubenswrapper[4755]: I1210 15:39:15.094912 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="802c63e3-5c53-470a-8724-096638173db6" containerName="registry-server" Dec 10 15:39:15 crc kubenswrapper[4755]: I1210 15:39:15.097344 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pml9v" Dec 10 15:39:15 crc kubenswrapper[4755]: I1210 15:39:15.104270 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pml9v"] Dec 10 15:39:15 crc kubenswrapper[4755]: I1210 15:39:15.152437 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/802c63e3-5c53-470a-8724-096638173db6-catalog-content\") pod \"802c63e3-5c53-470a-8724-096638173db6\" (UID: \"802c63e3-5c53-470a-8724-096638173db6\") " Dec 10 15:39:15 crc kubenswrapper[4755]: I1210 15:39:15.152569 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/802c63e3-5c53-470a-8724-096638173db6-utilities\") pod \"802c63e3-5c53-470a-8724-096638173db6\" (UID: \"802c63e3-5c53-470a-8724-096638173db6\") " Dec 10 15:39:15 crc kubenswrapper[4755]: I1210 15:39:15.152908 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lww5k\" (UniqueName: \"kubernetes.io/projected/802c63e3-5c53-470a-8724-096638173db6-kube-api-access-lww5k\") pod \"802c63e3-5c53-470a-8724-096638173db6\" (UID: \"802c63e3-5c53-470a-8724-096638173db6\") " Dec 10 15:39:15 crc kubenswrapper[4755]: I1210 15:39:15.153223 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtnfl\" (UniqueName: \"kubernetes.io/projected/8d7211d9-0130-439e-aafd-43c30c1405c2-kube-api-access-gtnfl\") pod \"redhat-marketplace-pml9v\" (UID: \"8d7211d9-0130-439e-aafd-43c30c1405c2\") " pod="openshift-marketplace/redhat-marketplace-pml9v" Dec 10 15:39:15 crc kubenswrapper[4755]: I1210 
15:39:15.153259 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d7211d9-0130-439e-aafd-43c30c1405c2-utilities\") pod \"redhat-marketplace-pml9v\" (UID: \"8d7211d9-0130-439e-aafd-43c30c1405c2\") " pod="openshift-marketplace/redhat-marketplace-pml9v" Dec 10 15:39:15 crc kubenswrapper[4755]: I1210 15:39:15.153343 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d7211d9-0130-439e-aafd-43c30c1405c2-catalog-content\") pod \"redhat-marketplace-pml9v\" (UID: \"8d7211d9-0130-439e-aafd-43c30c1405c2\") " pod="openshift-marketplace/redhat-marketplace-pml9v" Dec 10 15:39:15 crc kubenswrapper[4755]: I1210 15:39:15.153412 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/802c63e3-5c53-470a-8724-096638173db6-utilities" (OuterVolumeSpecName: "utilities") pod "802c63e3-5c53-470a-8724-096638173db6" (UID: "802c63e3-5c53-470a-8724-096638173db6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:39:15 crc kubenswrapper[4755]: I1210 15:39:15.153523 4755 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/802c63e3-5c53-470a-8724-096638173db6-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 15:39:15 crc kubenswrapper[4755]: I1210 15:39:15.159660 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/802c63e3-5c53-470a-8724-096638173db6-kube-api-access-lww5k" (OuterVolumeSpecName: "kube-api-access-lww5k") pod "802c63e3-5c53-470a-8724-096638173db6" (UID: "802c63e3-5c53-470a-8724-096638173db6"). InnerVolumeSpecName "kube-api-access-lww5k". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:39:15 crc kubenswrapper[4755]: I1210 15:39:15.215990 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/802c63e3-5c53-470a-8724-096638173db6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "802c63e3-5c53-470a-8724-096638173db6" (UID: "802c63e3-5c53-470a-8724-096638173db6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:39:15 crc kubenswrapper[4755]: I1210 15:39:15.254544 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d7211d9-0130-439e-aafd-43c30c1405c2-catalog-content\") pod \"redhat-marketplace-pml9v\" (UID: \"8d7211d9-0130-439e-aafd-43c30c1405c2\") " pod="openshift-marketplace/redhat-marketplace-pml9v" Dec 10 15:39:15 crc kubenswrapper[4755]: I1210 15:39:15.254673 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtnfl\" (UniqueName: \"kubernetes.io/projected/8d7211d9-0130-439e-aafd-43c30c1405c2-kube-api-access-gtnfl\") pod \"redhat-marketplace-pml9v\" (UID: \"8d7211d9-0130-439e-aafd-43c30c1405c2\") " pod="openshift-marketplace/redhat-marketplace-pml9v" Dec 10 15:39:15 crc kubenswrapper[4755]: I1210 15:39:15.254702 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d7211d9-0130-439e-aafd-43c30c1405c2-utilities\") pod \"redhat-marketplace-pml9v\" (UID: \"8d7211d9-0130-439e-aafd-43c30c1405c2\") " pod="openshift-marketplace/redhat-marketplace-pml9v" Dec 10 15:39:15 crc kubenswrapper[4755]: I1210 15:39:15.254742 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lww5k\" (UniqueName: \"kubernetes.io/projected/802c63e3-5c53-470a-8724-096638173db6-kube-api-access-lww5k\") on node \"crc\" DevicePath \"\"" Dec 10 15:39:15 crc kubenswrapper[4755]: I1210 15:39:15.254754 4755 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/802c63e3-5c53-470a-8724-096638173db6-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 15:39:15 crc kubenswrapper[4755]: I1210 15:39:15.255044 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d7211d9-0130-439e-aafd-43c30c1405c2-catalog-content\") pod \"redhat-marketplace-pml9v\" (UID: \"8d7211d9-0130-439e-aafd-43c30c1405c2\") " pod="openshift-marketplace/redhat-marketplace-pml9v" Dec 10 15:39:15 crc kubenswrapper[4755]: I1210 15:39:15.255096 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d7211d9-0130-439e-aafd-43c30c1405c2-utilities\") pod \"redhat-marketplace-pml9v\" (UID: \"8d7211d9-0130-439e-aafd-43c30c1405c2\") " pod="openshift-marketplace/redhat-marketplace-pml9v" Dec 10 15:39:15 crc kubenswrapper[4755]: I1210 15:39:15.278970 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtnfl\" (UniqueName: \"kubernetes.io/projected/8d7211d9-0130-439e-aafd-43c30c1405c2-kube-api-access-gtnfl\") pod \"redhat-marketplace-pml9v\" (UID: \"8d7211d9-0130-439e-aafd-43c30c1405c2\") " pod="openshift-marketplace/redhat-marketplace-pml9v" Dec 10 15:39:15 crc kubenswrapper[4755]: I1210 15:39:15.412047 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pml9v" Dec 10 15:39:15 crc kubenswrapper[4755]: I1210 15:39:15.570318 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rggft" event={"ID":"802c63e3-5c53-470a-8724-096638173db6","Type":"ContainerDied","Data":"1a8483420848c617db433a2a1651815f1b090332159457429bd4a2d01d354b42"} Dec 10 15:39:15 crc kubenswrapper[4755]: I1210 15:39:15.570371 4755 scope.go:117] "RemoveContainer" containerID="308238df3390ed53c515bd65b923ebb2e994b2db591654ffa7c5528c5478ad8f" Dec 10 15:39:15 crc kubenswrapper[4755]: I1210 15:39:15.570399 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rggft" Dec 10 15:39:15 crc kubenswrapper[4755]: I1210 15:39:15.594119 4755 scope.go:117] "RemoveContainer" containerID="9d8cc9308ac947d1b6de9176a562592c5224c1ceafe8bf48dc7c937d085f6527" Dec 10 15:39:15 crc kubenswrapper[4755]: I1210 15:39:15.600792 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rggft"] Dec 10 15:39:15 crc kubenswrapper[4755]: I1210 15:39:15.612208 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rggft"] Dec 10 15:39:15 crc kubenswrapper[4755]: I1210 15:39:15.618625 4755 scope.go:117] "RemoveContainer" containerID="cc9cb7616713fb2b4e78ed5022e05f68e9ee2c4e250ca5403e0e50d2c30df2be" Dec 10 15:39:15 crc kubenswrapper[4755]: I1210 15:39:15.765932 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="802c63e3-5c53-470a-8724-096638173db6" path="/var/lib/kubelet/pods/802c63e3-5c53-470a-8724-096638173db6/volumes" Dec 10 15:39:15 crc kubenswrapper[4755]: I1210 15:39:15.892532 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pml9v"] Dec 10 15:39:15 crc kubenswrapper[4755]: W1210 15:39:15.907922 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d7211d9_0130_439e_aafd_43c30c1405c2.slice/crio-2d5d5f92414479c6d7094efa480bc5daf7e95b3f3654f5603bb7a7104c9bc0a4 WatchSource:0}: Error finding container 2d5d5f92414479c6d7094efa480bc5daf7e95b3f3654f5603bb7a7104c9bc0a4: Status 404 returned error can't find the container with id 2d5d5f92414479c6d7094efa480bc5daf7e95b3f3654f5603bb7a7104c9bc0a4 Dec 10 15:39:16 crc kubenswrapper[4755]: E1210 15:39:16.198204 4755 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d7211d9_0130_439e_aafd_43c30c1405c2.slice/crio-9d80a05030a45d0acff51a6137667f3fb9ba1f02dfe86a25ea25c37acdea9b74.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d7211d9_0130_439e_aafd_43c30c1405c2.slice/crio-conmon-9d80a05030a45d0acff51a6137667f3fb9ba1f02dfe86a25ea25c37acdea9b74.scope\": RecentStats: unable to find data in memory cache]" Dec 10 15:39:16 crc kubenswrapper[4755]: I1210 15:39:16.578497 4755 generic.go:334] "Generic (PLEG): container finished" podID="8d7211d9-0130-439e-aafd-43c30c1405c2" containerID="9d80a05030a45d0acff51a6137667f3fb9ba1f02dfe86a25ea25c37acdea9b74" exitCode=0 Dec 10 15:39:16 crc kubenswrapper[4755]: I1210 15:39:16.578538 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pml9v" 
event={"ID":"8d7211d9-0130-439e-aafd-43c30c1405c2","Type":"ContainerDied","Data":"9d80a05030a45d0acff51a6137667f3fb9ba1f02dfe86a25ea25c37acdea9b74"} Dec 10 15:39:16 crc kubenswrapper[4755]: I1210 15:39:16.578578 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pml9v" event={"ID":"8d7211d9-0130-439e-aafd-43c30c1405c2","Type":"ContainerStarted","Data":"2d5d5f92414479c6d7094efa480bc5daf7e95b3f3654f5603bb7a7104c9bc0a4"} Dec 10 15:39:17 crc kubenswrapper[4755]: I1210 15:39:17.588182 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pml9v" event={"ID":"8d7211d9-0130-439e-aafd-43c30c1405c2","Type":"ContainerStarted","Data":"fcad98ff657bdb1cf52c0546d02f811f20f607a39b536482751c0a0ed89defc5"} Dec 10 15:39:18 crc kubenswrapper[4755]: I1210 15:39:18.598318 4755 generic.go:334] "Generic (PLEG): container finished" podID="8d7211d9-0130-439e-aafd-43c30c1405c2" containerID="fcad98ff657bdb1cf52c0546d02f811f20f607a39b536482751c0a0ed89defc5" exitCode=0 Dec 10 15:39:18 crc kubenswrapper[4755]: I1210 15:39:18.598668 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pml9v" event={"ID":"8d7211d9-0130-439e-aafd-43c30c1405c2","Type":"ContainerDied","Data":"fcad98ff657bdb1cf52c0546d02f811f20f607a39b536482751c0a0ed89defc5"} Dec 10 15:39:21 crc kubenswrapper[4755]: I1210 15:39:21.620178 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pml9v" event={"ID":"8d7211d9-0130-439e-aafd-43c30c1405c2","Type":"ContainerStarted","Data":"b572e8e49da826c54c4649845b932644ae8264fd9b0c19da43574e4ed808a0dc"} Dec 10 15:39:21 crc kubenswrapper[4755]: I1210 15:39:21.646954 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pml9v" podStartSLOduration=2.513294542 podStartE2EDuration="6.646933364s" podCreationTimestamp="2025-12-10 15:39:15 +0000 UTC" firstStartedPulling="2025-12-10 15:39:16.580006488 +0000 UTC m=+953.180890120" lastFinishedPulling="2025-12-10 15:39:20.71364531 +0000 UTC m=+957.314528942" observedRunningTime="2025-12-10 15:39:21.641814053 +0000 UTC m=+958.242697705" watchObservedRunningTime="2025-12-10 15:39:21.646933364 +0000 UTC m=+958.247816996" Dec 10 15:39:25 crc kubenswrapper[4755]: I1210 15:39:25.413169 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pml9v" Dec 10 15:39:25 crc kubenswrapper[4755]: I1210 15:39:25.413729 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pml9v" Dec 10 15:39:25 crc kubenswrapper[4755]: I1210 15:39:25.461755 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pml9v" Dec 10 15:39:25 crc kubenswrapper[4755]: I1210 15:39:25.689312 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pml9v" Dec 10 15:39:27 crc kubenswrapper[4755]: I1210 15:39:27.859907 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pml9v"] Dec 10 15:39:27 crc kubenswrapper[4755]: I1210 15:39:27.860449 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-pml9v" podUID="8d7211d9-0130-439e-aafd-43c30c1405c2" containerName="registry-server" 
containerID="cri-o://b572e8e49da826c54c4649845b932644ae8264fd9b0c19da43574e4ed808a0dc" gracePeriod=2 Dec 10 15:39:28 crc kubenswrapper[4755]: I1210 15:39:28.681006 4755 generic.go:334] "Generic (PLEG): container finished" podID="8d7211d9-0130-439e-aafd-43c30c1405c2" containerID="b572e8e49da826c54c4649845b932644ae8264fd9b0c19da43574e4ed808a0dc" exitCode=0 Dec 10 15:39:28 crc kubenswrapper[4755]: I1210 15:39:28.681037 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pml9v" event={"ID":"8d7211d9-0130-439e-aafd-43c30c1405c2","Type":"ContainerDied","Data":"b572e8e49da826c54c4649845b932644ae8264fd9b0c19da43574e4ed808a0dc"} Dec 10 15:39:28 crc kubenswrapper[4755]: I1210 15:39:28.769869 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pml9v" Dec 10 15:39:28 crc kubenswrapper[4755]: I1210 15:39:28.942742 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d7211d9-0130-439e-aafd-43c30c1405c2-utilities\") pod \"8d7211d9-0130-439e-aafd-43c30c1405c2\" (UID: \"8d7211d9-0130-439e-aafd-43c30c1405c2\") " Dec 10 15:39:28 crc kubenswrapper[4755]: I1210 15:39:28.942800 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtnfl\" (UniqueName: \"kubernetes.io/projected/8d7211d9-0130-439e-aafd-43c30c1405c2-kube-api-access-gtnfl\") pod \"8d7211d9-0130-439e-aafd-43c30c1405c2\" (UID: \"8d7211d9-0130-439e-aafd-43c30c1405c2\") " Dec 10 15:39:28 crc kubenswrapper[4755]: I1210 15:39:28.943055 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d7211d9-0130-439e-aafd-43c30c1405c2-catalog-content\") pod \"8d7211d9-0130-439e-aafd-43c30c1405c2\" (UID: \"8d7211d9-0130-439e-aafd-43c30c1405c2\") " Dec 10 15:39:28 crc kubenswrapper[4755]: I1210 15:39:28.943975 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d7211d9-0130-439e-aafd-43c30c1405c2-utilities" (OuterVolumeSpecName: "utilities") pod "8d7211d9-0130-439e-aafd-43c30c1405c2" (UID: "8d7211d9-0130-439e-aafd-43c30c1405c2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:39:28 crc kubenswrapper[4755]: I1210 15:39:28.952171 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d7211d9-0130-439e-aafd-43c30c1405c2-kube-api-access-gtnfl" (OuterVolumeSpecName: "kube-api-access-gtnfl") pod "8d7211d9-0130-439e-aafd-43c30c1405c2" (UID: "8d7211d9-0130-439e-aafd-43c30c1405c2"). InnerVolumeSpecName "kube-api-access-gtnfl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:39:28 crc kubenswrapper[4755]: I1210 15:39:28.966421 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d7211d9-0130-439e-aafd-43c30c1405c2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8d7211d9-0130-439e-aafd-43c30c1405c2" (UID: "8d7211d9-0130-439e-aafd-43c30c1405c2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:39:29 crc kubenswrapper[4755]: I1210 15:39:29.044757 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gtnfl\" (UniqueName: \"kubernetes.io/projected/8d7211d9-0130-439e-aafd-43c30c1405c2-kube-api-access-gtnfl\") on node \"crc\" DevicePath \"\"" Dec 10 15:39:29 crc kubenswrapper[4755]: I1210 15:39:29.044800 4755 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d7211d9-0130-439e-aafd-43c30c1405c2-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 15:39:29 crc kubenswrapper[4755]: I1210 15:39:29.044811 4755 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d7211d9-0130-439e-aafd-43c30c1405c2-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 15:39:29 crc kubenswrapper[4755]: I1210 15:39:29.688060 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pml9v" event={"ID":"8d7211d9-0130-439e-aafd-43c30c1405c2","Type":"ContainerDied","Data":"2d5d5f92414479c6d7094efa480bc5daf7e95b3f3654f5603bb7a7104c9bc0a4"} Dec 10 15:39:29 crc kubenswrapper[4755]: I1210 15:39:29.688108 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pml9v" Dec 10 15:39:29 crc kubenswrapper[4755]: I1210 15:39:29.688117 4755 scope.go:117] "RemoveContainer" containerID="b572e8e49da826c54c4649845b932644ae8264fd9b0c19da43574e4ed808a0dc" Dec 10 15:39:29 crc kubenswrapper[4755]: I1210 15:39:29.708863 4755 scope.go:117] "RemoveContainer" containerID="fcad98ff657bdb1cf52c0546d02f811f20f607a39b536482751c0a0ed89defc5" Dec 10 15:39:29 crc kubenswrapper[4755]: I1210 15:39:29.719917 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pml9v"] Dec 10 15:39:29 crc kubenswrapper[4755]: I1210 15:39:29.734596 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-pml9v"] Dec 10 15:39:29 crc kubenswrapper[4755]: I1210 15:39:29.735354 4755 scope.go:117] "RemoveContainer" containerID="9d80a05030a45d0acff51a6137667f3fb9ba1f02dfe86a25ea25c37acdea9b74" Dec 10 15:39:29 crc kubenswrapper[4755]: I1210 15:39:29.765431 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d7211d9-0130-439e-aafd-43c30c1405c2" path="/var/lib/kubelet/pods/8d7211d9-0130-439e-aafd-43c30c1405c2/volumes" Dec 10 15:39:40 crc kubenswrapper[4755]: I1210 15:39:40.359820 4755 patch_prober.go:28] interesting pod/machine-config-daemon-ggt8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:39:40 crc kubenswrapper[4755]: I1210 15:39:40.360463 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:39:40 crc kubenswrapper[4755]: I1210 15:39:40.360532 4755 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" Dec 10 15:39:40 crc kubenswrapper[4755]: I1210 15:39:40.361166 4755 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4c970abaaa70f01d1899eae5e78bc6f2bf1fb1ebdd24f00f3de5524057d3b3cd"} pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 10 15:39:40 crc kubenswrapper[4755]: I1210 15:39:40.361232 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" containerName="machine-config-daemon" containerID="cri-o://4c970abaaa70f01d1899eae5e78bc6f2bf1fb1ebdd24f00f3de5524057d3b3cd" gracePeriod=600 Dec 10 15:39:41 crc kubenswrapper[4755]: I1210 15:39:41.778037 4755 generic.go:334] "Generic (PLEG): container finished" podID="b132a8b9-1c99-414d-8773-229bf36b305d" containerID="4c970abaaa70f01d1899eae5e78bc6f2bf1fb1ebdd24f00f3de5524057d3b3cd" exitCode=0 Dec 10 15:39:41 crc kubenswrapper[4755]: I1210 15:39:41.778457 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" event={"ID":"b132a8b9-1c99-414d-8773-229bf36b305d","Type":"ContainerDied","Data":"4c970abaaa70f01d1899eae5e78bc6f2bf1fb1ebdd24f00f3de5524057d3b3cd"} Dec 10 15:39:41 crc kubenswrapper[4755]: I1210 15:39:41.778498 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" event={"ID":"b132a8b9-1c99-414d-8773-229bf36b305d","Type":"ContainerStarted","Data":"cf5ba83fd616480d24cf584cf15a0ce95565ee5fa4662cb49e23ad86486c0d52"} Dec 10 15:39:41 crc kubenswrapper[4755]: I1210 15:39:41.778513 4755 scope.go:117] "RemoveContainer" containerID="a3bec46d814cc9fbc9935f1242adb126dce3912edb10a563b43df294190d9363" Dec 10 15:39:42 crc kubenswrapper[4755]: I1210 15:39:42.984117 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6c677c69b-ljn8k"] Dec 10 15:39:42 crc kubenswrapper[4755]: E1210 15:39:42.984804 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d7211d9-0130-439e-aafd-43c30c1405c2" containerName="registry-server" Dec 10 15:39:42 crc kubenswrapper[4755]: I1210 15:39:42.984819 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d7211d9-0130-439e-aafd-43c30c1405c2" containerName="registry-server" Dec 10 15:39:42 crc kubenswrapper[4755]: E1210 15:39:42.984844 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d7211d9-0130-439e-aafd-43c30c1405c2" containerName="extract-content" Dec 10 15:39:42 crc kubenswrapper[4755]: I1210 15:39:42.984851 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d7211d9-0130-439e-aafd-43c30c1405c2" containerName="extract-content" Dec 10 15:39:42 crc kubenswrapper[4755]: E1210 15:39:42.984868 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d7211d9-0130-439e-aafd-43c30c1405c2" containerName="extract-utilities" Dec 10 15:39:42 crc kubenswrapper[4755]: I1210 15:39:42.984874 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d7211d9-0130-439e-aafd-43c30c1405c2" containerName="extract-utilities" Dec 10 15:39:42 crc kubenswrapper[4755]: I1210 15:39:42.985014 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d7211d9-0130-439e-aafd-43c30c1405c2" containerName="registry-server" Dec 10 15:39:42 crc kubenswrapper[4755]: I1210 15:39:42.985779 4755 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-ljn8k" Dec 10 15:39:42 crc kubenswrapper[4755]: I1210 15:39:42.989067 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-sccx5" Dec 10 15:39:42 crc kubenswrapper[4755]: I1210 15:39:42.990104 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-bk4xd"] Dec 10 15:39:42 crc kubenswrapper[4755]: I1210 15:39:42.991697 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-bk4xd" Dec 10 15:39:42 crc kubenswrapper[4755]: I1210 15:39:42.994927 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-jn2q2" Dec 10 15:39:42 crc kubenswrapper[4755]: I1210 15:39:42.997188 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-697fb699cf-z5frc"] Dec 10 15:39:42 crc kubenswrapper[4755]: I1210 15:39:42.998468 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-z5frc" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.000063 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-97tld" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.006479 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6c677c69b-ljn8k"] Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.020032 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-bk4xd"] Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.031539 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-5697bb5779-rjdmk"] Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.032693 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-rjdmk" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.036027 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfxgp\" (UniqueName: \"kubernetes.io/projected/83bd67ec-3fa0-4f1e-9f87-7005f731f7e4-kube-api-access-vfxgp\") pod \"barbican-operator-controller-manager-7d9dfd778-bk4xd\" (UID: \"83bd67ec-3fa0-4f1e-9f87-7005f731f7e4\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-bk4xd" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.036111 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xzcn\" (UniqueName: \"kubernetes.io/projected/313bb539-c9d7-4bb0-a5e3-3a36c45c0f79-kube-api-access-5xzcn\") pod \"cinder-operator-controller-manager-6c677c69b-ljn8k\" (UID: \"313bb539-c9d7-4bb0-a5e3-3a36c45c0f79\") " pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-ljn8k" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.040332 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-stwdh" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.048884 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5697bb5779-rjdmk"] Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.063982 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-vqgpv"] Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.064958 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-vqgpv" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.067665 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-m9nk6" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.088664 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-697fb699cf-z5frc"] Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.098573 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-vqgpv"] Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.126540 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-bs4zx"] Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.127637 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-bs4zx" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.132920 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-78d48bff9d-wsxsj"] Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.134289 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-wsxsj" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.144882 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfxgp\" (UniqueName: \"kubernetes.io/projected/83bd67ec-3fa0-4f1e-9f87-7005f731f7e4-kube-api-access-vfxgp\") pod \"barbican-operator-controller-manager-7d9dfd778-bk4xd\" (UID: \"83bd67ec-3fa0-4f1e-9f87-7005f731f7e4\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-bk4xd" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.144945 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8d7z\" (UniqueName: \"kubernetes.io/projected/8bcd3e35-31c8-4dbc-96e1-e6f4b486f082-kube-api-access-c8d7z\") pod \"glance-operator-controller-manager-5697bb5779-rjdmk\" (UID: \"8bcd3e35-31c8-4dbc-96e1-e6f4b486f082\") " pod="openstack-operators/glance-operator-controller-manager-5697bb5779-rjdmk" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.144985 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrh2x\" (UniqueName: \"kubernetes.io/projected/ab09fdaf-b326-4221-a24c-9415dabdbcdd-kube-api-access-mrh2x\") pod \"heat-operator-controller-manager-5f64f6f8bb-vqgpv\" (UID: \"ab09fdaf-b326-4221-a24c-9415dabdbcdd\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-vqgpv" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.145010 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlwqg\" (UniqueName: \"kubernetes.io/projected/05b2a283-f9ce-4cbb-a92f-a22a227de36d-kube-api-access-rlwqg\") pod \"designate-operator-controller-manager-697fb699cf-z5frc\" (UID: \"05b2a283-f9ce-4cbb-a92f-a22a227de36d\") " pod="openstack-operators/designate-operator-controller-manager-697fb699cf-z5frc" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.145032 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xzcn\" (UniqueName: \"kubernetes.io/projected/313bb539-c9d7-4bb0-a5e3-3a36c45c0f79-kube-api-access-5xzcn\") pod \"cinder-operator-controller-manager-6c677c69b-ljn8k\" (UID: \"313bb539-c9d7-4bb0-a5e3-3a36c45c0f79\") " pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-ljn8k" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.146755 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-52d9p" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.146940 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-k5gp2" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.147027 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.162582 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-bs4zx"] Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.169879 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-78d48bff9d-wsxsj"] Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.178388 4755 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack-operators/ironic-operator-controller-manager-967d97867-h8w5g"] Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.180323 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfxgp\" (UniqueName: \"kubernetes.io/projected/83bd67ec-3fa0-4f1e-9f87-7005f731f7e4-kube-api-access-vfxgp\") pod \"barbican-operator-controller-manager-7d9dfd778-bk4xd\" (UID: \"83bd67ec-3fa0-4f1e-9f87-7005f731f7e4\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-bk4xd" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.191355 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xzcn\" (UniqueName: \"kubernetes.io/projected/313bb539-c9d7-4bb0-a5e3-3a36c45c0f79-kube-api-access-5xzcn\") pod \"cinder-operator-controller-manager-6c677c69b-ljn8k\" (UID: \"313bb539-c9d7-4bb0-a5e3-3a36c45c0f79\") " pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-ljn8k" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.200317 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-t7zjt"] Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.200434 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-967d97867-h8w5g" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.202798 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-t7zjt" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.209732 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-wtjv7" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.209847 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-z498s" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.211004 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-t7zjt"] Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.215604 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-967d97867-h8w5g"] Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.247257 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-5b5fd79c9c-5kfxq"] Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.250439 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-5kfxq" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.259397 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrn8b\" (UniqueName: \"kubernetes.io/projected/359a4730-4858-4677-9977-a9d6cea57122-kube-api-access-xrn8b\") pod \"ironic-operator-controller-manager-967d97867-h8w5g\" (UID: \"359a4730-4858-4677-9977-a9d6cea57122\") " pod="openstack-operators/ironic-operator-controller-manager-967d97867-h8w5g" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.259496 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lk8rs\" (UniqueName: \"kubernetes.io/projected/a07fdc07-16fa-4834-b370-378b543dde9f-kube-api-access-lk8rs\") pod \"horizon-operator-controller-manager-68c6d99b8f-bs4zx\" (UID: \"a07fdc07-16fa-4834-b370-378b543dde9f\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-bs4zx" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.264368 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-5mqb5" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.270202 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/423be682-6135-4dd2-8366-b7106adbc632-cert\") pod \"infra-operator-controller-manager-78d48bff9d-wsxsj\" (UID: \"423be682-6135-4dd2-8366-b7106adbc632\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-wsxsj" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.270387 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-79c8c4686c-6vjxq"] Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.273493 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8d7z\" (UniqueName: \"kubernetes.io/projected/8bcd3e35-31c8-4dbc-96e1-e6f4b486f082-kube-api-access-c8d7z\") pod \"glance-operator-controller-manager-5697bb5779-rjdmk\" (UID: \"8bcd3e35-31c8-4dbc-96e1-e6f4b486f082\") " pod="openstack-operators/glance-operator-controller-manager-5697bb5779-rjdmk" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.273691 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrh2x\" (UniqueName: \"kubernetes.io/projected/ab09fdaf-b326-4221-a24c-9415dabdbcdd-kube-api-access-mrh2x\") pod \"heat-operator-controller-manager-5f64f6f8bb-vqgpv\" (UID: \"ab09fdaf-b326-4221-a24c-9415dabdbcdd\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-vqgpv" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.273832 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvst2\" (UniqueName: \"kubernetes.io/projected/423be682-6135-4dd2-8366-b7106adbc632-kube-api-access-mvst2\") pod \"infra-operator-controller-manager-78d48bff9d-wsxsj\" (UID: \"423be682-6135-4dd2-8366-b7106adbc632\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-wsxsj" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.274242 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlwqg\" (UniqueName: 
\"kubernetes.io/projected/05b2a283-f9ce-4cbb-a92f-a22a227de36d-kube-api-access-rlwqg\") pod \"designate-operator-controller-manager-697fb699cf-z5frc\" (UID: \"05b2a283-f9ce-4cbb-a92f-a22a227de36d\") " pod="openstack-operators/designate-operator-controller-manager-697fb699cf-z5frc" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.316075 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-6vjxq" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.319138 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-ncdm5" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.322934 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlwqg\" (UniqueName: \"kubernetes.io/projected/05b2a283-f9ce-4cbb-a92f-a22a227de36d-kube-api-access-rlwqg\") pod \"designate-operator-controller-manager-697fb699cf-z5frc\" (UID: \"05b2a283-f9ce-4cbb-a92f-a22a227de36d\") " pod="openstack-operators/designate-operator-controller-manager-697fb699cf-z5frc" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.323145 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrh2x\" (UniqueName: \"kubernetes.io/projected/ab09fdaf-b326-4221-a24c-9415dabdbcdd-kube-api-access-mrh2x\") pod \"heat-operator-controller-manager-5f64f6f8bb-vqgpv\" (UID: \"ab09fdaf-b326-4221-a24c-9415dabdbcdd\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-vqgpv" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.323809 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-ljn8k" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.324574 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8d7z\" (UniqueName: \"kubernetes.io/projected/8bcd3e35-31c8-4dbc-96e1-e6f4b486f082-kube-api-access-c8d7z\") pod \"glance-operator-controller-manager-5697bb5779-rjdmk\" (UID: \"8bcd3e35-31c8-4dbc-96e1-e6f4b486f082\") " pod="openstack-operators/glance-operator-controller-manager-5697bb5779-rjdmk" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.324672 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-bk4xd" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.339487 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-z5frc" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.352642 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-5b5fd79c9c-5kfxq"] Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.366046 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-rjdmk" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.380533 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/423be682-6135-4dd2-8366-b7106adbc632-cert\") pod \"infra-operator-controller-manager-78d48bff9d-wsxsj\" (UID: \"423be682-6135-4dd2-8366-b7106adbc632\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-wsxsj" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.380616 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvst2\" (UniqueName: \"kubernetes.io/projected/423be682-6135-4dd2-8366-b7106adbc632-kube-api-access-mvst2\") pod \"infra-operator-controller-manager-78d48bff9d-wsxsj\" (UID: \"423be682-6135-4dd2-8366-b7106adbc632\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-wsxsj" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.380662 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbhzn\" (UniqueName: \"kubernetes.io/projected/8bc636b5-ac4d-4b4e-8b50-102a72e6ee2a-kube-api-access-tbhzn\") pod \"keystone-operator-controller-manager-7765d96ddf-t7zjt\" (UID: \"8bc636b5-ac4d-4b4e-8b50-102a72e6ee2a\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-t7zjt" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.380692 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqn2b\" (UniqueName: \"kubernetes.io/projected/2a918143-c2cf-4c73-b547-c8d0d9c6e2a6-kube-api-access-sqn2b\") pod \"manila-operator-controller-manager-5b5fd79c9c-5kfxq\" (UID: \"2a918143-c2cf-4c73-b547-c8d0d9c6e2a6\") " pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-5kfxq" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.380719 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qf7lj\" (UniqueName: \"kubernetes.io/projected/e4692dc7-ecb8-45b5-be03-9990c0a32b2a-kube-api-access-qf7lj\") pod \"mariadb-operator-controller-manager-79c8c4686c-6vjxq\" (UID: \"e4692dc7-ecb8-45b5-be03-9990c0a32b2a\") " pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-6vjxq" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.380751 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrn8b\" (UniqueName: \"kubernetes.io/projected/359a4730-4858-4677-9977-a9d6cea57122-kube-api-access-xrn8b\") pod \"ironic-operator-controller-manager-967d97867-h8w5g\" (UID: \"359a4730-4858-4677-9977-a9d6cea57122\") " pod="openstack-operators/ironic-operator-controller-manager-967d97867-h8w5g" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.380784 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lk8rs\" (UniqueName: \"kubernetes.io/projected/a07fdc07-16fa-4834-b370-378b543dde9f-kube-api-access-lk8rs\") pod \"horizon-operator-controller-manager-68c6d99b8f-bs4zx\" (UID: \"a07fdc07-16fa-4834-b370-378b543dde9f\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-bs4zx" Dec 10 15:39:43 crc kubenswrapper[4755]: E1210 15:39:43.381062 4755 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 
10 15:39:43 crc kubenswrapper[4755]: E1210 15:39:43.381102 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/423be682-6135-4dd2-8366-b7106adbc632-cert podName:423be682-6135-4dd2-8366-b7106adbc632 nodeName:}" failed. No retries permitted until 2025-12-10 15:39:43.881086535 +0000 UTC m=+980.481970167 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/423be682-6135-4dd2-8366-b7106adbc632-cert") pod "infra-operator-controller-manager-78d48bff9d-wsxsj" (UID: "423be682-6135-4dd2-8366-b7106adbc632") : secret "infra-operator-webhook-server-cert" not found Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.381446 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-79c8c4686c-6vjxq"] Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.390521 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-xtr7m"] Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.391741 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-xtr7m" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.394600 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-vqgpv" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.394872 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-rhq7f" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.407412 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvst2\" (UniqueName: \"kubernetes.io/projected/423be682-6135-4dd2-8366-b7106adbc632-kube-api-access-mvst2\") pod \"infra-operator-controller-manager-78d48bff9d-wsxsj\" (UID: \"423be682-6135-4dd2-8366-b7106adbc632\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-wsxsj" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.410502 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lk8rs\" (UniqueName: \"kubernetes.io/projected/a07fdc07-16fa-4834-b370-378b543dde9f-kube-api-access-lk8rs\") pod \"horizon-operator-controller-manager-68c6d99b8f-bs4zx\" (UID: \"a07fdc07-16fa-4834-b370-378b543dde9f\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-bs4zx" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.410546 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-bgxgp"] Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.412107 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-bgxgp" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.412149 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrn8b\" (UniqueName: \"kubernetes.io/projected/359a4730-4858-4677-9977-a9d6cea57122-kube-api-access-xrn8b\") pod \"ironic-operator-controller-manager-967d97867-h8w5g\" (UID: \"359a4730-4858-4677-9977-a9d6cea57122\") " pod="openstack-operators/ironic-operator-controller-manager-967d97867-h8w5g" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.414768 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-c9snv" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.417791 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-xtr7m"] Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.431417 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-pxstj"] Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.436581 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-pxstj" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.437794 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-bgxgp"] Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.440096 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-hxxnn" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.440621 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-967d97867-h8w5g" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.459094 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-pxstj"] Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.471410 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fh778d"] Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.472447 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fh778d" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.478321 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-lbj4z"] Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.479504 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-bs4zx" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.481837 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhwxs\" (UniqueName: \"kubernetes.io/projected/d3b1545f-1f46-4869-bc92-cdc7e5b1fc4c-kube-api-access-jhwxs\") pod \"nova-operator-controller-manager-697bc559fc-xtr7m\" (UID: \"d3b1545f-1f46-4869-bc92-cdc7e5b1fc4c\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-xtr7m" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.481942 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fq7pt\" (UniqueName: \"kubernetes.io/projected/46715591-f787-42bc-9871-a51b08963893-kube-api-access-fq7pt\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-bgxgp\" (UID: \"46715591-f787-42bc-9871-a51b08963893\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-bgxgp" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.481972 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbhzn\" (UniqueName: \"kubernetes.io/projected/8bc636b5-ac4d-4b4e-8b50-102a72e6ee2a-kube-api-access-tbhzn\") pod \"keystone-operator-controller-manager-7765d96ddf-t7zjt\" (UID: \"8bc636b5-ac4d-4b4e-8b50-102a72e6ee2a\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-t7zjt" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.481999 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqn2b\" (UniqueName: \"kubernetes.io/projected/2a918143-c2cf-4c73-b547-c8d0d9c6e2a6-kube-api-access-sqn2b\") pod \"manila-operator-controller-manager-5b5fd79c9c-5kfxq\" (UID: \"2a918143-c2cf-4c73-b547-c8d0d9c6e2a6\") " pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-5kfxq" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.482027 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qf7lj\" (UniqueName: \"kubernetes.io/projected/e4692dc7-ecb8-45b5-be03-9990c0a32b2a-kube-api-access-qf7lj\") pod \"mariadb-operator-controller-manager-79c8c4686c-6vjxq\" (UID: \"e4692dc7-ecb8-45b5-be03-9990c0a32b2a\") " pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-6vjxq" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.483356 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.484892 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-sx4d9" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.490837 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-lbj4z" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.493716 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-zjgtp" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.502767 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-xpv7s"] Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.503893 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-xpv7s" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.513787 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-p7spr" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.513947 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-xpv7s"] Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.546298 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-lbj4z"] Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.586641 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggq8q\" (UniqueName: \"kubernetes.io/projected/10728d77-c715-4cb1-ab30-5747594a6320-kube-api-access-ggq8q\") pod \"octavia-operator-controller-manager-998648c74-pxstj\" (UID: \"10728d77-c715-4cb1-ab30-5747594a6320\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-pxstj" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.586728 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hl44w\" (UniqueName: \"kubernetes.io/projected/67e9d86f-4e93-4e78-a9d5-d8023721414d-kube-api-access-hl44w\") pod \"ovn-operator-controller-manager-b6456fdb6-xpv7s\" (UID: \"67e9d86f-4e93-4e78-a9d5-d8023721414d\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-xpv7s" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.586760 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45rgv\" (UniqueName: \"kubernetes.io/projected/0bcf5a92-0324-4799-be55-0e49bd060ee7-kube-api-access-45rgv\") pod \"openstack-baremetal-operator-controller-manager-84b575879fh778d\" (UID: \"0bcf5a92-0324-4799-be55-0e49bd060ee7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fh778d" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.586787 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fq7pt\" (UniqueName: \"kubernetes.io/projected/46715591-f787-42bc-9871-a51b08963893-kube-api-access-fq7pt\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-bgxgp\" (UID: \"46715591-f787-42bc-9871-a51b08963893\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-bgxgp" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.586806 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkv2b\" (UniqueName: \"kubernetes.io/projected/ebb31199-21f8-4493-8725-1c5e1aa70d66-kube-api-access-lkv2b\") pod 
\"placement-operator-controller-manager-78f8948974-lbj4z\" (UID: \"ebb31199-21f8-4493-8725-1c5e1aa70d66\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-lbj4z" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.586875 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhwxs\" (UniqueName: \"kubernetes.io/projected/d3b1545f-1f46-4869-bc92-cdc7e5b1fc4c-kube-api-access-jhwxs\") pod \"nova-operator-controller-manager-697bc559fc-xtr7m\" (UID: \"d3b1545f-1f46-4869-bc92-cdc7e5b1fc4c\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-xtr7m" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.586915 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0bcf5a92-0324-4799-be55-0e49bd060ee7-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879fh778d\" (UID: \"0bcf5a92-0324-4799-be55-0e49bd060ee7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fh778d" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.592869 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqn2b\" (UniqueName: \"kubernetes.io/projected/2a918143-c2cf-4c73-b547-c8d0d9c6e2a6-kube-api-access-sqn2b\") pod \"manila-operator-controller-manager-5b5fd79c9c-5kfxq\" (UID: \"2a918143-c2cf-4c73-b547-c8d0d9c6e2a6\") " pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-5kfxq" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.595551 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-9d58d64bc-jzkms"] Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.597355 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qf7lj\" (UniqueName: \"kubernetes.io/projected/e4692dc7-ecb8-45b5-be03-9990c0a32b2a-kube-api-access-qf7lj\") pod \"mariadb-operator-controller-manager-79c8c4686c-6vjxq\" (UID: \"e4692dc7-ecb8-45b5-be03-9990c0a32b2a\") " pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-6vjxq" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.597901 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-jzkms" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.599350 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbhzn\" (UniqueName: \"kubernetes.io/projected/8bc636b5-ac4d-4b4e-8b50-102a72e6ee2a-kube-api-access-tbhzn\") pod \"keystone-operator-controller-manager-7765d96ddf-t7zjt\" (UID: \"8bc636b5-ac4d-4b4e-8b50-102a72e6ee2a\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-t7zjt" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.607330 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-9d58d64bc-jzkms"] Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.631366 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-855k9" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.639561 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fh778d"] Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.647266 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fq7pt\" (UniqueName: \"kubernetes.io/projected/46715591-f787-42bc-9871-a51b08963893-kube-api-access-fq7pt\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-bgxgp\" (UID: \"46715591-f787-42bc-9871-a51b08963893\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-bgxgp" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.655018 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-bgxgp" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.657100 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhwxs\" (UniqueName: \"kubernetes.io/projected/d3b1545f-1f46-4869-bc92-cdc7e5b1fc4c-kube-api-access-jhwxs\") pod \"nova-operator-controller-manager-697bc559fc-xtr7m\" (UID: \"d3b1545f-1f46-4869-bc92-cdc7e5b1fc4c\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-xtr7m" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.682117 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5f8d644d5d-87jfq"] Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.683368 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5f8d644d5d-87jfq" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.699020 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0bcf5a92-0324-4799-be55-0e49bd060ee7-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879fh778d\" (UID: \"0bcf5a92-0324-4799-be55-0e49bd060ee7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fh778d" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.699182 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggq8q\" (UniqueName: \"kubernetes.io/projected/10728d77-c715-4cb1-ab30-5747594a6320-kube-api-access-ggq8q\") pod \"octavia-operator-controller-manager-998648c74-pxstj\" (UID: \"10728d77-c715-4cb1-ab30-5747594a6320\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-pxstj" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.699374 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hl44w\" (UniqueName: \"kubernetes.io/projected/67e9d86f-4e93-4e78-a9d5-d8023721414d-kube-api-access-hl44w\") pod \"ovn-operator-controller-manager-b6456fdb6-xpv7s\" (UID: \"67e9d86f-4e93-4e78-a9d5-d8023721414d\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-xpv7s" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.699512 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45rgv\" (UniqueName: \"kubernetes.io/projected/0bcf5a92-0324-4799-be55-0e49bd060ee7-kube-api-access-45rgv\") pod \"openstack-baremetal-operator-controller-manager-84b575879fh778d\" (UID: \"0bcf5a92-0324-4799-be55-0e49bd060ee7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fh778d" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.699620 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkv2b\" (UniqueName: \"kubernetes.io/projected/ebb31199-21f8-4493-8725-1c5e1aa70d66-kube-api-access-lkv2b\") pod \"placement-operator-controller-manager-78f8948974-lbj4z\" (UID: \"ebb31199-21f8-4493-8725-1c5e1aa70d66\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-lbj4z" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.699735 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jx9kp\" (UniqueName: \"kubernetes.io/projected/f0af4059-171e-409f-8043-8f112664e01c-kube-api-access-jx9kp\") pod \"swift-operator-controller-manager-9d58d64bc-jzkms\" (UID: \"f0af4059-171e-409f-8043-8f112664e01c\") " pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-jzkms" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.700658 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-xswgp" Dec 10 15:39:43 crc kubenswrapper[4755]: E1210 15:39:43.701255 4755 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 10 15:39:43 crc kubenswrapper[4755]: E1210 15:39:43.701306 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0bcf5a92-0324-4799-be55-0e49bd060ee7-cert 
podName:0bcf5a92-0324-4799-be55-0e49bd060ee7 nodeName:}" failed. No retries permitted until 2025-12-10 15:39:44.201289539 +0000 UTC m=+980.802173171 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0bcf5a92-0324-4799-be55-0e49bd060ee7-cert") pod "openstack-baremetal-operator-controller-manager-84b575879fh778d" (UID: "0bcf5a92-0324-4799-be55-0e49bd060ee7") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.711896 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5f8d644d5d-87jfq"] Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.771929 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-t7zjt" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.773448 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hl44w\" (UniqueName: \"kubernetes.io/projected/67e9d86f-4e93-4e78-a9d5-d8023721414d-kube-api-access-hl44w\") pod \"ovn-operator-controller-manager-b6456fdb6-xpv7s\" (UID: \"67e9d86f-4e93-4e78-a9d5-d8023721414d\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-xpv7s" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.803816 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-xpv7s" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.805912 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9d7d\" (UniqueName: \"kubernetes.io/projected/e36da2bb-2cc5-4a66-97f3-ace6966152fb-kube-api-access-r9d7d\") pod \"telemetry-operator-controller-manager-5f8d644d5d-87jfq\" (UID: \"e36da2bb-2cc5-4a66-97f3-ace6966152fb\") " pod="openstack-operators/telemetry-operator-controller-manager-5f8d644d5d-87jfq" Dec 10 15:39:43 crc kubenswrapper[4755]: I1210 15:39:43.806057 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jx9kp\" (UniqueName: \"kubernetes.io/projected/f0af4059-171e-409f-8043-8f112664e01c-kube-api-access-jx9kp\") pod \"swift-operator-controller-manager-9d58d64bc-jzkms\" (UID: \"f0af4059-171e-409f-8043-8f112664e01c\") " pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-jzkms" Dec 10 15:39:44 crc kubenswrapper[4755]: I1210 15:39:44.079036 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggq8q\" (UniqueName: \"kubernetes.io/projected/10728d77-c715-4cb1-ab30-5747594a6320-kube-api-access-ggq8q\") pod \"octavia-operator-controller-manager-998648c74-pxstj\" (UID: \"10728d77-c715-4cb1-ab30-5747594a6320\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-pxstj" Dec 10 15:39:44 crc kubenswrapper[4755]: I1210 15:39:44.081644 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45rgv\" (UniqueName: \"kubernetes.io/projected/0bcf5a92-0324-4799-be55-0e49bd060ee7-kube-api-access-45rgv\") pod \"openstack-baremetal-operator-controller-manager-84b575879fh778d\" (UID: \"0bcf5a92-0324-4799-be55-0e49bd060ee7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fh778d" Dec 10 15:39:44 crc kubenswrapper[4755]: I1210 15:39:44.061353 4755 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-jx9kp\" (UniqueName: \"kubernetes.io/projected/f0af4059-171e-409f-8043-8f112664e01c-kube-api-access-jx9kp\") pod \"swift-operator-controller-manager-9d58d64bc-jzkms\" (UID: \"f0af4059-171e-409f-8043-8f112664e01c\") " pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-jzkms" Dec 10 15:39:44 crc kubenswrapper[4755]: I1210 15:39:44.087232 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkv2b\" (UniqueName: \"kubernetes.io/projected/ebb31199-21f8-4493-8725-1c5e1aa70d66-kube-api-access-lkv2b\") pod \"placement-operator-controller-manager-78f8948974-lbj4z\" (UID: \"ebb31199-21f8-4493-8725-1c5e1aa70d66\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-lbj4z" Dec 10 15:39:44 crc kubenswrapper[4755]: I1210 15:39:44.091616 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-xtr7m" Dec 10 15:39:44 crc kubenswrapper[4755]: I1210 15:39:44.105233 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-5kfxq" Dec 10 15:39:44 crc kubenswrapper[4755]: I1210 15:39:44.112313 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-6vjxq" Dec 10 15:39:44 crc kubenswrapper[4755]: I1210 15:39:44.133614 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/423be682-6135-4dd2-8366-b7106adbc632-cert\") pod \"infra-operator-controller-manager-78d48bff9d-wsxsj\" (UID: \"423be682-6135-4dd2-8366-b7106adbc632\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-wsxsj" Dec 10 15:39:44 crc kubenswrapper[4755]: I1210 15:39:44.133700 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9d7d\" (UniqueName: \"kubernetes.io/projected/e36da2bb-2cc5-4a66-97f3-ace6966152fb-kube-api-access-r9d7d\") pod \"telemetry-operator-controller-manager-5f8d644d5d-87jfq\" (UID: \"e36da2bb-2cc5-4a66-97f3-ace6966152fb\") " pod="openstack-operators/telemetry-operator-controller-manager-5f8d644d5d-87jfq" Dec 10 15:39:44 crc kubenswrapper[4755]: E1210 15:39:44.134697 4755 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 10 15:39:44 crc kubenswrapper[4755]: E1210 15:39:44.134756 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/423be682-6135-4dd2-8366-b7106adbc632-cert podName:423be682-6135-4dd2-8366-b7106adbc632 nodeName:}" failed. No retries permitted until 2025-12-10 15:39:45.134739034 +0000 UTC m=+981.735622656 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/423be682-6135-4dd2-8366-b7106adbc632-cert") pod "infra-operator-controller-manager-78d48bff9d-wsxsj" (UID: "423be682-6135-4dd2-8366-b7106adbc632") : secret "infra-operator-webhook-server-cert" not found Dec 10 15:39:44 crc kubenswrapper[4755]: I1210 15:39:44.185223 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9d7d\" (UniqueName: \"kubernetes.io/projected/e36da2bb-2cc5-4a66-97f3-ace6966152fb-kube-api-access-r9d7d\") pod \"telemetry-operator-controller-manager-5f8d644d5d-87jfq\" (UID: \"e36da2bb-2cc5-4a66-97f3-ace6966152fb\") " pod="openstack-operators/telemetry-operator-controller-manager-5f8d644d5d-87jfq" Dec 10 15:39:44 crc kubenswrapper[4755]: I1210 15:39:44.191163 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-wbch9"] Dec 10 15:39:44 crc kubenswrapper[4755]: I1210 15:39:44.194187 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-wbch9" Dec 10 15:39:44 crc kubenswrapper[4755]: I1210 15:39:44.197635 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-xhzcm" Dec 10 15:39:44 crc kubenswrapper[4755]: I1210 15:39:44.226648 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-wbch9"] Dec 10 15:39:44 crc kubenswrapper[4755]: I1210 15:39:44.234616 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-626g8\" (UniqueName: \"kubernetes.io/projected/91969126-0986-41a4-8d56-19b071710ca8-kube-api-access-626g8\") pod \"test-operator-controller-manager-5854674fcc-wbch9\" (UID: \"91969126-0986-41a4-8d56-19b071710ca8\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-wbch9" Dec 10 15:39:44 crc kubenswrapper[4755]: I1210 15:39:44.234707 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0bcf5a92-0324-4799-be55-0e49bd060ee7-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879fh778d\" (UID: \"0bcf5a92-0324-4799-be55-0e49bd060ee7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fh778d" Dec 10 15:39:44 crc kubenswrapper[4755]: E1210 15:39:44.234845 4755 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 10 15:39:44 crc kubenswrapper[4755]: E1210 15:39:44.234897 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0bcf5a92-0324-4799-be55-0e49bd060ee7-cert podName:0bcf5a92-0324-4799-be55-0e49bd060ee7 nodeName:}" failed. No retries permitted until 2025-12-10 15:39:45.234881584 +0000 UTC m=+981.835765216 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0bcf5a92-0324-4799-be55-0e49bd060ee7-cert") pod "openstack-baremetal-operator-controller-manager-84b575879fh778d" (UID: "0bcf5a92-0324-4799-be55-0e49bd060ee7") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 10 15:39:44 crc kubenswrapper[4755]: I1210 15:39:44.265630 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-75944c9b7-qrhhh"] Dec 10 15:39:44 crc kubenswrapper[4755]: I1210 15:39:44.267435 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-qrhhh" Dec 10 15:39:44 crc kubenswrapper[4755]: I1210 15:39:44.270138 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-2scrn" Dec 10 15:39:44 crc kubenswrapper[4755]: I1210 15:39:44.279750 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-pxstj" Dec 10 15:39:44 crc kubenswrapper[4755]: I1210 15:39:44.295571 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-75944c9b7-qrhhh"] Dec 10 15:39:44 crc kubenswrapper[4755]: I1210 15:39:44.335765 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-626g8\" (UniqueName: \"kubernetes.io/projected/91969126-0986-41a4-8d56-19b071710ca8-kube-api-access-626g8\") pod \"test-operator-controller-manager-5854674fcc-wbch9\" (UID: \"91969126-0986-41a4-8d56-19b071710ca8\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-wbch9" Dec 10 15:39:44 crc kubenswrapper[4755]: I1210 15:39:44.372120 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-lbj4z" Dec 10 15:39:44 crc kubenswrapper[4755]: I1210 15:39:44.374622 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-626g8\" (UniqueName: \"kubernetes.io/projected/91969126-0986-41a4-8d56-19b071710ca8-kube-api-access-626g8\") pod \"test-operator-controller-manager-5854674fcc-wbch9\" (UID: \"91969126-0986-41a4-8d56-19b071710ca8\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-wbch9" Dec 10 15:39:44 crc kubenswrapper[4755]: I1210 15:39:44.377610 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-656546cb8f-64wpq"] Dec 10 15:39:44 crc kubenswrapper[4755]: I1210 15:39:44.384178 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-656546cb8f-64wpq" Dec 10 15:39:44 crc kubenswrapper[4755]: I1210 15:39:44.384436 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-jzkms" Dec 10 15:39:44 crc kubenswrapper[4755]: I1210 15:39:44.389843 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Dec 10 15:39:44 crc kubenswrapper[4755]: I1210 15:39:44.390117 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-lqpg4" Dec 10 15:39:44 crc kubenswrapper[4755]: I1210 15:39:44.393003 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Dec 10 15:39:44 crc kubenswrapper[4755]: I1210 15:39:44.418297 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5f8d644d5d-87jfq" Dec 10 15:39:44 crc kubenswrapper[4755]: I1210 15:39:44.424590 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-656546cb8f-64wpq"] Dec 10 15:39:44 crc kubenswrapper[4755]: I1210 15:39:44.437690 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcpdt\" (UniqueName: \"kubernetes.io/projected/15009193-27b2-4bf2-a795-f6106327e331-kube-api-access-gcpdt\") pod \"watcher-operator-controller-manager-75944c9b7-qrhhh\" (UID: \"15009193-27b2-4bf2-a795-f6106327e331\") " pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-qrhhh" Dec 10 15:39:44 crc kubenswrapper[4755]: I1210 15:39:44.440063 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zz7fk"] Dec 10 15:39:44 crc kubenswrapper[4755]: I1210 15:39:44.444346 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zz7fk" Dec 10 15:39:44 crc kubenswrapper[4755]: I1210 15:39:44.447269 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-kvf7l" Dec 10 15:39:44 crc kubenswrapper[4755]: I1210 15:39:44.468065 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zz7fk"] Dec 10 15:39:44 crc kubenswrapper[4755]: I1210 15:39:44.476073 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-wbch9" Dec 10 15:39:44 crc kubenswrapper[4755]: I1210 15:39:44.512973 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-bk4xd"] Dec 10 15:39:44 crc kubenswrapper[4755]: I1210 15:39:44.538732 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcpdt\" (UniqueName: \"kubernetes.io/projected/15009193-27b2-4bf2-a795-f6106327e331-kube-api-access-gcpdt\") pod \"watcher-operator-controller-manager-75944c9b7-qrhhh\" (UID: \"15009193-27b2-4bf2-a795-f6106327e331\") " pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-qrhhh" Dec 10 15:39:44 crc kubenswrapper[4755]: I1210 15:39:44.538837 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7dhz\" (UniqueName: \"kubernetes.io/projected/3fe0f8bf-8203-4cbb-b474-d00be4716ff5-kube-api-access-g7dhz\") pod \"openstack-operator-controller-manager-656546cb8f-64wpq\" (UID: \"3fe0f8bf-8203-4cbb-b474-d00be4716ff5\") " pod="openstack-operators/openstack-operator-controller-manager-656546cb8f-64wpq" Dec 10 15:39:44 crc kubenswrapper[4755]: I1210 15:39:44.538866 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3fe0f8bf-8203-4cbb-b474-d00be4716ff5-webhook-certs\") pod \"openstack-operator-controller-manager-656546cb8f-64wpq\" (UID: \"3fe0f8bf-8203-4cbb-b474-d00be4716ff5\") " pod="openstack-operators/openstack-operator-controller-manager-656546cb8f-64wpq" Dec 10 15:39:44 crc kubenswrapper[4755]: I1210 15:39:44.539056 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3fe0f8bf-8203-4cbb-b474-d00be4716ff5-metrics-certs\") pod \"openstack-operator-controller-manager-656546cb8f-64wpq\" (UID: \"3fe0f8bf-8203-4cbb-b474-d00be4716ff5\") " pod="openstack-operators/openstack-operator-controller-manager-656546cb8f-64wpq" Dec 10 15:39:44 crc kubenswrapper[4755]: I1210 15:39:44.602826 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcpdt\" (UniqueName: \"kubernetes.io/projected/15009193-27b2-4bf2-a795-f6106327e331-kube-api-access-gcpdt\") pod \"watcher-operator-controller-manager-75944c9b7-qrhhh\" (UID: \"15009193-27b2-4bf2-a795-f6106327e331\") " pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-qrhhh" Dec 10 15:39:44 crc kubenswrapper[4755]: I1210 15:39:44.640161 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jbdt\" (UniqueName: \"kubernetes.io/projected/fbaec88b-8593-468f-aefc-777f8140504d-kube-api-access-4jbdt\") pod \"rabbitmq-cluster-operator-manager-668c99d594-zz7fk\" (UID: \"fbaec88b-8593-468f-aefc-777f8140504d\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zz7fk" Dec 10 15:39:44 crc kubenswrapper[4755]: I1210 15:39:44.640216 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3fe0f8bf-8203-4cbb-b474-d00be4716ff5-webhook-certs\") pod \"openstack-operator-controller-manager-656546cb8f-64wpq\" (UID: \"3fe0f8bf-8203-4cbb-b474-d00be4716ff5\") " pod="openstack-operators/openstack-operator-controller-manager-656546cb8f-64wpq" Dec 10 
15:39:44 crc kubenswrapper[4755]: I1210 15:39:44.640281 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7dhz\" (UniqueName: \"kubernetes.io/projected/3fe0f8bf-8203-4cbb-b474-d00be4716ff5-kube-api-access-g7dhz\") pod \"openstack-operator-controller-manager-656546cb8f-64wpq\" (UID: \"3fe0f8bf-8203-4cbb-b474-d00be4716ff5\") " pod="openstack-operators/openstack-operator-controller-manager-656546cb8f-64wpq" Dec 10 15:39:44 crc kubenswrapper[4755]: E1210 15:39:44.640391 4755 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 10 15:39:44 crc kubenswrapper[4755]: E1210 15:39:44.640455 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3fe0f8bf-8203-4cbb-b474-d00be4716ff5-webhook-certs podName:3fe0f8bf-8203-4cbb-b474-d00be4716ff5 nodeName:}" failed. No retries permitted until 2025-12-10 15:39:45.140436055 +0000 UTC m=+981.741319687 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/3fe0f8bf-8203-4cbb-b474-d00be4716ff5-webhook-certs") pod "openstack-operator-controller-manager-656546cb8f-64wpq" (UID: "3fe0f8bf-8203-4cbb-b474-d00be4716ff5") : secret "webhook-server-cert" not found Dec 10 15:39:44 crc kubenswrapper[4755]: I1210 15:39:44.640527 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3fe0f8bf-8203-4cbb-b474-d00be4716ff5-metrics-certs\") pod \"openstack-operator-controller-manager-656546cb8f-64wpq\" (UID: \"3fe0f8bf-8203-4cbb-b474-d00be4716ff5\") " pod="openstack-operators/openstack-operator-controller-manager-656546cb8f-64wpq" Dec 10 15:39:44 crc kubenswrapper[4755]: E1210 15:39:44.640794 4755 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 10 15:39:44 crc kubenswrapper[4755]: E1210 15:39:44.640821 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3fe0f8bf-8203-4cbb-b474-d00be4716ff5-metrics-certs podName:3fe0f8bf-8203-4cbb-b474-d00be4716ff5 nodeName:}" failed. No retries permitted until 2025-12-10 15:39:45.140814175 +0000 UTC m=+981.741697807 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3fe0f8bf-8203-4cbb-b474-d00be4716ff5-metrics-certs") pod "openstack-operator-controller-manager-656546cb8f-64wpq" (UID: "3fe0f8bf-8203-4cbb-b474-d00be4716ff5") : secret "metrics-server-cert" not found Dec 10 15:39:44 crc kubenswrapper[4755]: I1210 15:39:44.664826 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7dhz\" (UniqueName: \"kubernetes.io/projected/3fe0f8bf-8203-4cbb-b474-d00be4716ff5-kube-api-access-g7dhz\") pod \"openstack-operator-controller-manager-656546cb8f-64wpq\" (UID: \"3fe0f8bf-8203-4cbb-b474-d00be4716ff5\") " pod="openstack-operators/openstack-operator-controller-manager-656546cb8f-64wpq" Dec 10 15:39:44 crc kubenswrapper[4755]: I1210 15:39:44.741806 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jbdt\" (UniqueName: \"kubernetes.io/projected/fbaec88b-8593-468f-aefc-777f8140504d-kube-api-access-4jbdt\") pod \"rabbitmq-cluster-operator-manager-668c99d594-zz7fk\" (UID: \"fbaec88b-8593-468f-aefc-777f8140504d\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zz7fk" Dec 10 15:39:44 crc kubenswrapper[4755]: I1210 15:39:44.760296 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jbdt\" (UniqueName: \"kubernetes.io/projected/fbaec88b-8593-468f-aefc-777f8140504d-kube-api-access-4jbdt\") pod \"rabbitmq-cluster-operator-manager-668c99d594-zz7fk\" (UID: \"fbaec88b-8593-468f-aefc-777f8140504d\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zz7fk" Dec 10 15:39:44 crc kubenswrapper[4755]: I1210 15:39:44.807854 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-qrhhh" Dec 10 15:39:44 crc kubenswrapper[4755]: I1210 15:39:44.854431 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zz7fk" Dec 10 15:39:45 crc kubenswrapper[4755]: I1210 15:39:45.097415 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5697bb5779-rjdmk"] Dec 10 15:39:45 crc kubenswrapper[4755]: I1210 15:39:45.136443 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-rjdmk" event={"ID":"8bcd3e35-31c8-4dbc-96e1-e6f4b486f082","Type":"ContainerStarted","Data":"891860dca2e799358a8fe8b2c9dcca255f8c10e7d8a0d8743c834ef154e56368"} Dec 10 15:39:45 crc kubenswrapper[4755]: I1210 15:39:45.139567 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-bk4xd" event={"ID":"83bd67ec-3fa0-4f1e-9f87-7005f731f7e4","Type":"ContainerStarted","Data":"4078391975d826ce0fd88009cc3570a593ebbbcb103e1c388c005b72c07f9765"} Dec 10 15:39:45 crc kubenswrapper[4755]: I1210 15:39:45.156346 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3fe0f8bf-8203-4cbb-b474-d00be4716ff5-webhook-certs\") pod \"openstack-operator-controller-manager-656546cb8f-64wpq\" (UID: \"3fe0f8bf-8203-4cbb-b474-d00be4716ff5\") " pod="openstack-operators/openstack-operator-controller-manager-656546cb8f-64wpq" Dec 10 15:39:45 crc kubenswrapper[4755]: I1210 15:39:45.156449 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3fe0f8bf-8203-4cbb-b474-d00be4716ff5-metrics-certs\") pod \"openstack-operator-controller-manager-656546cb8f-64wpq\" (UID: \"3fe0f8bf-8203-4cbb-b474-d00be4716ff5\") " pod="openstack-operators/openstack-operator-controller-manager-656546cb8f-64wpq" Dec 10 15:39:45 crc kubenswrapper[4755]: I1210 15:39:45.156540 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/423be682-6135-4dd2-8366-b7106adbc632-cert\") pod \"infra-operator-controller-manager-78d48bff9d-wsxsj\" (UID: \"423be682-6135-4dd2-8366-b7106adbc632\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-wsxsj" Dec 10 15:39:45 crc kubenswrapper[4755]: E1210 15:39:45.156694 4755 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 10 15:39:45 crc kubenswrapper[4755]: E1210 15:39:45.156754 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/423be682-6135-4dd2-8366-b7106adbc632-cert podName:423be682-6135-4dd2-8366-b7106adbc632 nodeName:}" failed. No retries permitted until 2025-12-10 15:39:47.156735812 +0000 UTC m=+983.757619444 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/423be682-6135-4dd2-8366-b7106adbc632-cert") pod "infra-operator-controller-manager-78d48bff9d-wsxsj" (UID: "423be682-6135-4dd2-8366-b7106adbc632") : secret "infra-operator-webhook-server-cert" not found Dec 10 15:39:45 crc kubenswrapper[4755]: E1210 15:39:45.157110 4755 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 10 15:39:45 crc kubenswrapper[4755]: E1210 15:39:45.157145 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3fe0f8bf-8203-4cbb-b474-d00be4716ff5-webhook-certs podName:3fe0f8bf-8203-4cbb-b474-d00be4716ff5 nodeName:}" failed. No retries permitted until 2025-12-10 15:39:46.157135383 +0000 UTC m=+982.758019015 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/3fe0f8bf-8203-4cbb-b474-d00be4716ff5-webhook-certs") pod "openstack-operator-controller-manager-656546cb8f-64wpq" (UID: "3fe0f8bf-8203-4cbb-b474-d00be4716ff5") : secret "webhook-server-cert" not found Dec 10 15:39:45 crc kubenswrapper[4755]: E1210 15:39:45.157187 4755 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 10 15:39:45 crc kubenswrapper[4755]: E1210 15:39:45.157212 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3fe0f8bf-8203-4cbb-b474-d00be4716ff5-metrics-certs podName:3fe0f8bf-8203-4cbb-b474-d00be4716ff5 nodeName:}" failed. No retries permitted until 2025-12-10 15:39:46.157204615 +0000 UTC m=+982.758088247 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3fe0f8bf-8203-4cbb-b474-d00be4716ff5-metrics-certs") pod "openstack-operator-controller-manager-656546cb8f-64wpq" (UID: "3fe0f8bf-8203-4cbb-b474-d00be4716ff5") : secret "metrics-server-cert" not found Dec 10 15:39:45 crc kubenswrapper[4755]: I1210 15:39:45.211091 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6c677c69b-ljn8k"] Dec 10 15:39:45 crc kubenswrapper[4755]: W1210 15:39:45.221512 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod313bb539_c9d7_4bb0_a5e3_3a36c45c0f79.slice/crio-f0b80ee6ed99cd6f70b465079e0ebcc7676e701315a6253d728993069526d2b4 WatchSource:0}: Error finding container f0b80ee6ed99cd6f70b465079e0ebcc7676e701315a6253d728993069526d2b4: Status 404 returned error can't find the container with id f0b80ee6ed99cd6f70b465079e0ebcc7676e701315a6253d728993069526d2b4 Dec 10 15:39:45 crc kubenswrapper[4755]: I1210 15:39:45.262287 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0bcf5a92-0324-4799-be55-0e49bd060ee7-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879fh778d\" (UID: \"0bcf5a92-0324-4799-be55-0e49bd060ee7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fh778d" Dec 10 15:39:45 crc kubenswrapper[4755]: E1210 15:39:45.262521 4755 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 10 15:39:45 crc kubenswrapper[4755]: E1210 15:39:45.262568 4755 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/0bcf5a92-0324-4799-be55-0e49bd060ee7-cert podName:0bcf5a92-0324-4799-be55-0e49bd060ee7 nodeName:}" failed. No retries permitted until 2025-12-10 15:39:47.262554 +0000 UTC m=+983.863437632 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0bcf5a92-0324-4799-be55-0e49bd060ee7-cert") pod "openstack-baremetal-operator-controller-manager-84b575879fh778d" (UID: "0bcf5a92-0324-4799-be55-0e49bd060ee7") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 10 15:39:45 crc kubenswrapper[4755]: I1210 15:39:45.514322 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-bs4zx"] Dec 10 15:39:45 crc kubenswrapper[4755]: I1210 15:39:45.523430 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-bgxgp"] Dec 10 15:39:45 crc kubenswrapper[4755]: I1210 15:39:45.553379 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-697fb699cf-z5frc"] Dec 10 15:39:45 crc kubenswrapper[4755]: I1210 15:39:45.605094 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-vqgpv"] Dec 10 15:39:45 crc kubenswrapper[4755]: I1210 15:39:45.632571 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-t7zjt"] Dec 10 15:39:45 crc kubenswrapper[4755]: W1210 15:39:45.639687 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode4692dc7_ecb8_45b5_be03_9990c0a32b2a.slice/crio-0c9d4fd283cd84d7455cb4f607bd66d40d0255969fa90285a5f245cdb49d1ba8 WatchSource:0}: Error finding container 0c9d4fd283cd84d7455cb4f607bd66d40d0255969fa90285a5f245cdb49d1ba8: Status 404 returned error can't find the container with id 0c9d4fd283cd84d7455cb4f607bd66d40d0255969fa90285a5f245cdb49d1ba8 Dec 10 15:39:45 crc kubenswrapper[4755]: W1210 15:39:45.640082 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod359a4730_4858_4677_9977_a9d6cea57122.slice/crio-8fccd4c7b0d1769a11efcd27ff9dd8ed7cf7ad5c0f142920b2616cb5493915c6 WatchSource:0}: Error finding container 8fccd4c7b0d1769a11efcd27ff9dd8ed7cf7ad5c0f142920b2616cb5493915c6: Status 404 returned error can't find the container with id 8fccd4c7b0d1769a11efcd27ff9dd8ed7cf7ad5c0f142920b2616cb5493915c6 Dec 10 15:39:45 crc kubenswrapper[4755]: I1210 15:39:45.649074 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-79c8c4686c-6vjxq"] Dec 10 15:39:45 crc kubenswrapper[4755]: I1210 15:39:45.663524 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-967d97867-h8w5g"] Dec 10 15:39:45 crc kubenswrapper[4755]: I1210 15:39:45.703484 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-5b5fd79c9c-5kfxq"] Dec 10 15:39:45 crc kubenswrapper[4755]: I1210 15:39:45.720573 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-xtr7m"] Dec 10 15:39:45 crc kubenswrapper[4755]: W1210 15:39:45.726926 4755 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67e9d86f_4e93_4e78_a9d5_d8023721414d.slice/crio-970ad15853a138695a256703ea258f11c52b8762dfb356d07a6fac305ddd665c WatchSource:0}: Error finding container 970ad15853a138695a256703ea258f11c52b8762dfb356d07a6fac305ddd665c: Status 404 returned error can't find the container with id 970ad15853a138695a256703ea258f11c52b8762dfb356d07a6fac305ddd665c Dec 10 15:39:45 crc kubenswrapper[4755]: I1210 15:39:45.730695 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-xpv7s"] Dec 10 15:39:45 crc kubenswrapper[4755]: I1210 15:39:45.918536 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5f8d644d5d-87jfq"] Dec 10 15:39:45 crc kubenswrapper[4755]: I1210 15:39:45.930568 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-lbj4z"] Dec 10 15:39:45 crc kubenswrapper[4755]: I1210 15:39:45.962021 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-pxstj"] Dec 10 15:39:46 crc kubenswrapper[4755]: I1210 15:39:46.026176 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-9d58d64bc-jzkms"] Dec 10 15:39:46 crc kubenswrapper[4755]: E1210 15:39:46.058209 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ggq8q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-998648c74-pxstj_openstack-operators(10728d77-c715-4cb1-ab30-5747594a6320): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 10 15:39:46 crc kubenswrapper[4755]: E1210 15:39:46.063553 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ggq8q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-998648c74-pxstj_openstack-operators(10728d77-c715-4cb1-ab30-5747594a6320): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 10 15:39:46 crc kubenswrapper[4755]: E1210 15:39:46.070133 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/octavia-operator-controller-manager-998648c74-pxstj" podUID="10728d77-c715-4cb1-ab30-5747594a6320" Dec 10 15:39:46 crc kubenswrapper[4755]: E1210 15:39:46.087810 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:3aa109bb973253ae9dcf339b9b65abbd1176cdb4be672c93e538a5f113816991,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jx9kp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-9d58d64bc-jzkms_openstack-operators(f0af4059-171e-409f-8043-8f112664e01c): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 10 15:39:46 crc kubenswrapper[4755]: I1210 15:39:46.087914 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zz7fk"] Dec 10 15:39:46 crc kubenswrapper[4755]: E1210 15:39:46.090060 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jx9kp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-9d58d64bc-jzkms_openstack-operators(f0af4059-171e-409f-8043-8f112664e01c): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 10 15:39:46 crc kubenswrapper[4755]: E1210 15:39:46.091246 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-jzkms" podUID="f0af4059-171e-409f-8043-8f112664e01c" Dec 10 15:39:46 crc kubenswrapper[4755]: I1210 15:39:46.109541 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-75944c9b7-qrhhh"] Dec 10 15:39:46 crc kubenswrapper[4755]: I1210 15:39:46.125570 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-wbch9"] Dec 10 15:39:46 crc kubenswrapper[4755]: E1210 15:39:46.169294 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:961417d59f527d925ac48ff6a11de747d0493315e496e34dc83d76a1a1fff58a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gcpdt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-75944c9b7-qrhhh_openstack-operators(15009193-27b2-4bf2-a795-f6106327e331): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 10 15:39:46 crc kubenswrapper[4755]: E1210 15:39:46.180963 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gcpdt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-75944c9b7-qrhhh_openstack-operators(15009193-27b2-4bf2-a795-f6106327e331): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 10 15:39:46 crc kubenswrapper[4755]: I1210 15:39:46.185159 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3fe0f8bf-8203-4cbb-b474-d00be4716ff5-metrics-certs\") pod \"openstack-operator-controller-manager-656546cb8f-64wpq\" (UID: \"3fe0f8bf-8203-4cbb-b474-d00be4716ff5\") " pod="openstack-operators/openstack-operator-controller-manager-656546cb8f-64wpq" Dec 10 15:39:46 crc kubenswrapper[4755]: I1210 15:39:46.185506 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3fe0f8bf-8203-4cbb-b474-d00be4716ff5-webhook-certs\") pod \"openstack-operator-controller-manager-656546cb8f-64wpq\" (UID: \"3fe0f8bf-8203-4cbb-b474-d00be4716ff5\") " 
pod="openstack-operators/openstack-operator-controller-manager-656546cb8f-64wpq" Dec 10 15:39:46 crc kubenswrapper[4755]: E1210 15:39:46.185643 4755 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 10 15:39:46 crc kubenswrapper[4755]: E1210 15:39:46.185691 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3fe0f8bf-8203-4cbb-b474-d00be4716ff5-webhook-certs podName:3fe0f8bf-8203-4cbb-b474-d00be4716ff5 nodeName:}" failed. No retries permitted until 2025-12-10 15:39:48.18567692 +0000 UTC m=+984.786560552 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/3fe0f8bf-8203-4cbb-b474-d00be4716ff5-webhook-certs") pod "openstack-operator-controller-manager-656546cb8f-64wpq" (UID: "3fe0f8bf-8203-4cbb-b474-d00be4716ff5") : secret "webhook-server-cert" not found Dec 10 15:39:46 crc kubenswrapper[4755]: E1210 15:39:46.185733 4755 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 10 15:39:46 crc kubenswrapper[4755]: E1210 15:39:46.185753 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3fe0f8bf-8203-4cbb-b474-d00be4716ff5-metrics-certs podName:3fe0f8bf-8203-4cbb-b474-d00be4716ff5 nodeName:}" failed. No retries permitted until 2025-12-10 15:39:48.185747592 +0000 UTC m=+984.786631214 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3fe0f8bf-8203-4cbb-b474-d00be4716ff5-metrics-certs") pod "openstack-operator-controller-manager-656546cb8f-64wpq" (UID: "3fe0f8bf-8203-4cbb-b474-d00be4716ff5") : secret "metrics-server-cert" not found Dec 10 15:39:46 crc kubenswrapper[4755]: I1210 15:39:46.190724 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-967d97867-h8w5g" event={"ID":"359a4730-4858-4677-9977-a9d6cea57122","Type":"ContainerStarted","Data":"8fccd4c7b0d1769a11efcd27ff9dd8ed7cf7ad5c0f142920b2616cb5493915c6"} Dec 10 15:39:46 crc kubenswrapper[4755]: E1210 15:39:46.190811 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-626g8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-wbch9_openstack-operators(91969126-0986-41a4-8d56-19b071710ca8): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 10 15:39:46 crc kubenswrapper[4755]: E1210 15:39:46.190843 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-qrhhh" podUID="15009193-27b2-4bf2-a795-f6106327e331" Dec 10 15:39:46 crc kubenswrapper[4755]: E1210 15:39:46.208391 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-626g8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-wbch9_openstack-operators(91969126-0986-41a4-8d56-19b071710ca8): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 10 15:39:46 crc kubenswrapper[4755]: E1210 15:39:46.209597 4755 pod_workers.go:1301] "Error syncing pod, skipping" 
err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-wbch9" podUID="91969126-0986-41a4-8d56-19b071710ca8" Dec 10 15:39:46 crc kubenswrapper[4755]: I1210 15:39:46.224429 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-qrhhh" event={"ID":"15009193-27b2-4bf2-a795-f6106327e331","Type":"ContainerStarted","Data":"79192a80525b2ee93f4fbe2a0aad4a47646bddd429fad1006fa4f73fe1023cd4"} Dec 10 15:39:46 crc kubenswrapper[4755]: I1210 15:39:46.236250 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-xpv7s" event={"ID":"67e9d86f-4e93-4e78-a9d5-d8023721414d","Type":"ContainerStarted","Data":"970ad15853a138695a256703ea258f11c52b8762dfb356d07a6fac305ddd665c"} Dec 10 15:39:46 crc kubenswrapper[4755]: I1210 15:39:46.240162 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-xtr7m" event={"ID":"d3b1545f-1f46-4869-bc92-cdc7e5b1fc4c","Type":"ContainerStarted","Data":"c9f044bb7f75df6b099448918101923583ba86af2fd40d78b56597280ad40969"} Dec 10 15:39:46 crc kubenswrapper[4755]: I1210 15:39:46.244072 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-lbj4z" event={"ID":"ebb31199-21f8-4493-8725-1c5e1aa70d66","Type":"ContainerStarted","Data":"7808714d404cc5eaa84f66dee4440e69faa6aceff60341b4775255a07c70c746"} Dec 10 15:39:46 crc kubenswrapper[4755]: I1210 15:39:46.245569 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-t7zjt" event={"ID":"8bc636b5-ac4d-4b4e-8b50-102a72e6ee2a","Type":"ContainerStarted","Data":"75b0d6682f445299028fa22fef1894dfcb6aa410b8cd184a4d9f1f9ca675cfea"} Dec 10 15:39:46 crc kubenswrapper[4755]: I1210 15:39:46.251255 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-ljn8k" event={"ID":"313bb539-c9d7-4bb0-a5e3-3a36c45c0f79","Type":"ContainerStarted","Data":"f0b80ee6ed99cd6f70b465079e0ebcc7676e701315a6253d728993069526d2b4"} Dec 10 15:39:46 crc kubenswrapper[4755]: I1210 15:39:46.269260 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-5kfxq" event={"ID":"2a918143-c2cf-4c73-b547-c8d0d9c6e2a6","Type":"ContainerStarted","Data":"8b6553667656c8080cc597af6633325dc5e028144355dc4a0397f04a66a6ecc6"} Dec 10 15:39:46 crc kubenswrapper[4755]: I1210 15:39:46.270491 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-bgxgp" event={"ID":"46715591-f787-42bc-9871-a51b08963893","Type":"ContainerStarted","Data":"c1143d4117f148a9eca0ccb7d1a5bb013536bc52083bac0b09b21681988dcc60"} Dec 10 15:39:46 crc kubenswrapper[4755]: I1210 15:39:46.273337 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zz7fk" event={"ID":"fbaec88b-8593-468f-aefc-777f8140504d","Type":"ContainerStarted","Data":"3a9d4873e01e0cdc4e36d0ca31fe948bce0bc84abf5e910dbf34539f6895348e"} Dec 10 15:39:46 crc kubenswrapper[4755]: I1210 15:39:46.275765 4755 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack-operators/telemetry-operator-controller-manager-5f8d644d5d-87jfq" event={"ID":"e36da2bb-2cc5-4a66-97f3-ace6966152fb","Type":"ContainerStarted","Data":"7c4c8c7f07754a30092ac0cb56a14e8d90a9cefac51c364e3f57b29de663cfd5"} Dec 10 15:39:46 crc kubenswrapper[4755]: I1210 15:39:46.283148 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-z5frc" event={"ID":"05b2a283-f9ce-4cbb-a92f-a22a227de36d","Type":"ContainerStarted","Data":"2acb437e6380711aff2ecbc6d5677e1328c2a07575aed67f7e560ce56952b0b0"} Dec 10 15:39:46 crc kubenswrapper[4755]: I1210 15:39:46.301824 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-6vjxq" event={"ID":"e4692dc7-ecb8-45b5-be03-9990c0a32b2a","Type":"ContainerStarted","Data":"0c9d4fd283cd84d7455cb4f607bd66d40d0255969fa90285a5f245cdb49d1ba8"} Dec 10 15:39:46 crc kubenswrapper[4755]: I1210 15:39:46.305145 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-bs4zx" event={"ID":"a07fdc07-16fa-4834-b370-378b543dde9f","Type":"ContainerStarted","Data":"04e1267b501345ea2da37d2c039b3878c5a633f253c911e9517fa8fea0dcf8c7"} Dec 10 15:39:46 crc kubenswrapper[4755]: I1210 15:39:46.305979 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-pxstj" event={"ID":"10728d77-c715-4cb1-ab30-5747594a6320","Type":"ContainerStarted","Data":"809db73b300ade30b2c16f76e384fbbf5ac3815088028aedd90ec1c1b676bf29"} Dec 10 15:39:46 crc kubenswrapper[4755]: I1210 15:39:46.314425 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-jzkms" event={"ID":"f0af4059-171e-409f-8043-8f112664e01c","Type":"ContainerStarted","Data":"5eab58719ec22994fc5eb2c5ac4443392eaed18db1ee1cac49476ed5a8dca80a"} Dec 10 15:39:46 crc kubenswrapper[4755]: E1210 15:39:46.315380 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/octavia-operator-controller-manager-998648c74-pxstj" podUID="10728d77-c715-4cb1-ab30-5747594a6320" Dec 10 15:39:46 crc kubenswrapper[4755]: E1210 15:39:46.320131 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3aa109bb973253ae9dcf339b9b65abbd1176cdb4be672c93e538a5f113816991\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-jzkms" podUID="f0af4059-171e-409f-8043-8f112664e01c" Dec 10 15:39:46 crc kubenswrapper[4755]: I1210 15:39:46.323913 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-vqgpv" 
event={"ID":"ab09fdaf-b326-4221-a24c-9415dabdbcdd","Type":"ContainerStarted","Data":"a493655dd710653f5f85a52a4a208bc9382e8501145780207326281913a20d59"} Dec 10 15:39:47 crc kubenswrapper[4755]: I1210 15:39:47.210090 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/423be682-6135-4dd2-8366-b7106adbc632-cert\") pod \"infra-operator-controller-manager-78d48bff9d-wsxsj\" (UID: \"423be682-6135-4dd2-8366-b7106adbc632\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-wsxsj" Dec 10 15:39:47 crc kubenswrapper[4755]: E1210 15:39:47.210290 4755 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 10 15:39:47 crc kubenswrapper[4755]: E1210 15:39:47.211694 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/423be682-6135-4dd2-8366-b7106adbc632-cert podName:423be682-6135-4dd2-8366-b7106adbc632 nodeName:}" failed. No retries permitted until 2025-12-10 15:39:51.21165394 +0000 UTC m=+987.812537622 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/423be682-6135-4dd2-8366-b7106adbc632-cert") pod "infra-operator-controller-manager-78d48bff9d-wsxsj" (UID: "423be682-6135-4dd2-8366-b7106adbc632") : secret "infra-operator-webhook-server-cert" not found Dec 10 15:39:47 crc kubenswrapper[4755]: I1210 15:39:47.311927 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0bcf5a92-0324-4799-be55-0e49bd060ee7-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879fh778d\" (UID: \"0bcf5a92-0324-4799-be55-0e49bd060ee7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fh778d" Dec 10 15:39:47 crc kubenswrapper[4755]: E1210 15:39:47.312589 4755 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 10 15:39:47 crc kubenswrapper[4755]: E1210 15:39:47.312667 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0bcf5a92-0324-4799-be55-0e49bd060ee7-cert podName:0bcf5a92-0324-4799-be55-0e49bd060ee7 nodeName:}" failed. No retries permitted until 2025-12-10 15:39:51.312649462 +0000 UTC m=+987.913533094 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0bcf5a92-0324-4799-be55-0e49bd060ee7-cert") pod "openstack-baremetal-operator-controller-manager-84b575879fh778d" (UID: "0bcf5a92-0324-4799-be55-0e49bd060ee7") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 10 15:39:47 crc kubenswrapper[4755]: I1210 15:39:47.338201 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-wbch9" event={"ID":"91969126-0986-41a4-8d56-19b071710ca8","Type":"ContainerStarted","Data":"1dda075904a548bc203fa97bead2a299be5972790158a688340920bc3db018f6"} Dec 10 15:39:47 crc kubenswrapper[4755]: E1210 15:39:47.344530 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-wbch9" podUID="91969126-0986-41a4-8d56-19b071710ca8" Dec 10 15:39:47 crc kubenswrapper[4755]: E1210 15:39:47.344757 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/octavia-operator-controller-manager-998648c74-pxstj" podUID="10728d77-c715-4cb1-ab30-5747594a6320" Dec 10 15:39:47 crc kubenswrapper[4755]: E1210 15:39:47.344948 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3aa109bb973253ae9dcf339b9b65abbd1176cdb4be672c93e538a5f113816991\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-jzkms" podUID="f0af4059-171e-409f-8043-8f112664e01c" Dec 10 15:39:47 crc kubenswrapper[4755]: E1210 15:39:47.345345 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:961417d59f527d925ac48ff6a11de747d0493315e496e34dc83d76a1a1fff58a\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-qrhhh" podUID="15009193-27b2-4bf2-a795-f6106327e331" Dec 10 15:39:48 crc kubenswrapper[4755]: I1210 15:39:48.230226 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3fe0f8bf-8203-4cbb-b474-d00be4716ff5-webhook-certs\") pod \"openstack-operator-controller-manager-656546cb8f-64wpq\" (UID: 
\"3fe0f8bf-8203-4cbb-b474-d00be4716ff5\") " pod="openstack-operators/openstack-operator-controller-manager-656546cb8f-64wpq" Dec 10 15:39:48 crc kubenswrapper[4755]: I1210 15:39:48.230535 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3fe0f8bf-8203-4cbb-b474-d00be4716ff5-metrics-certs\") pod \"openstack-operator-controller-manager-656546cb8f-64wpq\" (UID: \"3fe0f8bf-8203-4cbb-b474-d00be4716ff5\") " pod="openstack-operators/openstack-operator-controller-manager-656546cb8f-64wpq" Dec 10 15:39:48 crc kubenswrapper[4755]: E1210 15:39:48.230415 4755 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 10 15:39:48 crc kubenswrapper[4755]: E1210 15:39:48.230695 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3fe0f8bf-8203-4cbb-b474-d00be4716ff5-webhook-certs podName:3fe0f8bf-8203-4cbb-b474-d00be4716ff5 nodeName:}" failed. No retries permitted until 2025-12-10 15:39:52.23067883 +0000 UTC m=+988.831562462 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/3fe0f8bf-8203-4cbb-b474-d00be4716ff5-webhook-certs") pod "openstack-operator-controller-manager-656546cb8f-64wpq" (UID: "3fe0f8bf-8203-4cbb-b474-d00be4716ff5") : secret "webhook-server-cert" not found Dec 10 15:39:48 crc kubenswrapper[4755]: E1210 15:39:48.230648 4755 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 10 15:39:48 crc kubenswrapper[4755]: E1210 15:39:48.230988 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3fe0f8bf-8203-4cbb-b474-d00be4716ff5-metrics-certs podName:3fe0f8bf-8203-4cbb-b474-d00be4716ff5 nodeName:}" failed. No retries permitted until 2025-12-10 15:39:52.230979968 +0000 UTC m=+988.831863600 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3fe0f8bf-8203-4cbb-b474-d00be4716ff5-metrics-certs") pod "openstack-operator-controller-manager-656546cb8f-64wpq" (UID: "3fe0f8bf-8203-4cbb-b474-d00be4716ff5") : secret "metrics-server-cert" not found Dec 10 15:39:48 crc kubenswrapper[4755]: E1210 15:39:48.365729 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-wbch9" podUID="91969126-0986-41a4-8d56-19b071710ca8" Dec 10 15:39:48 crc kubenswrapper[4755]: E1210 15:39:48.365837 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:961417d59f527d925ac48ff6a11de747d0493315e496e34dc83d76a1a1fff58a\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-qrhhh" podUID="15009193-27b2-4bf2-a795-f6106327e331" Dec 10 15:39:51 crc kubenswrapper[4755]: I1210 15:39:51.284611 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/423be682-6135-4dd2-8366-b7106adbc632-cert\") pod \"infra-operator-controller-manager-78d48bff9d-wsxsj\" (UID: \"423be682-6135-4dd2-8366-b7106adbc632\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-wsxsj" Dec 10 15:39:51 crc kubenswrapper[4755]: E1210 15:39:51.284881 4755 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 10 15:39:51 crc kubenswrapper[4755]: E1210 15:39:51.285088 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/423be682-6135-4dd2-8366-b7106adbc632-cert podName:423be682-6135-4dd2-8366-b7106adbc632 nodeName:}" failed. No retries permitted until 2025-12-10 15:39:59.285068311 +0000 UTC m=+995.885951943 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/423be682-6135-4dd2-8366-b7106adbc632-cert") pod "infra-operator-controller-manager-78d48bff9d-wsxsj" (UID: "423be682-6135-4dd2-8366-b7106adbc632") : secret "infra-operator-webhook-server-cert" not found Dec 10 15:39:51 crc kubenswrapper[4755]: I1210 15:39:51.386364 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0bcf5a92-0324-4799-be55-0e49bd060ee7-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879fh778d\" (UID: \"0bcf5a92-0324-4799-be55-0e49bd060ee7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fh778d" Dec 10 15:39:51 crc kubenswrapper[4755]: E1210 15:39:51.386704 4755 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 10 15:39:51 crc kubenswrapper[4755]: E1210 15:39:51.386831 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0bcf5a92-0324-4799-be55-0e49bd060ee7-cert podName:0bcf5a92-0324-4799-be55-0e49bd060ee7 nodeName:}" failed. No retries permitted until 2025-12-10 15:39:59.386793632 +0000 UTC m=+995.987677264 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0bcf5a92-0324-4799-be55-0e49bd060ee7-cert") pod "openstack-baremetal-operator-controller-manager-84b575879fh778d" (UID: "0bcf5a92-0324-4799-be55-0e49bd060ee7") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 10 15:39:52 crc kubenswrapper[4755]: I1210 15:39:52.304120 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3fe0f8bf-8203-4cbb-b474-d00be4716ff5-metrics-certs\") pod \"openstack-operator-controller-manager-656546cb8f-64wpq\" (UID: \"3fe0f8bf-8203-4cbb-b474-d00be4716ff5\") " pod="openstack-operators/openstack-operator-controller-manager-656546cb8f-64wpq" Dec 10 15:39:52 crc kubenswrapper[4755]: I1210 15:39:52.304433 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3fe0f8bf-8203-4cbb-b474-d00be4716ff5-webhook-certs\") pod \"openstack-operator-controller-manager-656546cb8f-64wpq\" (UID: \"3fe0f8bf-8203-4cbb-b474-d00be4716ff5\") " pod="openstack-operators/openstack-operator-controller-manager-656546cb8f-64wpq" Dec 10 15:39:52 crc kubenswrapper[4755]: E1210 15:39:52.304576 4755 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 10 15:39:52 crc kubenswrapper[4755]: E1210 15:39:52.304626 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3fe0f8bf-8203-4cbb-b474-d00be4716ff5-webhook-certs podName:3fe0f8bf-8203-4cbb-b474-d00be4716ff5 nodeName:}" failed. No retries permitted until 2025-12-10 15:40:00.304612035 +0000 UTC m=+996.905495667 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/3fe0f8bf-8203-4cbb-b474-d00be4716ff5-webhook-certs") pod "openstack-operator-controller-manager-656546cb8f-64wpq" (UID: "3fe0f8bf-8203-4cbb-b474-d00be4716ff5") : secret "webhook-server-cert" not found Dec 10 15:39:52 crc kubenswrapper[4755]: E1210 15:39:52.305018 4755 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 10 15:39:52 crc kubenswrapper[4755]: E1210 15:39:52.305043 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3fe0f8bf-8203-4cbb-b474-d00be4716ff5-metrics-certs podName:3fe0f8bf-8203-4cbb-b474-d00be4716ff5 nodeName:}" failed. No retries permitted until 2025-12-10 15:40:00.305035636 +0000 UTC m=+996.905919268 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3fe0f8bf-8203-4cbb-b474-d00be4716ff5-metrics-certs") pod "openstack-operator-controller-manager-656546cb8f-64wpq" (UID: "3fe0f8bf-8203-4cbb-b474-d00be4716ff5") : secret "metrics-server-cert" not found Dec 10 15:39:53 crc kubenswrapper[4755]: I1210 15:39:53.280999 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fd766"] Dec 10 15:39:53 crc kubenswrapper[4755]: I1210 15:39:53.282515 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fd766" Dec 10 15:39:53 crc kubenswrapper[4755]: I1210 15:39:53.305368 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fd766"] Dec 10 15:39:53 crc kubenswrapper[4755]: I1210 15:39:53.426174 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb9cd508-2b78-45ac-93b5-9813f44c7b6d-catalog-content\") pod \"certified-operators-fd766\" (UID: \"fb9cd508-2b78-45ac-93b5-9813f44c7b6d\") " pod="openshift-marketplace/certified-operators-fd766" Dec 10 15:39:53 crc kubenswrapper[4755]: I1210 15:39:53.426297 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb9cd508-2b78-45ac-93b5-9813f44c7b6d-utilities\") pod \"certified-operators-fd766\" (UID: \"fb9cd508-2b78-45ac-93b5-9813f44c7b6d\") " pod="openshift-marketplace/certified-operators-fd766" Dec 10 15:39:53 crc kubenswrapper[4755]: I1210 15:39:53.426430 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vt5q2\" (UniqueName: \"kubernetes.io/projected/fb9cd508-2b78-45ac-93b5-9813f44c7b6d-kube-api-access-vt5q2\") pod \"certified-operators-fd766\" (UID: \"fb9cd508-2b78-45ac-93b5-9813f44c7b6d\") " pod="openshift-marketplace/certified-operators-fd766" Dec 10 15:39:53 crc kubenswrapper[4755]: I1210 15:39:53.528190 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vt5q2\" (UniqueName: \"kubernetes.io/projected/fb9cd508-2b78-45ac-93b5-9813f44c7b6d-kube-api-access-vt5q2\") pod \"certified-operators-fd766\" (UID: \"fb9cd508-2b78-45ac-93b5-9813f44c7b6d\") " pod="openshift-marketplace/certified-operators-fd766" Dec 10 15:39:53 crc kubenswrapper[4755]: I1210 15:39:53.528258 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/fb9cd508-2b78-45ac-93b5-9813f44c7b6d-catalog-content\") pod \"certified-operators-fd766\" (UID: \"fb9cd508-2b78-45ac-93b5-9813f44c7b6d\") " pod="openshift-marketplace/certified-operators-fd766" Dec 10 15:39:53 crc kubenswrapper[4755]: I1210 15:39:53.528342 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb9cd508-2b78-45ac-93b5-9813f44c7b6d-utilities\") pod \"certified-operators-fd766\" (UID: \"fb9cd508-2b78-45ac-93b5-9813f44c7b6d\") " pod="openshift-marketplace/certified-operators-fd766" Dec 10 15:39:53 crc kubenswrapper[4755]: I1210 15:39:53.528963 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb9cd508-2b78-45ac-93b5-9813f44c7b6d-utilities\") pod \"certified-operators-fd766\" (UID: \"fb9cd508-2b78-45ac-93b5-9813f44c7b6d\") " pod="openshift-marketplace/certified-operators-fd766" Dec 10 15:39:53 crc kubenswrapper[4755]: I1210 15:39:53.529017 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb9cd508-2b78-45ac-93b5-9813f44c7b6d-catalog-content\") pod \"certified-operators-fd766\" (UID: \"fb9cd508-2b78-45ac-93b5-9813f44c7b6d\") " pod="openshift-marketplace/certified-operators-fd766" Dec 10 15:39:53 crc kubenswrapper[4755]: I1210 15:39:53.548272 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vt5q2\" (UniqueName: \"kubernetes.io/projected/fb9cd508-2b78-45ac-93b5-9813f44c7b6d-kube-api-access-vt5q2\") pod \"certified-operators-fd766\" (UID: \"fb9cd508-2b78-45ac-93b5-9813f44c7b6d\") " pod="openshift-marketplace/certified-operators-fd766" Dec 10 15:39:53 crc kubenswrapper[4755]: I1210 15:39:53.606988 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fd766" Dec 10 15:39:59 crc kubenswrapper[4755]: I1210 15:39:59.329116 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/423be682-6135-4dd2-8366-b7106adbc632-cert\") pod \"infra-operator-controller-manager-78d48bff9d-wsxsj\" (UID: \"423be682-6135-4dd2-8366-b7106adbc632\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-wsxsj" Dec 10 15:39:59 crc kubenswrapper[4755]: I1210 15:39:59.338802 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/423be682-6135-4dd2-8366-b7106adbc632-cert\") pod \"infra-operator-controller-manager-78d48bff9d-wsxsj\" (UID: \"423be682-6135-4dd2-8366-b7106adbc632\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-wsxsj" Dec 10 15:39:59 crc kubenswrapper[4755]: I1210 15:39:59.430810 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0bcf5a92-0324-4799-be55-0e49bd060ee7-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879fh778d\" (UID: \"0bcf5a92-0324-4799-be55-0e49bd060ee7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fh778d" Dec 10 15:39:59 crc kubenswrapper[4755]: I1210 15:39:59.434699 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0bcf5a92-0324-4799-be55-0e49bd060ee7-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879fh778d\" (UID: \"0bcf5a92-0324-4799-be55-0e49bd060ee7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fh778d" Dec 10 15:39:59 crc kubenswrapper[4755]: I1210 15:39:59.501705 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-wsxsj" Dec 10 15:39:59 crc kubenswrapper[4755]: I1210 15:39:59.628961 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fh778d" Dec 10 15:40:00 crc kubenswrapper[4755]: I1210 15:40:00.347191 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3fe0f8bf-8203-4cbb-b474-d00be4716ff5-webhook-certs\") pod \"openstack-operator-controller-manager-656546cb8f-64wpq\" (UID: \"3fe0f8bf-8203-4cbb-b474-d00be4716ff5\") " pod="openstack-operators/openstack-operator-controller-manager-656546cb8f-64wpq" Dec 10 15:40:00 crc kubenswrapper[4755]: I1210 15:40:00.347602 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3fe0f8bf-8203-4cbb-b474-d00be4716ff5-metrics-certs\") pod \"openstack-operator-controller-manager-656546cb8f-64wpq\" (UID: \"3fe0f8bf-8203-4cbb-b474-d00be4716ff5\") " pod="openstack-operators/openstack-operator-controller-manager-656546cb8f-64wpq" Dec 10 15:40:00 crc kubenswrapper[4755]: I1210 15:40:00.369517 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3fe0f8bf-8203-4cbb-b474-d00be4716ff5-webhook-certs\") pod \"openstack-operator-controller-manager-656546cb8f-64wpq\" (UID: \"3fe0f8bf-8203-4cbb-b474-d00be4716ff5\") " pod="openstack-operators/openstack-operator-controller-manager-656546cb8f-64wpq" Dec 10 15:40:00 crc kubenswrapper[4755]: I1210 15:40:00.371002 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3fe0f8bf-8203-4cbb-b474-d00be4716ff5-metrics-certs\") pod \"openstack-operator-controller-manager-656546cb8f-64wpq\" (UID: \"3fe0f8bf-8203-4cbb-b474-d00be4716ff5\") " pod="openstack-operators/openstack-operator-controller-manager-656546cb8f-64wpq" Dec 10 15:40:00 crc kubenswrapper[4755]: I1210 15:40:00.434950 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-656546cb8f-64wpq" Dec 10 15:40:00 crc kubenswrapper[4755]: E1210 15:40:00.788202 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:900050d3501c0785b227db34b89883efe68247816e5c7427cacb74f8aa10605a" Dec 10 15:40:00 crc kubenswrapper[4755]: E1210 15:40:00.788385 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:900050d3501c0785b227db34b89883efe68247816e5c7427cacb74f8aa10605a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rlwqg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-697fb699cf-z5frc_openstack-operators(05b2a283-f9ce-4cbb-a92f-a22a227de36d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 15:40:01 crc kubenswrapper[4755]: E1210 15:40:01.729912 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:981b6a8f95934a86c5f10ef6e198b07265aeba7f11cf84b9ccd13dfaf06f3ca3" Dec 10 15:40:01 crc kubenswrapper[4755]: E1210 15:40:01.730742 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:981b6a8f95934a86c5f10ef6e198b07265aeba7f11cf84b9ccd13dfaf06f3ca3,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5xzcn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-6c677c69b-ljn8k_openstack-operators(313bb539-c9d7-4bb0-a5e3-3a36c45c0f79): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 15:40:02 crc kubenswrapper[4755]: E1210 15:40:02.290307 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557" Dec 10 15:40:02 crc kubenswrapper[4755]: E1210 15:40:02.290659 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 
-3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fq7pt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-5fdfd5b6b5-bgxgp_openstack-operators(46715591-f787-42bc-9871-a51b08963893): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 15:40:02 crc kubenswrapper[4755]: E1210 15:40:02.933623 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59" Dec 10 15:40:02 crc kubenswrapper[4755]: E1210 15:40:02.933872 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hl44w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-b6456fdb6-xpv7s_openstack-operators(67e9d86f-4e93-4e78-a9d5-d8023721414d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 15:40:04 crc kubenswrapper[4755]: E1210 15:40:04.174131 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f" Dec 10 15:40:04 crc kubenswrapper[4755]: E1210 15:40:04.174357 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lkv2b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-78f8948974-lbj4z_openstack-operators(ebb31199-21f8-4493-8725-1c5e1aa70d66): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 15:40:04 crc kubenswrapper[4755]: E1210 15:40:04.893790 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:c4abfc148600dfa85915f3dc911d988ea2335f26cb6b8d749fe79bfe53e5e429" Dec 10 15:40:04 crc kubenswrapper[4755]: E1210 15:40:04.894340 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:c4abfc148600dfa85915f3dc911d988ea2335f26cb6b8d749fe79bfe53e5e429,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mrh2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-5f64f6f8bb-vqgpv_openstack-operators(ab09fdaf-b326-4221-a24c-9415dabdbcdd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 15:40:07 crc kubenswrapper[4755]: E1210 15:40:07.188590 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7" Dec 10 15:40:07 crc kubenswrapper[4755]: E1210 15:40:07.188850 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tbhzn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-7765d96ddf-t7zjt_openstack-operators(8bc636b5-ac4d-4b4e-8b50-102a72e6ee2a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 15:40:08 crc kubenswrapper[4755]: E1210 15:40:08.214311 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670" Dec 10 15:40:08 crc kubenswrapper[4755]: E1210 15:40:08.215217 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jhwxs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-697bc559fc-xtr7m_openstack-operators(d3b1545f-1f46-4869-bc92-cdc7e5b1fc4c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 15:40:08 crc kubenswrapper[4755]: E1210 15:40:08.335486 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.45:5001/openstack-k8s-operators/telemetry-operator:c4794e7165126ca78a1af546bb4ba50c90b5c4e1" Dec 10 15:40:08 crc kubenswrapper[4755]: E1210 15:40:08.335576 4755 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.45:5001/openstack-k8s-operators/telemetry-operator:c4794e7165126ca78a1af546bb4ba50c90b5c4e1" Dec 10 15:40:08 crc kubenswrapper[4755]: E1210 15:40:08.336243 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.45:5001/openstack-k8s-operators/telemetry-operator:c4794e7165126ca78a1af546bb4ba50c90b5c4e1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-r9d7d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-5f8d644d5d-87jfq_openstack-operators(e36da2bb-2cc5-4a66-97f3-ace6966152fb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 15:40:09 crc kubenswrapper[4755]: E1210 15:40:09.397159 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Dec 10 15:40:09 crc kubenswrapper[4755]: E1210 15:40:09.398139 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4jbdt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-zz7fk_openstack-operators(fbaec88b-8593-468f-aefc-777f8140504d): ErrImagePull: rpc error: code = Canceled desc = copying config: context 
canceled" logger="UnhandledError" Dec 10 15:40:09 crc kubenswrapper[4755]: E1210 15:40:09.399350 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zz7fk" podUID="fbaec88b-8593-468f-aefc-777f8140504d" Dec 10 15:40:09 crc kubenswrapper[4755]: E1210 15:40:09.541852 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zz7fk" podUID="fbaec88b-8593-468f-aefc-777f8140504d" Dec 10 15:40:15 crc kubenswrapper[4755]: I1210 15:40:15.118512 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fd766"] Dec 10 15:40:15 crc kubenswrapper[4755]: I1210 15:40:15.123908 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-656546cb8f-64wpq"] Dec 10 15:40:15 crc kubenswrapper[4755]: I1210 15:40:15.198370 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-78d48bff9d-wsxsj"] Dec 10 15:40:15 crc kubenswrapper[4755]: I1210 15:40:15.207565 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fh778d"] Dec 10 15:40:15 crc kubenswrapper[4755]: W1210 15:40:15.512205 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3fe0f8bf_8203_4cbb_b474_d00be4716ff5.slice/crio-1b49dd6213c9b6ba1bbefb6cd38b1ee80591042ae111925695b3cc644624afd1 WatchSource:0}: Error finding container 1b49dd6213c9b6ba1bbefb6cd38b1ee80591042ae111925695b3cc644624afd1: Status 404 returned error can't find the container with id 1b49dd6213c9b6ba1bbefb6cd38b1ee80591042ae111925695b3cc644624afd1 Dec 10 15:40:15 crc kubenswrapper[4755]: I1210 15:40:15.612840 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fd766" event={"ID":"fb9cd508-2b78-45ac-93b5-9813f44c7b6d","Type":"ContainerStarted","Data":"24e5009085ad4d91f330854c3d742bf9a8aa1e55274e53393fc35a334a2d8c4f"} Dec 10 15:40:15 crc kubenswrapper[4755]: I1210 15:40:15.620536 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fh778d" event={"ID":"0bcf5a92-0324-4799-be55-0e49bd060ee7","Type":"ContainerStarted","Data":"220c691b775e903aa162acc65b2ee14f7e12578429ff4edc2721a71a8f0d7d15"} Dec 10 15:40:15 crc kubenswrapper[4755]: I1210 15:40:15.649727 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-wsxsj" event={"ID":"423be682-6135-4dd2-8366-b7106adbc632","Type":"ContainerStarted","Data":"e654c9b3d7dae98de21e524fd75fbfaf1a45079cb3924fde2efabe35ef2c2776"} Dec 10 15:40:15 crc kubenswrapper[4755]: I1210 15:40:15.663224 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-656546cb8f-64wpq" 
event={"ID":"3fe0f8bf-8203-4cbb-b474-d00be4716ff5","Type":"ContainerStarted","Data":"1b49dd6213c9b6ba1bbefb6cd38b1ee80591042ae111925695b3cc644624afd1"} Dec 10 15:40:16 crc kubenswrapper[4755]: I1210 15:40:16.678985 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-bs4zx" event={"ID":"a07fdc07-16fa-4834-b370-378b543dde9f","Type":"ContainerStarted","Data":"8dedd5468fa96b3171a96a53410dd387a1dad9de56ada69d9dd56a070013afbe"} Dec 10 15:40:16 crc kubenswrapper[4755]: I1210 15:40:16.681363 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-5kfxq" event={"ID":"2a918143-c2cf-4c73-b547-c8d0d9c6e2a6","Type":"ContainerStarted","Data":"1f891fae0a37e0b48d5170bd8f639e76150e2639c9ad18087ab159dc3888a199"} Dec 10 15:40:16 crc kubenswrapper[4755]: I1210 15:40:16.682658 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-rjdmk" event={"ID":"8bcd3e35-31c8-4dbc-96e1-e6f4b486f082","Type":"ContainerStarted","Data":"62ab4482156d916019407c7acab40a3b85fb5346eb909fdd7ee5fe10c59cf522"} Dec 10 15:40:16 crc kubenswrapper[4755]: I1210 15:40:16.684332 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-pxstj" event={"ID":"10728d77-c715-4cb1-ab30-5747594a6320","Type":"ContainerStarted","Data":"fe3bb02c1ce9c84b8454969f230f2a4e6e87e9aa0fe593eb23dde5c9bf4e796a"} Dec 10 15:40:16 crc kubenswrapper[4755]: I1210 15:40:16.685476 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-wbch9" event={"ID":"91969126-0986-41a4-8d56-19b071710ca8","Type":"ContainerStarted","Data":"50e63676638330bae290e1de9fef8a4b2aadba0b447509ac553c18bb107eaed3"} Dec 10 15:40:16 crc kubenswrapper[4755]: I1210 15:40:16.686728 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-bk4xd" event={"ID":"83bd67ec-3fa0-4f1e-9f87-7005f731f7e4","Type":"ContainerStarted","Data":"8a8dce177dd558447a03cbfab449598da8b568102efec87f5643697dd14643be"} Dec 10 15:40:16 crc kubenswrapper[4755]: I1210 15:40:16.687832 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-6vjxq" event={"ID":"e4692dc7-ecb8-45b5-be03-9990c0a32b2a","Type":"ContainerStarted","Data":"ee1cf6c633a7db036d8dfb2273649cfc7c719292b801ad26465f9a77fbc6f9f7"} Dec 10 15:40:16 crc kubenswrapper[4755]: I1210 15:40:16.689675 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-967d97867-h8w5g" event={"ID":"359a4730-4858-4677-9977-a9d6cea57122","Type":"ContainerStarted","Data":"ebc7ce2dd270115932d76c047ea8acd1f81c14e5156c93b13b13d9003b66d6f5"} Dec 10 15:40:16 crc kubenswrapper[4755]: I1210 15:40:16.691209 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-qrhhh" event={"ID":"15009193-27b2-4bf2-a795-f6106327e331","Type":"ContainerStarted","Data":"b440c37d41183f999fc2fb75249fd29c953df619489bd1fd0b58abffbb172aa7"} Dec 10 15:40:16 crc kubenswrapper[4755]: I1210 15:40:16.717070 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-jzkms" 
event={"ID":"f0af4059-171e-409f-8043-8f112664e01c","Type":"ContainerStarted","Data":"c45c27294c90d89bd0d1a29a1fb61fe8f40eba11147a135093d4cf9b7fb5d39d"} Dec 10 15:40:17 crc kubenswrapper[4755]: I1210 15:40:17.745858 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-656546cb8f-64wpq" event={"ID":"3fe0f8bf-8203-4cbb-b474-d00be4716ff5","Type":"ContainerStarted","Data":"1c732ba2d08a65d40e2465024b061d7bc43a0c740e950814df65d4791237b575"} Dec 10 15:40:17 crc kubenswrapper[4755]: I1210 15:40:17.746942 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-656546cb8f-64wpq" Dec 10 15:40:17 crc kubenswrapper[4755]: I1210 15:40:17.787803 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-656546cb8f-64wpq" podStartSLOduration=34.787784348 podStartE2EDuration="34.787784348s" podCreationTimestamp="2025-12-10 15:39:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:40:17.784078771 +0000 UTC m=+1014.384962423" watchObservedRunningTime="2025-12-10 15:40:17.787784348 +0000 UTC m=+1014.388667980" Dec 10 15:40:30 crc kubenswrapper[4755]: I1210 15:40:30.440376 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-656546cb8f-64wpq" Dec 10 15:40:31 crc kubenswrapper[4755]: E1210 15:40:31.823263 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 10 15:40:31 crc kubenswrapper[4755]: E1210 15:40:31.823663 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rlwqg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-697fb699cf-z5frc_openstack-operators(05b2a283-f9ce-4cbb-a92f-a22a227de36d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" 
logger="UnhandledError" Dec 10 15:40:31 crc kubenswrapper[4755]: E1210 15:40:31.824747 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-z5frc" podUID="05b2a283-f9ce-4cbb-a92f-a22a227de36d" Dec 10 15:40:31 crc kubenswrapper[4755]: E1210 15:40:31.849309 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 10 15:40:31 crc kubenswrapper[4755]: E1210 15:40:31.849483 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mrh2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-5f64f6f8bb-vqgpv_openstack-operators(ab09fdaf-b326-4221-a24c-9415dabdbcdd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 15:40:31 crc kubenswrapper[4755]: E1210 15:40:31.850705 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-vqgpv" podUID="ab09fdaf-b326-4221-a24c-9415dabdbcdd" Dec 10 15:40:31 crc kubenswrapper[4755]: E1210 15:40:31.874756 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 10 15:40:31 crc kubenswrapper[4755]: E1210 15:40:31.874924 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tbhzn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-7765d96ddf-t7zjt_openstack-operators(8bc636b5-ac4d-4b4e-8b50-102a72e6ee2a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 15:40:31 crc kubenswrapper[4755]: E1210 15:40:31.876171 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-t7zjt" podUID="8bc636b5-ac4d-4b4e-8b50-102a72e6ee2a" Dec 10 15:40:31 crc kubenswrapper[4755]: E1210 15:40:31.914386 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 10 15:40:31 crc kubenswrapper[4755]: E1210 15:40:31.914581 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5xzcn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-6c677c69b-ljn8k_openstack-operators(313bb539-c9d7-4bb0-a5e3-3a36c45c0f79): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 15:40:31 crc kubenswrapper[4755]: E1210 15:40:31.915795 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-ljn8k" podUID="313bb539-c9d7-4bb0-a5e3-3a36c45c0f79" Dec 10 15:40:31 crc kubenswrapper[4755]: E1210 15:40:31.971749 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 10 15:40:31 crc kubenswrapper[4755]: E1210 15:40:31.971922 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jhwxs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-697bc559fc-xtr7m_openstack-operators(d3b1545f-1f46-4869-bc92-cdc7e5b1fc4c): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 15:40:31 crc kubenswrapper[4755]: E1210 15:40:31.973865 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-xtr7m" podUID="d3b1545f-1f46-4869-bc92-cdc7e5b1fc4c" Dec 10 15:40:31 crc kubenswrapper[4755]: I1210 15:40:31.981418 4755 generic.go:334] "Generic (PLEG): container finished" podID="fb9cd508-2b78-45ac-93b5-9813f44c7b6d" containerID="9f6a605a7328a05b9918c9166757c4ebc5f95a1e0500796bf2d3543bba7c0e56" exitCode=0 Dec 10 15:40:31 crc kubenswrapper[4755]: I1210 15:40:31.982487 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fd766" event={"ID":"fb9cd508-2b78-45ac-93b5-9813f44c7b6d","Type":"ContainerDied","Data":"9f6a605a7328a05b9918c9166757c4ebc5f95a1e0500796bf2d3543bba7c0e56"} Dec 10 15:40:32 crc kubenswrapper[4755]: E1210 15:40:32.078677 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 10 15:40:32 crc kubenswrapper[4755]: E1210 15:40:32.079041 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lkv2b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-78f8948974-lbj4z_openstack-operators(ebb31199-21f8-4493-8725-1c5e1aa70d66): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 15:40:32 crc kubenswrapper[4755]: E1210 15:40:32.080492 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with 
ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack-operators/placement-operator-controller-manager-78f8948974-lbj4z" podUID="ebb31199-21f8-4493-8725-1c5e1aa70d66" Dec 10 15:40:32 crc kubenswrapper[4755]: E1210 15:40:32.170397 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 10 15:40:32 crc kubenswrapper[4755]: E1210 15:40:32.170584 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-r9d7d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-5f8d644d5d-87jfq_openstack-operators(e36da2bb-2cc5-4a66-97f3-ace6966152fb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 15:40:32 crc kubenswrapper[4755]: E1210 15:40:32.174042 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack-operators/telemetry-operator-controller-manager-5f8d644d5d-87jfq" podUID="e36da2bb-2cc5-4a66-97f3-ace6966152fb" Dec 10 15:40:32 crc kubenswrapper[4755]: E1210 15:40:32.418833 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:9d539fb6b72f91cfc6200bb91b7c6dbaeab17c7711342dd3a9549c66762a2d48" Dec 10 15:40:32 crc kubenswrapper[4755]: E1210 15:40:32.419285 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:9d539fb6b72f91cfc6200bb91b7c6dbaeab17c7711342dd3a9549c66762a2d48,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-baremetal-operator-agent:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_ANSIBLEEE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_EVALUATOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-evaluator:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_NOTIFIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-notifier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_APACHE_IMAGE_URL_DEFAULT,Value:registry.redhat.io/ubi9/httpd-24:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_KEYSTONE_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-keystone-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_IPMI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_MYSQLD_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/mysqld-exporter:v0.15.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_NOTIFICATION_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-notification:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_SGCORE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/sg-core:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_BACKUP_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-backup:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_VOLUME_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-volume:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_API_IMAGE_URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_PROC_I
MAGE_URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-processor:current,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_BACKENDBIND9_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-backend-bind9:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_MDNS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-mdns:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_PRODUCER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-producer:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_UNBOUND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-unbound:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_FRR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-frr:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_ISCSID_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-iscsid:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_KEPLER_IMAGE_URL_DEFAULT,Value:quay.io/sustainable_computing_io/kepler:release-0.7.12,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_LOGROTATE_CROND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cron:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_MULTIPATHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-multipathd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_DHCP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_METADATA_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_OVN_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-ovn-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_SRIOV_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NODE_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/node-exporter:v1.5.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_OVN_BGP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-bgp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_PODMAN_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/navidys/prometheus-podman-exporter:v1.10.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_GLANCE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_CFNAPI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api-cfn:current-podified,ValueFrom:nil,},EnvVar{Name:RELAT
ED_IMAGE_HEAT_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HORIZON_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_MEMCACHED_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_REDIS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-redis:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_INSPECTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-inspector:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_NEUTRON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PXE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-pxe:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PYTHON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/ironic-python-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KEYSTONE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-keystone:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KSM_IMAGE_URL_DEFAULT,Value:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SHARE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-share:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MARIADB_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NET_UTILS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-netutils:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NEUTRON_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_NOVNC_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-novncproxy:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-an
telope-centos9/openstack-octavia-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HEALTHMANAGER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-health-manager:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HOUSEKEEPING_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-housekeeping:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_RSYSLOG_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rsyslog:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_CLIENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_MUST_GATHER_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-must-gather:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_NETWORK_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OS_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/edpm-hardened-uefi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_OVS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NORTHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-northd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_SB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PLACEMENT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_RABBITMQ_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_ACCOUNT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-account:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-container:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_OBJECT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-object:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_PROXY_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-proxy-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_TEST_TEMPEST_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_APPLIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-applier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_DECISION_ENGINE_IMA
GE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-decision-engine:current-podified,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-45rgv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-baremetal-operator-controller-manager-84b575879fh778d_openstack-operators(0bcf5a92-0324-4799-be55-0e49bd060ee7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 15:40:32 crc kubenswrapper[4755]: E1210 15:40:32.428132 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 10 15:40:32 crc kubenswrapper[4755]: E1210 15:40:32.428355 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hl44w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-b6456fdb6-xpv7s_openstack-operators(67e9d86f-4e93-4e78-a9d5-d8023721414d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 15:40:32 crc kubenswrapper[4755]: E1210 15:40:32.429700 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-xpv7s" podUID="67e9d86f-4e93-4e78-a9d5-d8023721414d" Dec 10 15:40:32 crc kubenswrapper[4755]: E1210 15:40:32.972550 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying layer: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 10 15:40:32 crc kubenswrapper[4755]: E1210 15:40:32.972813 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying layer: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 10 15:40:32 crc kubenswrapper[4755]: E1210 15:40:32.972908 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c8d7z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-5697bb5779-rjdmk_openstack-operators(8bcd3e35-31c8-4dbc-96e1-e6f4b486f082): ErrImagePull: rpc error: code = Canceled desc = copying layer: context canceled" logger="UnhandledError" Dec 10 15:40:32 crc kubenswrapper[4755]: E1210 15:40:32.972908 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sqn2b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-5b5fd79c9c-5kfxq_openstack-operators(2a918143-c2cf-4c73-b547-c8d0d9c6e2a6): ErrImagePull: rpc error: code = Canceled desc = copying layer: context canceled" logger="UnhandledError" Dec 10 15:40:32 crc kubenswrapper[4755]: E1210 15:40:32.973670 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying layer: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 10 15:40:32 crc kubenswrapper[4755]: E1210 15:40:32.973837 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true 
--v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vfxgp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-7d9dfd778-bk4xd_openstack-operators(83bd67ec-3fa0-4f1e-9f87-7005f731f7e4): ErrImagePull: rpc error: code = Canceled desc = copying layer: context canceled" logger="UnhandledError" Dec 10 15:40:32 crc kubenswrapper[4755]: E1210 15:40:32.973899 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying layer: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 10 15:40:32 crc kubenswrapper[4755]: E1210 15:40:32.973951 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ggq8q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-998648c74-pxstj_openstack-operators(10728d77-c715-4cb1-ab30-5747594a6320): ErrImagePull: rpc error: code = Canceled desc = copying layer: context canceled" logger="UnhandledError" Dec 10 15:40:32 crc kubenswrapper[4755]: E1210 15:40:32.973989 4755 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying layer: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-rjdmk" podUID="8bcd3e35-31c8-4dbc-96e1-e6f4b486f082" Dec 10 15:40:32 crc kubenswrapper[4755]: E1210 15:40:32.974037 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying layer: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-5kfxq" podUID="2a918143-c2cf-4c73-b547-c8d0d9c6e2a6" Dec 10 15:40:32 crc kubenswrapper[4755]: E1210 15:40:32.974539 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying layer: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 10 15:40:32 crc kubenswrapper[4755]: E1210 15:40:32.974610 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lk8rs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-68c6d99b8f-bs4zx_openstack-operators(a07fdc07-16fa-4834-b370-378b543dde9f): ErrImagePull: rpc error: code = Canceled desc = copying layer: context canceled" logger="UnhandledError" Dec 10 15:40:32 crc kubenswrapper[4755]: E1210 15:40:32.975664 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying layer: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-bs4zx" podUID="a07fdc07-16fa-4834-b370-378b543dde9f" Dec 10 15:40:32 crc kubenswrapper[4755]: E1210 15:40:32.975673 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying layer: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-998648c74-pxstj" podUID="10728d77-c715-4cb1-ab30-5747594a6320" Dec 10 15:40:32 crc kubenswrapper[4755]: E1210 15:40:32.975669 
4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying layer: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-bk4xd" podUID="83bd67ec-3fa0-4f1e-9f87-7005f731f7e4" Dec 10 15:40:32 crc kubenswrapper[4755]: E1210 15:40:32.976179 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying layer: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 10 15:40:32 crc kubenswrapper[4755]: E1210 15:40:32.976448 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qf7lj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-79c8c4686c-6vjxq_openstack-operators(e4692dc7-ecb8-45b5-be03-9990c0a32b2a): ErrImagePull: rpc error: code = Canceled desc = copying layer: context canceled" logger="UnhandledError" Dec 10 15:40:32 crc kubenswrapper[4755]: E1210 15:40:32.976524 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying layer: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 10 15:40:32 crc kubenswrapper[4755]: E1210 15:40:32.976613 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xrn8b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-967d97867-h8w5g_openstack-operators(359a4730-4858-4677-9977-a9d6cea57122): ErrImagePull: rpc error: code = Canceled desc = copying layer: context canceled" logger="UnhandledError" Dec 10 15:40:32 crc kubenswrapper[4755]: E1210 15:40:32.978043 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying layer: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 10 15:40:32 crc kubenswrapper[4755]: E1210 15:40:32.978092 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying layer: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-967d97867-h8w5g" podUID="359a4730-4858-4677-9977-a9d6cea57122" Dec 10 15:40:32 crc kubenswrapper[4755]: E1210 15:40:32.978195 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-626g8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-wbch9_openstack-operators(91969126-0986-41a4-8d56-19b071710ca8): ErrImagePull: rpc error: code = Canceled desc = copying layer: context canceled" logger="UnhandledError" Dec 10 15:40:32 crc kubenswrapper[4755]: E1210 
15:40:32.978243 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying layer: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-6vjxq" podUID="e4692dc7-ecb8-45b5-be03-9990c0a32b2a" Dec 10 15:40:32 crc kubenswrapper[4755]: E1210 15:40:32.979342 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying layer: context canceled\"" pod="openstack-operators/test-operator-controller-manager-5854674fcc-wbch9" podUID="91969126-0986-41a4-8d56-19b071710ca8" Dec 10 15:40:32 crc kubenswrapper[4755]: E1210 15:40:32.985369 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/infra-operator@sha256:ccc60d56d8efc2e91a7d8a7131eb7e06c189c32247f2a819818c084ba2e2f2ab" Dec 10 15:40:32 crc kubenswrapper[4755]: E1210 15:40:32.985583 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/infra-operator@sha256:ccc60d56d8efc2e91a7d8a7131eb7e06c189c32247f2a819818c084ba2e2f2ab,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{600 -3} {} 600m DecimalSI},memory: {{2147483648 0} {} 2Gi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{536870912 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mvst2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infra-operator-controller-manager-78d48bff9d-wsxsj_openstack-operators(423be682-6135-4dd2-8366-b7106adbc632): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 15:40:32 crc kubenswrapper[4755]: E1210 15:40:32.989452 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 10 15:40:32 crc kubenswrapper[4755]: E1210 15:40:32.989679 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fq7pt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-5fdfd5b6b5-bgxgp_openstack-operators(46715591-f787-42bc-9871-a51b08963893): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 15:40:32 crc kubenswrapper[4755]: I1210 15:40:32.990443 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-5kfxq" Dec 10 15:40:32 crc kubenswrapper[4755]: I1210 15:40:32.991158 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-6vjxq" Dec 10 15:40:32 crc kubenswrapper[4755]: I1210 15:40:32.991195 4755 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-rjdmk" Dec 10 15:40:32 crc kubenswrapper[4755]: E1210 15:40:32.991493 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-bgxgp" podUID="46715591-f787-42bc-9871-a51b08963893" Dec 10 15:40:32 crc kubenswrapper[4755]: I1210 15:40:32.991863 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-bs4zx" Dec 10 15:40:32 crc kubenswrapper[4755]: I1210 15:40:32.991910 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5854674fcc-wbch9" Dec 10 15:40:32 crc kubenswrapper[4755]: I1210 15:40:32.991927 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-998648c74-pxstj" Dec 10 15:40:32 crc kubenswrapper[4755]: I1210 15:40:32.991994 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-967d97867-h8w5g" Dec 10 15:40:32 crc kubenswrapper[4755]: I1210 15:40:32.992151 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-bk4xd" Dec 10 15:40:32 crc kubenswrapper[4755]: I1210 15:40:32.994828 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-bk4xd" Dec 10 15:40:32 crc kubenswrapper[4755]: I1210 15:40:32.996334 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5854674fcc-wbch9" Dec 10 15:40:32 crc kubenswrapper[4755]: I1210 15:40:32.996375 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-6vjxq" Dec 10 15:40:32 crc kubenswrapper[4755]: I1210 15:40:32.997714 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-5kfxq" Dec 10 15:40:32 crc kubenswrapper[4755]: I1210 15:40:32.997884 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-bs4zx" Dec 10 15:40:32 crc kubenswrapper[4755]: I1210 15:40:32.998059 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-967d97867-h8w5g" Dec 10 15:40:32 crc kubenswrapper[4755]: I1210 15:40:32.999146 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-rjdmk" Dec 10 15:40:33 crc kubenswrapper[4755]: I1210 15:40:33.000971 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-998648c74-pxstj" Dec 10 15:40:33 crc kubenswrapper[4755]: E1210 15:40:33.017393 4755 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-bk4xd" podUID="83bd67ec-3fa0-4f1e-9f87-7005f731f7e4" Dec 10 15:40:33 crc kubenswrapper[4755]: E1210 15:40:33.017395 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-967d97867-h8w5g" podUID="359a4730-4858-4677-9977-a9d6cea57122" Dec 10 15:40:33 crc kubenswrapper[4755]: E1210 15:40:33.017680 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-6vjxq" podUID="e4692dc7-ecb8-45b5-be03-9990c0a32b2a" Dec 10 15:40:33 crc kubenswrapper[4755]: E1210 15:40:33.017949 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-998648c74-pxstj" podUID="10728d77-c715-4cb1-ab30-5747594a6320" Dec 10 15:40:33 crc kubenswrapper[4755]: E1210 15:40:33.018445 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-bs4zx" podUID="a07fdc07-16fa-4834-b370-378b543dde9f" Dec 10 15:40:33 crc kubenswrapper[4755]: E1210 15:40:33.049119 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/test-operator-controller-manager-5854674fcc-wbch9" podUID="91969126-0986-41a4-8d56-19b071710ca8" Dec 10 15:40:33 crc kubenswrapper[4755]: E1210 15:40:33.049131 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-rjdmk" podUID="8bcd3e35-31c8-4dbc-96e1-e6f4b486f082" Dec 10 15:40:33 crc kubenswrapper[4755]: E1210 15:40:33.049309 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-5kfxq" podUID="2a918143-c2cf-4c73-b547-c8d0d9c6e2a6" Dec 10 15:40:33 crc kubenswrapper[4755]: E1210 15:40:33.744002 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fh778d" podUID="0bcf5a92-0324-4799-be55-0e49bd060ee7" Dec 10 15:40:34 crc kubenswrapper[4755]: I1210 15:40:34.014825 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zz7fk" event={"ID":"fbaec88b-8593-468f-aefc-777f8140504d","Type":"ContainerStarted","Data":"5d3559aae0bf35a58294df185356ec684164195f97418718caea05627d77084a"} Dec 10 15:40:34 crc kubenswrapper[4755]: I1210 15:40:34.018573 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-jzkms" event={"ID":"f0af4059-171e-409f-8043-8f112664e01c","Type":"ContainerStarted","Data":"27938de33e274930abf6c0a62dfdaf65ad17c097ab92cd087e0a209b8c945e34"} Dec 10 15:40:34 crc kubenswrapper[4755]: I1210 15:40:34.019305 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-jzkms" Dec 10 15:40:34 crc kubenswrapper[4755]: I1210 15:40:34.024744 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-jzkms" Dec 10 15:40:34 crc kubenswrapper[4755]: I1210 15:40:34.029891 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-vqgpv" event={"ID":"ab09fdaf-b326-4221-a24c-9415dabdbcdd","Type":"ContainerStarted","Data":"fdc1f4a5a4c304abf019640c7f7ce142724faa517ca6596b27e90ebea696b4c2"} Dec 10 15:40:34 crc kubenswrapper[4755]: I1210 15:40:34.035864 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-ljn8k" event={"ID":"313bb539-c9d7-4bb0-a5e3-3a36c45c0f79","Type":"ContainerStarted","Data":"142c812b31071040894c895cc2841449ed45ad2d99def22320b78aec2a11f00f"} Dec 10 15:40:34 crc kubenswrapper[4755]: I1210 15:40:34.053525 4755 generic.go:334] "Generic (PLEG): container finished" podID="fb9cd508-2b78-45ac-93b5-9813f44c7b6d" containerID="b0a2efbdde2b745683def1b2a7a8e58b04e9530f40cf0aa1ea1a7692dd4bd3f1" exitCode=0 Dec 10 15:40:34 crc kubenswrapper[4755]: I1210 15:40:34.053610 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fd766" event={"ID":"fb9cd508-2b78-45ac-93b5-9813f44c7b6d","Type":"ContainerDied","Data":"b0a2efbdde2b745683def1b2a7a8e58b04e9530f40cf0aa1ea1a7692dd4bd3f1"} Dec 10 15:40:34 crc kubenswrapper[4755]: I1210 15:40:34.057877 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zz7fk" podStartSLOduration=3.010799795 podStartE2EDuration="50.057861109s" podCreationTimestamp="2025-12-10 15:39:44 +0000 UTC" firstStartedPulling="2025-12-10 15:39:46.168832193 +0000 UTC m=+982.769715825" lastFinishedPulling="2025-12-10 15:40:33.215893507 +0000 UTC m=+1029.816777139" observedRunningTime="2025-12-10 15:40:34.053788804 +0000 UTC m=+1030.654672446" watchObservedRunningTime="2025-12-10 15:40:34.057861109 +0000 UTC m=+1030.658744751" Dec 10 15:40:34 crc kubenswrapper[4755]: I1210 15:40:34.065967 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-z5frc" event={"ID":"05b2a283-f9ce-4cbb-a92f-a22a227de36d","Type":"ContainerStarted","Data":"7ba26a645686ca2e1a464d8427e1ff8b3af3a184a9a601d560690d07c536fc96"} Dec 10 15:40:34 crc 
kubenswrapper[4755]: I1210 15:40:34.066001 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-z5frc" event={"ID":"05b2a283-f9ce-4cbb-a92f-a22a227de36d","Type":"ContainerStarted","Data":"991e96ca8ac6bf0be45cf30fa97a8d428c4f946d3d4c3e386fccc476e196782c"} Dec 10 15:40:34 crc kubenswrapper[4755]: I1210 15:40:34.066545 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-z5frc" Dec 10 15:40:34 crc kubenswrapper[4755]: I1210 15:40:34.076660 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fh778d" event={"ID":"0bcf5a92-0324-4799-be55-0e49bd060ee7","Type":"ContainerStarted","Data":"7af9372240f6e5ca896a88adb2d41e6ac3aae620f89fc88da4ca6e3c682e7635"} Dec 10 15:40:34 crc kubenswrapper[4755]: I1210 15:40:34.084115 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-qrhhh" event={"ID":"15009193-27b2-4bf2-a795-f6106327e331","Type":"ContainerStarted","Data":"cf739f6d928b342f42be9eef87ce8b10b016a4f407a0a2dcdad9faa13fad4d4b"} Dec 10 15:40:34 crc kubenswrapper[4755]: I1210 15:40:34.085435 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-qrhhh" Dec 10 15:40:34 crc kubenswrapper[4755]: I1210 15:40:34.088563 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5f8d644d5d-87jfq" event={"ID":"e36da2bb-2cc5-4a66-97f3-ace6966152fb","Type":"ContainerStarted","Data":"2e2960fbfcaf6b2c753b7ae46457f9f91c956678f631783d45c216cf6e06fb47"} Dec 10 15:40:34 crc kubenswrapper[4755]: I1210 15:40:34.090991 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-qrhhh" Dec 10 15:40:34 crc kubenswrapper[4755]: E1210 15:40:34.105763 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:9d539fb6b72f91cfc6200bb91b7c6dbaeab17c7711342dd3a9549c66762a2d48\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fh778d" podUID="0bcf5a92-0324-4799-be55-0e49bd060ee7" Dec 10 15:40:34 crc kubenswrapper[4755]: I1210 15:40:34.114839 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-xtr7m" event={"ID":"d3b1545f-1f46-4869-bc92-cdc7e5b1fc4c","Type":"ContainerStarted","Data":"726cc8f7765a7a298e262b5c2bda7dd6b2521bf6f3813658e5e1b1dfd2750ea6"} Dec 10 15:40:34 crc kubenswrapper[4755]: I1210 15:40:34.199070 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-jzkms" podStartSLOduration=4.016413162 podStartE2EDuration="51.199053356s" podCreationTimestamp="2025-12-10 15:39:43 +0000 UTC" firstStartedPulling="2025-12-10 15:39:46.087612604 +0000 UTC m=+982.688496236" lastFinishedPulling="2025-12-10 15:40:33.270252798 +0000 UTC m=+1029.871136430" observedRunningTime="2025-12-10 15:40:34.126958514 +0000 UTC m=+1030.727842166" watchObservedRunningTime="2025-12-10 15:40:34.199053356 +0000 UTC m=+1030.799936988" Dec 10 15:40:34 crc 
kubenswrapper[4755]: I1210 15:40:34.406144 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-z5frc" podStartSLOduration=4.732356265 podStartE2EDuration="52.406103833s" podCreationTimestamp="2025-12-10 15:39:42 +0000 UTC" firstStartedPulling="2025-12-10 15:39:45.60839137 +0000 UTC m=+982.209275002" lastFinishedPulling="2025-12-10 15:40:33.282138938 +0000 UTC m=+1029.883022570" observedRunningTime="2025-12-10 15:40:34.383068925 +0000 UTC m=+1030.983952587" watchObservedRunningTime="2025-12-10 15:40:34.406103833 +0000 UTC m=+1031.006987475" Dec 10 15:40:34 crc kubenswrapper[4755]: I1210 15:40:34.406273 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-qrhhh" podStartSLOduration=4.360738412 podStartE2EDuration="51.406268697s" podCreationTimestamp="2025-12-10 15:39:43 +0000 UTC" firstStartedPulling="2025-12-10 15:39:46.16912165 +0000 UTC m=+982.770005292" lastFinishedPulling="2025-12-10 15:40:33.214651945 +0000 UTC m=+1029.815535577" observedRunningTime="2025-12-10 15:40:34.340688073 +0000 UTC m=+1030.941571705" watchObservedRunningTime="2025-12-10 15:40:34.406268697 +0000 UTC m=+1031.007152329" Dec 10 15:40:35 crc kubenswrapper[4755]: I1210 15:40:35.199823 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-xtr7m" event={"ID":"d3b1545f-1f46-4869-bc92-cdc7e5b1fc4c","Type":"ContainerStarted","Data":"bd80494744c15db913bc85cb2d7e5f23f93f4984c51aa6b6c00bd7fff1e51c61"} Dec 10 15:40:35 crc kubenswrapper[4755]: I1210 15:40:35.200160 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-xtr7m" Dec 10 15:40:35 crc kubenswrapper[4755]: I1210 15:40:35.215733 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-bs4zx" event={"ID":"a07fdc07-16fa-4834-b370-378b543dde9f","Type":"ContainerStarted","Data":"4bef36c1fee6c9dfaeb14bc99136a7d0943c844f2cbc80751272cf012059e29c"} Dec 10 15:40:35 crc kubenswrapper[4755]: I1210 15:40:35.232487 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-xtr7m" podStartSLOduration=4.184249089 podStartE2EDuration="52.23245175s" podCreationTimestamp="2025-12-10 15:39:43 +0000 UTC" firstStartedPulling="2025-12-10 15:39:45.640912564 +0000 UTC m=+982.241796196" lastFinishedPulling="2025-12-10 15:40:33.689115225 +0000 UTC m=+1030.289998857" observedRunningTime="2025-12-10 15:40:35.225335045 +0000 UTC m=+1031.826218667" watchObservedRunningTime="2025-12-10 15:40:35.23245175 +0000 UTC m=+1031.833335382" Dec 10 15:40:35 crc kubenswrapper[4755]: I1210 15:40:35.237659 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-lbj4z" event={"ID":"ebb31199-21f8-4493-8725-1c5e1aa70d66","Type":"ContainerStarted","Data":"56846c30ed0fa14496a4f7873947fe85cf745c43cde9700e592f1b22255aaaee"} Dec 10 15:40:35 crc kubenswrapper[4755]: I1210 15:40:35.264769 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-t7zjt" 
event={"ID":"8bc636b5-ac4d-4b4e-8b50-102a72e6ee2a","Type":"ContainerStarted","Data":"9c32eed22ac7f52beb222dd5601f919adc11b34bd80f562886f50cb5011d1171"} Dec 10 15:40:35 crc kubenswrapper[4755]: I1210 15:40:35.265651 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-bs4zx" podStartSLOduration=30.636507865 podStartE2EDuration="52.265640162s" podCreationTimestamp="2025-12-10 15:39:43 +0000 UTC" firstStartedPulling="2025-12-10 15:39:45.564904271 +0000 UTC m=+982.165787903" lastFinishedPulling="2025-12-10 15:40:07.194036568 +0000 UTC m=+1003.794920200" observedRunningTime="2025-12-10 15:40:35.265160329 +0000 UTC m=+1031.866043961" watchObservedRunningTime="2025-12-10 15:40:35.265640162 +0000 UTC m=+1031.866523794" Dec 10 15:40:35 crc kubenswrapper[4755]: I1210 15:40:35.299617 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-xpv7s" event={"ID":"67e9d86f-4e93-4e78-a9d5-d8023721414d","Type":"ContainerStarted","Data":"a60dfd6967955a8d727e4e5ec77ef0d545b8d75771d9eea0c7d024f93e56055f"} Dec 10 15:40:35 crc kubenswrapper[4755]: I1210 15:40:35.327893 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-ljn8k" event={"ID":"313bb539-c9d7-4bb0-a5e3-3a36c45c0f79","Type":"ContainerStarted","Data":"3b4eeed9d606fbb166516ae8c09f691fa6a109eefbfee874e260e470912f7cb8"} Dec 10 15:40:35 crc kubenswrapper[4755]: I1210 15:40:35.328751 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-ljn8k" Dec 10 15:40:35 crc kubenswrapper[4755]: I1210 15:40:35.390508 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-vqgpv" event={"ID":"ab09fdaf-b326-4221-a24c-9415dabdbcdd","Type":"ContainerStarted","Data":"5fd677efe485a58a5c6a31f52d8a42d3bddc9e0b2c5c2dc878fd132ffd836ed2"} Dec 10 15:40:35 crc kubenswrapper[4755]: I1210 15:40:35.391128 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-vqgpv" Dec 10 15:40:35 crc kubenswrapper[4755]: I1210 15:40:35.404849 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-ljn8k" podStartSLOduration=5.130465513 podStartE2EDuration="53.404823585s" podCreationTimestamp="2025-12-10 15:39:42 +0000 UTC" firstStartedPulling="2025-12-10 15:39:45.23326597 +0000 UTC m=+981.834149602" lastFinishedPulling="2025-12-10 15:40:33.507624042 +0000 UTC m=+1030.108507674" observedRunningTime="2025-12-10 15:40:35.394737394 +0000 UTC m=+1031.995621026" watchObservedRunningTime="2025-12-10 15:40:35.404823585 +0000 UTC m=+1032.005707217" Dec 10 15:40:35 crc kubenswrapper[4755]: I1210 15:40:35.434999 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-vqgpv" podStartSLOduration=4.763729897 podStartE2EDuration="52.434981449s" podCreationTimestamp="2025-12-10 15:39:43 +0000 UTC" firstStartedPulling="2025-12-10 15:39:45.599081468 +0000 UTC m=+982.199965100" lastFinishedPulling="2025-12-10 15:40:33.27033302 +0000 UTC m=+1029.871216652" observedRunningTime="2025-12-10 15:40:35.426093638 +0000 UTC m=+1032.026977270" watchObservedRunningTime="2025-12-10 15:40:35.434981449 
+0000 UTC m=+1032.035865081" Dec 10 15:40:35 crc kubenswrapper[4755]: I1210 15:40:35.478393 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-bk4xd" event={"ID":"83bd67ec-3fa0-4f1e-9f87-7005f731f7e4","Type":"ContainerStarted","Data":"158b8898e694e57d2429f9de9205912bfb975ef65596ba9de0370b042009c553"} Dec 10 15:40:35 crc kubenswrapper[4755]: I1210 15:40:35.499868 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5f8d644d5d-87jfq" event={"ID":"e36da2bb-2cc5-4a66-97f3-ace6966152fb","Type":"ContainerStarted","Data":"c3892a3fe1e3b1b0bf60efea3bb8329f4088c29b8163602b12b04900f7a1c234"} Dec 10 15:40:35 crc kubenswrapper[4755]: I1210 15:40:35.499915 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-5f8d644d5d-87jfq" Dec 10 15:40:35 crc kubenswrapper[4755]: E1210 15:40:35.503115 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:9d539fb6b72f91cfc6200bb91b7c6dbaeab17c7711342dd3a9549c66762a2d48\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fh778d" podUID="0bcf5a92-0324-4799-be55-0e49bd060ee7" Dec 10 15:40:35 crc kubenswrapper[4755]: I1210 15:40:35.514265 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-bk4xd" podStartSLOduration=29.883900715 podStartE2EDuration="53.514243086s" podCreationTimestamp="2025-12-10 15:39:42 +0000 UTC" firstStartedPulling="2025-12-10 15:39:44.567871631 +0000 UTC m=+981.168755253" lastFinishedPulling="2025-12-10 15:40:08.198213992 +0000 UTC m=+1004.799097624" observedRunningTime="2025-12-10 15:40:35.510192612 +0000 UTC m=+1032.111076244" watchObservedRunningTime="2025-12-10 15:40:35.514243086 +0000 UTC m=+1032.115126718" Dec 10 15:40:35 crc kubenswrapper[4755]: I1210 15:40:35.672327 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-5f8d644d5d-87jfq" podStartSLOduration=5.385861251 podStartE2EDuration="52.672310631s" podCreationTimestamp="2025-12-10 15:39:43 +0000 UTC" firstStartedPulling="2025-12-10 15:39:45.983838449 +0000 UTC m=+982.584722081" lastFinishedPulling="2025-12-10 15:40:33.270287829 +0000 UTC m=+1029.871171461" observedRunningTime="2025-12-10 15:40:35.6610976 +0000 UTC m=+1032.261981252" watchObservedRunningTime="2025-12-10 15:40:35.672310631 +0000 UTC m=+1032.273194263" Dec 10 15:40:35 crc kubenswrapper[4755]: E1210 15:40:35.815878 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-wsxsj" podUID="423be682-6135-4dd2-8366-b7106adbc632" Dec 10 15:40:36 crc kubenswrapper[4755]: I1210 15:40:36.513906 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fd766" event={"ID":"fb9cd508-2b78-45ac-93b5-9813f44c7b6d","Type":"ContainerStarted","Data":"a431f03e756107e4f55b186397f11106fb6ac519f62e1f6a81e4f435b44c7688"} Dec 10 15:40:36 crc kubenswrapper[4755]: I1210 15:40:36.517261 4755 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-6vjxq" event={"ID":"e4692dc7-ecb8-45b5-be03-9990c0a32b2a","Type":"ContainerStarted","Data":"d4d105e9a5858334891bd2fb52cd3803e12aa33dce9ef6339b5d8fc8eb03fde4"} Dec 10 15:40:36 crc kubenswrapper[4755]: I1210 15:40:36.519789 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-5kfxq" event={"ID":"2a918143-c2cf-4c73-b547-c8d0d9c6e2a6","Type":"ContainerStarted","Data":"d0cabc7a7b2a8de06078905448bdaf297248dd8822575e6e767981f8e1fdaf5a"} Dec 10 15:40:36 crc kubenswrapper[4755]: I1210 15:40:36.522328 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-lbj4z" event={"ID":"ebb31199-21f8-4493-8725-1c5e1aa70d66","Type":"ContainerStarted","Data":"9a2a031a6bf59eed0fb52ecdbbd2cd32d1cbefc853fda9bf9bfe6fc8bffdb05a"} Dec 10 15:40:36 crc kubenswrapper[4755]: I1210 15:40:36.522427 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-78f8948974-lbj4z" Dec 10 15:40:36 crc kubenswrapper[4755]: I1210 15:40:36.526054 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-bgxgp" event={"ID":"46715591-f787-42bc-9871-a51b08963893","Type":"ContainerStarted","Data":"35518acd2a64b68bb635e1e495a794b5c4f8ca68c3d3b35db4020f57133538e2"} Dec 10 15:40:36 crc kubenswrapper[4755]: I1210 15:40:36.526093 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-bgxgp" event={"ID":"46715591-f787-42bc-9871-a51b08963893","Type":"ContainerStarted","Data":"c122a338111fed7343d3e572dce0c10ffec0da98ae9f265ba55e32b6cc2099bc"} Dec 10 15:40:36 crc kubenswrapper[4755]: I1210 15:40:36.526275 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-bgxgp" Dec 10 15:40:36 crc kubenswrapper[4755]: I1210 15:40:36.528546 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-t7zjt" event={"ID":"8bc636b5-ac4d-4b4e-8b50-102a72e6ee2a","Type":"ContainerStarted","Data":"bde69150347a4d42e17d055dde64bc7e9fbe78c0349af78df698ddf4604a0b4c"} Dec 10 15:40:36 crc kubenswrapper[4755]: I1210 15:40:36.528667 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-t7zjt" Dec 10 15:40:36 crc kubenswrapper[4755]: I1210 15:40:36.530657 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-wbch9" event={"ID":"91969126-0986-41a4-8d56-19b071710ca8","Type":"ContainerStarted","Data":"abafa0858de92d5cf3e4bbbd7ad6e8b4f8a6a5c4d3130e68fca1b32c91849efb"} Dec 10 15:40:36 crc kubenswrapper[4755]: I1210 15:40:36.532114 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-wsxsj" event={"ID":"423be682-6135-4dd2-8366-b7106adbc632","Type":"ContainerStarted","Data":"e7630621f7f2db5f9cb966c3b8f4d0962099811ffbd5cba3122339c6dba078b1"} Dec 10 15:40:36 crc kubenswrapper[4755]: E1210 15:40:36.533332 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling 
image \\\"quay.io/openstack-k8s-operators/infra-operator@sha256:ccc60d56d8efc2e91a7d8a7131eb7e06c189c32247f2a819818c084ba2e2f2ab\\\"\"" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-wsxsj" podUID="423be682-6135-4dd2-8366-b7106adbc632" Dec 10 15:40:36 crc kubenswrapper[4755]: I1210 15:40:36.534116 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-967d97867-h8w5g" event={"ID":"359a4730-4858-4677-9977-a9d6cea57122","Type":"ContainerStarted","Data":"af4ae4716366b27d4c7b8327e0c79223d0eda0f9fc6f6d9bd74faf63c2144408"} Dec 10 15:40:36 crc kubenswrapper[4755]: I1210 15:40:36.537354 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-pxstj" event={"ID":"10728d77-c715-4cb1-ab30-5747594a6320","Type":"ContainerStarted","Data":"5357b07481c54d98e4124312ed1c150666798d8cc751aa39e422144016688d9d"} Dec 10 15:40:36 crc kubenswrapper[4755]: I1210 15:40:36.539733 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-rjdmk" event={"ID":"8bcd3e35-31c8-4dbc-96e1-e6f4b486f082","Type":"ContainerStarted","Data":"de8e11f9c3ec66310d5c42f473e0c04644e795c4c2569f40b6832fa3bcb5e7cf"} Dec 10 15:40:36 crc kubenswrapper[4755]: I1210 15:40:36.541953 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-xpv7s" event={"ID":"67e9d86f-4e93-4e78-a9d5-d8023721414d","Type":"ContainerStarted","Data":"cc23605d7215fe6f76588cca98a4188e6d5be989c7c05faa96d8c4c626a5d072"} Dec 10 15:40:36 crc kubenswrapper[4755]: I1210 15:40:36.553999 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fd766" podStartSLOduration=41.195344709 podStartE2EDuration="43.553978094s" podCreationTimestamp="2025-12-10 15:39:53 +0000 UTC" firstStartedPulling="2025-12-10 15:40:32.406989663 +0000 UTC m=+1029.007873295" lastFinishedPulling="2025-12-10 15:40:34.765623048 +0000 UTC m=+1031.366506680" observedRunningTime="2025-12-10 15:40:36.544148119 +0000 UTC m=+1033.145031751" watchObservedRunningTime="2025-12-10 15:40:36.553978094 +0000 UTC m=+1033.154861726" Dec 10 15:40:36 crc kubenswrapper[4755]: I1210 15:40:36.573947 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-967d97867-h8w5g" podStartSLOduration=31.019235653 podStartE2EDuration="53.573927663s" podCreationTimestamp="2025-12-10 15:39:43 +0000 UTC" firstStartedPulling="2025-12-10 15:39:45.6434506 +0000 UTC m=+982.244334232" lastFinishedPulling="2025-12-10 15:40:08.19814261 +0000 UTC m=+1004.799026242" observedRunningTime="2025-12-10 15:40:36.571037008 +0000 UTC m=+1033.171920640" watchObservedRunningTime="2025-12-10 15:40:36.573927663 +0000 UTC m=+1033.174811305" Dec 10 15:40:36 crc kubenswrapper[4755]: I1210 15:40:36.623627 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-t7zjt" podStartSLOduration=5.212948569 podStartE2EDuration="53.623609652s" podCreationTimestamp="2025-12-10 15:39:43 +0000 UTC" firstStartedPulling="2025-12-10 15:39:45.608738869 +0000 UTC m=+982.209622501" lastFinishedPulling="2025-12-10 15:40:34.019399952 +0000 UTC m=+1030.620283584" observedRunningTime="2025-12-10 15:40:36.622037702 +0000 UTC m=+1033.222921334" watchObservedRunningTime="2025-12-10 
15:40:36.623609652 +0000 UTC m=+1033.224493284" Dec 10 15:40:36 crc kubenswrapper[4755]: I1210 15:40:36.682372 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-6vjxq" podStartSLOduration=31.137412742 podStartE2EDuration="53.682355018s" podCreationTimestamp="2025-12-10 15:39:43 +0000 UTC" firstStartedPulling="2025-12-10 15:39:45.653237245 +0000 UTC m=+982.254120877" lastFinishedPulling="2025-12-10 15:40:08.198179521 +0000 UTC m=+1004.799063153" observedRunningTime="2025-12-10 15:40:36.65853788 +0000 UTC m=+1033.259421522" watchObservedRunningTime="2025-12-10 15:40:36.682355018 +0000 UTC m=+1033.283238650" Dec 10 15:40:36 crc kubenswrapper[4755]: I1210 15:40:36.682986 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5854674fcc-wbch9" podStartSLOduration=25.031001871 podStartE2EDuration="53.682982385s" podCreationTimestamp="2025-12-10 15:39:43 +0000 UTC" firstStartedPulling="2025-12-10 15:39:46.190652719 +0000 UTC m=+982.791536351" lastFinishedPulling="2025-12-10 15:40:14.842633233 +0000 UTC m=+1011.443516865" observedRunningTime="2025-12-10 15:40:36.682186424 +0000 UTC m=+1033.283070056" watchObservedRunningTime="2025-12-10 15:40:36.682982385 +0000 UTC m=+1033.283866017" Dec 10 15:40:36 crc kubenswrapper[4755]: I1210 15:40:36.736213 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-5kfxq" podStartSLOduration=31.146326594 podStartE2EDuration="53.736186076s" podCreationTimestamp="2025-12-10 15:39:43 +0000 UTC" firstStartedPulling="2025-12-10 15:39:45.60837889 +0000 UTC m=+982.209262522" lastFinishedPulling="2025-12-10 15:40:08.198238382 +0000 UTC m=+1004.799122004" observedRunningTime="2025-12-10 15:40:36.709934944 +0000 UTC m=+1033.310818576" watchObservedRunningTime="2025-12-10 15:40:36.736186076 +0000 UTC m=+1033.337069718" Dec 10 15:40:36 crc kubenswrapper[4755]: I1210 15:40:36.813413 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-998648c74-pxstj" podStartSLOduration=25.015031156 podStartE2EDuration="53.813396601s" podCreationTimestamp="2025-12-10 15:39:43 +0000 UTC" firstStartedPulling="2025-12-10 15:39:46.058064856 +0000 UTC m=+982.658948488" lastFinishedPulling="2025-12-10 15:40:14.856430301 +0000 UTC m=+1011.457313933" observedRunningTime="2025-12-10 15:40:36.744719658 +0000 UTC m=+1033.345603290" watchObservedRunningTime="2025-12-10 15:40:36.813396601 +0000 UTC m=+1033.414280233" Dec 10 15:40:36 crc kubenswrapper[4755]: I1210 15:40:36.834168 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-rjdmk" podStartSLOduration=31.761672663 podStartE2EDuration="54.834149599s" podCreationTimestamp="2025-12-10 15:39:42 +0000 UTC" firstStartedPulling="2025-12-10 15:39:45.125569292 +0000 UTC m=+981.726452924" lastFinishedPulling="2025-12-10 15:40:08.198046238 +0000 UTC m=+1004.798929860" observedRunningTime="2025-12-10 15:40:36.780706872 +0000 UTC m=+1033.381590504" watchObservedRunningTime="2025-12-10 15:40:36.834149599 +0000 UTC m=+1033.435033231" Dec 10 15:40:36 crc kubenswrapper[4755]: I1210 15:40:36.870725 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-bgxgp" podStartSLOduration=4.364919761 podStartE2EDuration="53.87070869s" podCreationTimestamp="2025-12-10 15:39:43 +0000 UTC" firstStartedPulling="2025-12-10 15:39:45.56489294 +0000 UTC m=+982.165776572" lastFinishedPulling="2025-12-10 15:40:35.070681869 +0000 UTC m=+1031.671565501" observedRunningTime="2025-12-10 15:40:36.84804804 +0000 UTC m=+1033.448931672" watchObservedRunningTime="2025-12-10 15:40:36.87070869 +0000 UTC m=+1033.471592322" Dec 10 15:40:36 crc kubenswrapper[4755]: I1210 15:40:36.877016 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-xpv7s" podStartSLOduration=5.545893166 podStartE2EDuration="53.876999393s" podCreationTimestamp="2025-12-10 15:39:43 +0000 UTC" firstStartedPulling="2025-12-10 15:39:45.730041529 +0000 UTC m=+982.330925161" lastFinishedPulling="2025-12-10 15:40:34.061147756 +0000 UTC m=+1030.662031388" observedRunningTime="2025-12-10 15:40:36.876193352 +0000 UTC m=+1033.477076994" watchObservedRunningTime="2025-12-10 15:40:36.876999393 +0000 UTC m=+1033.477883025" Dec 10 15:40:36 crc kubenswrapper[4755]: I1210 15:40:36.900051 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-78f8948974-lbj4z" podStartSLOduration=5.927232247 podStartE2EDuration="53.90003307s" podCreationTimestamp="2025-12-10 15:39:43 +0000 UTC" firstStartedPulling="2025-12-10 15:39:46.046550617 +0000 UTC m=+982.647434249" lastFinishedPulling="2025-12-10 15:40:34.01935145 +0000 UTC m=+1030.620235072" observedRunningTime="2025-12-10 15:40:36.896805207 +0000 UTC m=+1033.497688849" watchObservedRunningTime="2025-12-10 15:40:36.90003307 +0000 UTC m=+1033.500916702" Dec 10 15:40:37 crc kubenswrapper[4755]: I1210 15:40:37.548384 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-xpv7s" Dec 10 15:40:37 crc kubenswrapper[4755]: E1210 15:40:37.550084 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/infra-operator@sha256:ccc60d56d8efc2e91a7d8a7131eb7e06c189c32247f2a819818c084ba2e2f2ab\\\"\"" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-wsxsj" podUID="423be682-6135-4dd2-8366-b7106adbc632" Dec 10 15:40:43 crc kubenswrapper[4755]: I1210 15:40:43.327085 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-ljn8k" Dec 10 15:40:43 crc kubenswrapper[4755]: I1210 15:40:43.342340 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-z5frc" Dec 10 15:40:43 crc kubenswrapper[4755]: I1210 15:40:43.398157 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-vqgpv" Dec 10 15:40:43 crc kubenswrapper[4755]: I1210 15:40:43.608002 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fd766" Dec 10 15:40:43 crc kubenswrapper[4755]: I1210 15:40:43.608059 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fd766" Dec 10 15:40:43 crc 
kubenswrapper[4755]: I1210 15:40:43.649918 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fd766" Dec 10 15:40:43 crc kubenswrapper[4755]: I1210 15:40:43.659813 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-bgxgp" Dec 10 15:40:43 crc kubenswrapper[4755]: I1210 15:40:43.776578 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-t7zjt" Dec 10 15:40:43 crc kubenswrapper[4755]: I1210 15:40:43.813749 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-xpv7s" Dec 10 15:40:44 crc kubenswrapper[4755]: I1210 15:40:44.094838 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-xtr7m" Dec 10 15:40:44 crc kubenswrapper[4755]: I1210 15:40:44.375488 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-78f8948974-lbj4z" Dec 10 15:40:44 crc kubenswrapper[4755]: I1210 15:40:44.420656 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-5f8d644d5d-87jfq" Dec 10 15:40:44 crc kubenswrapper[4755]: I1210 15:40:44.634054 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fd766" Dec 10 15:40:44 crc kubenswrapper[4755]: I1210 15:40:44.676897 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fd766"] Dec 10 15:40:46 crc kubenswrapper[4755]: I1210 15:40:46.612192 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fd766" podUID="fb9cd508-2b78-45ac-93b5-9813f44c7b6d" containerName="registry-server" containerID="cri-o://a431f03e756107e4f55b186397f11106fb6ac519f62e1f6a81e4f435b44c7688" gracePeriod=2 Dec 10 15:40:47 crc kubenswrapper[4755]: I1210 15:40:47.618322 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fd766" Dec 10 15:40:47 crc kubenswrapper[4755]: I1210 15:40:47.622253 4755 generic.go:334] "Generic (PLEG): container finished" podID="fb9cd508-2b78-45ac-93b5-9813f44c7b6d" containerID="a431f03e756107e4f55b186397f11106fb6ac519f62e1f6a81e4f435b44c7688" exitCode=0 Dec 10 15:40:47 crc kubenswrapper[4755]: I1210 15:40:47.622292 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fd766" event={"ID":"fb9cd508-2b78-45ac-93b5-9813f44c7b6d","Type":"ContainerDied","Data":"a431f03e756107e4f55b186397f11106fb6ac519f62e1f6a81e4f435b44c7688"} Dec 10 15:40:47 crc kubenswrapper[4755]: I1210 15:40:47.622340 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fd766" Dec 10 15:40:47 crc kubenswrapper[4755]: I1210 15:40:47.622361 4755 scope.go:117] "RemoveContainer" containerID="a431f03e756107e4f55b186397f11106fb6ac519f62e1f6a81e4f435b44c7688" Dec 10 15:40:47 crc kubenswrapper[4755]: I1210 15:40:47.622348 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fd766" event={"ID":"fb9cd508-2b78-45ac-93b5-9813f44c7b6d","Type":"ContainerDied","Data":"24e5009085ad4d91f330854c3d742bf9a8aa1e55274e53393fc35a334a2d8c4f"} Dec 10 15:40:47 crc kubenswrapper[4755]: I1210 15:40:47.658482 4755 scope.go:117] "RemoveContainer" containerID="b0a2efbdde2b745683def1b2a7a8e58b04e9530f40cf0aa1ea1a7692dd4bd3f1" Dec 10 15:40:47 crc kubenswrapper[4755]: I1210 15:40:47.674311 4755 scope.go:117] "RemoveContainer" containerID="9f6a605a7328a05b9918c9166757c4ebc5f95a1e0500796bf2d3543bba7c0e56" Dec 10 15:40:47 crc kubenswrapper[4755]: I1210 15:40:47.699692 4755 scope.go:117] "RemoveContainer" containerID="a431f03e756107e4f55b186397f11106fb6ac519f62e1f6a81e4f435b44c7688" Dec 10 15:40:47 crc kubenswrapper[4755]: E1210 15:40:47.703795 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a431f03e756107e4f55b186397f11106fb6ac519f62e1f6a81e4f435b44c7688\": container with ID starting with a431f03e756107e4f55b186397f11106fb6ac519f62e1f6a81e4f435b44c7688 not found: ID does not exist" containerID="a431f03e756107e4f55b186397f11106fb6ac519f62e1f6a81e4f435b44c7688" Dec 10 15:40:47 crc kubenswrapper[4755]: I1210 15:40:47.703840 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a431f03e756107e4f55b186397f11106fb6ac519f62e1f6a81e4f435b44c7688"} err="failed to get container status \"a431f03e756107e4f55b186397f11106fb6ac519f62e1f6a81e4f435b44c7688\": rpc error: code = NotFound desc = could not find container \"a431f03e756107e4f55b186397f11106fb6ac519f62e1f6a81e4f435b44c7688\": container with ID starting with a431f03e756107e4f55b186397f11106fb6ac519f62e1f6a81e4f435b44c7688 not found: ID does not exist" Dec 10 15:40:47 crc kubenswrapper[4755]: I1210 15:40:47.703868 4755 scope.go:117] "RemoveContainer" containerID="b0a2efbdde2b745683def1b2a7a8e58b04e9530f40cf0aa1ea1a7692dd4bd3f1" Dec 10 15:40:47 crc kubenswrapper[4755]: E1210 15:40:47.706144 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0a2efbdde2b745683def1b2a7a8e58b04e9530f40cf0aa1ea1a7692dd4bd3f1\": container with ID starting with b0a2efbdde2b745683def1b2a7a8e58b04e9530f40cf0aa1ea1a7692dd4bd3f1 not found: ID does not exist" containerID="b0a2efbdde2b745683def1b2a7a8e58b04e9530f40cf0aa1ea1a7692dd4bd3f1" Dec 10 15:40:47 crc kubenswrapper[4755]: I1210 15:40:47.706195 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0a2efbdde2b745683def1b2a7a8e58b04e9530f40cf0aa1ea1a7692dd4bd3f1"} err="failed to get container status \"b0a2efbdde2b745683def1b2a7a8e58b04e9530f40cf0aa1ea1a7692dd4bd3f1\": rpc error: code = NotFound desc = could not find container \"b0a2efbdde2b745683def1b2a7a8e58b04e9530f40cf0aa1ea1a7692dd4bd3f1\": container with ID starting with b0a2efbdde2b745683def1b2a7a8e58b04e9530f40cf0aa1ea1a7692dd4bd3f1 not found: ID does not exist" Dec 10 15:40:47 crc kubenswrapper[4755]: I1210 15:40:47.706232 4755 scope.go:117] "RemoveContainer" 
containerID="9f6a605a7328a05b9918c9166757c4ebc5f95a1e0500796bf2d3543bba7c0e56" Dec 10 15:40:47 crc kubenswrapper[4755]: E1210 15:40:47.706881 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f6a605a7328a05b9918c9166757c4ebc5f95a1e0500796bf2d3543bba7c0e56\": container with ID starting with 9f6a605a7328a05b9918c9166757c4ebc5f95a1e0500796bf2d3543bba7c0e56 not found: ID does not exist" containerID="9f6a605a7328a05b9918c9166757c4ebc5f95a1e0500796bf2d3543bba7c0e56" Dec 10 15:40:47 crc kubenswrapper[4755]: I1210 15:40:47.706922 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f6a605a7328a05b9918c9166757c4ebc5f95a1e0500796bf2d3543bba7c0e56"} err="failed to get container status \"9f6a605a7328a05b9918c9166757c4ebc5f95a1e0500796bf2d3543bba7c0e56\": rpc error: code = NotFound desc = could not find container \"9f6a605a7328a05b9918c9166757c4ebc5f95a1e0500796bf2d3543bba7c0e56\": container with ID starting with 9f6a605a7328a05b9918c9166757c4ebc5f95a1e0500796bf2d3543bba7c0e56 not found: ID does not exist" Dec 10 15:40:47 crc kubenswrapper[4755]: I1210 15:40:47.786332 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb9cd508-2b78-45ac-93b5-9813f44c7b6d-catalog-content\") pod \"fb9cd508-2b78-45ac-93b5-9813f44c7b6d\" (UID: \"fb9cd508-2b78-45ac-93b5-9813f44c7b6d\") " Dec 10 15:40:47 crc kubenswrapper[4755]: I1210 15:40:47.786451 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5q2\" (UniqueName: \"kubernetes.io/projected/fb9cd508-2b78-45ac-93b5-9813f44c7b6d-kube-api-access-vt5q2\") pod \"fb9cd508-2b78-45ac-93b5-9813f44c7b6d\" (UID: \"fb9cd508-2b78-45ac-93b5-9813f44c7b6d\") " Dec 10 15:40:47 crc kubenswrapper[4755]: I1210 15:40:47.786569 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb9cd508-2b78-45ac-93b5-9813f44c7b6d-utilities\") pod \"fb9cd508-2b78-45ac-93b5-9813f44c7b6d\" (UID: \"fb9cd508-2b78-45ac-93b5-9813f44c7b6d\") " Dec 10 15:40:47 crc kubenswrapper[4755]: I1210 15:40:47.787442 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb9cd508-2b78-45ac-93b5-9813f44c7b6d-utilities" (OuterVolumeSpecName: "utilities") pod "fb9cd508-2b78-45ac-93b5-9813f44c7b6d" (UID: "fb9cd508-2b78-45ac-93b5-9813f44c7b6d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:40:47 crc kubenswrapper[4755]: I1210 15:40:47.791920 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb9cd508-2b78-45ac-93b5-9813f44c7b6d-kube-api-access-vt5q2" (OuterVolumeSpecName: "kube-api-access-vt5q2") pod "fb9cd508-2b78-45ac-93b5-9813f44c7b6d" (UID: "fb9cd508-2b78-45ac-93b5-9813f44c7b6d"). InnerVolumeSpecName "kube-api-access-vt5q2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:40:47 crc kubenswrapper[4755]: I1210 15:40:47.860017 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb9cd508-2b78-45ac-93b5-9813f44c7b6d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fb9cd508-2b78-45ac-93b5-9813f44c7b6d" (UID: "fb9cd508-2b78-45ac-93b5-9813f44c7b6d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:40:47 crc kubenswrapper[4755]: I1210 15:40:47.890697 4755 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb9cd508-2b78-45ac-93b5-9813f44c7b6d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 15:40:47 crc kubenswrapper[4755]: I1210 15:40:47.890752 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5q2\" (UniqueName: \"kubernetes.io/projected/fb9cd508-2b78-45ac-93b5-9813f44c7b6d-kube-api-access-vt5q2\") on node \"crc\" DevicePath \"\"" Dec 10 15:40:47 crc kubenswrapper[4755]: I1210 15:40:47.890764 4755 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb9cd508-2b78-45ac-93b5-9813f44c7b6d-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 15:40:47 crc kubenswrapper[4755]: I1210 15:40:47.972303 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fd766"] Dec 10 15:40:47 crc kubenswrapper[4755]: I1210 15:40:47.978311 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fd766"] Dec 10 15:40:48 crc kubenswrapper[4755]: E1210 15:40:48.007151 4755 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfb9cd508_2b78_45ac_93b5_9813f44c7b6d.slice\": RecentStats: unable to find data in memory cache]" Dec 10 15:40:48 crc kubenswrapper[4755]: I1210 15:40:48.632591 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fh778d" event={"ID":"0bcf5a92-0324-4799-be55-0e49bd060ee7","Type":"ContainerStarted","Data":"b37f23997af489e67960991430cba46ccbff7171a5e71a82cb9842a8a36b0cc8"} Dec 10 15:40:48 crc kubenswrapper[4755]: I1210 15:40:48.633382 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fh778d" Dec 10 15:40:48 crc kubenswrapper[4755]: I1210 15:40:48.662609 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fh778d" podStartSLOduration=33.683075017 podStartE2EDuration="1m5.662590935s" podCreationTimestamp="2025-12-10 15:39:43 +0000 UTC" firstStartedPulling="2025-12-10 15:40:15.513173905 +0000 UTC m=+1012.114057537" lastFinishedPulling="2025-12-10 15:40:47.492689823 +0000 UTC m=+1044.093573455" observedRunningTime="2025-12-10 15:40:48.661836705 +0000 UTC m=+1045.262720347" watchObservedRunningTime="2025-12-10 15:40:48.662590935 +0000 UTC m=+1045.263474587" Dec 10 15:40:49 crc kubenswrapper[4755]: I1210 15:40:49.766943 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb9cd508-2b78-45ac-93b5-9813f44c7b6d" path="/var/lib/kubelet/pods/fb9cd508-2b78-45ac-93b5-9813f44c7b6d/volumes" Dec 10 15:40:52 crc kubenswrapper[4755]: I1210 15:40:52.665099 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-wsxsj" event={"ID":"423be682-6135-4dd2-8366-b7106adbc632","Type":"ContainerStarted","Data":"fb3563c938d9eef4b02fa91d2d17c2411f3693d84a890e142bbf070fee4d0ffa"} Dec 10 15:40:52 crc kubenswrapper[4755]: I1210 15:40:52.665618 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-wsxsj" Dec 10 15:40:52 crc kubenswrapper[4755]: I1210 15:40:52.686001 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-wsxsj" podStartSLOduration=33.488649946 podStartE2EDuration="1m9.685975632s" podCreationTimestamp="2025-12-10 15:39:43 +0000 UTC" firstStartedPulling="2025-12-10 15:40:15.518170314 +0000 UTC m=+1012.119053946" lastFinishedPulling="2025-12-10 15:40:51.715496 +0000 UTC m=+1048.316379632" observedRunningTime="2025-12-10 15:40:52.684127073 +0000 UTC m=+1049.285010735" watchObservedRunningTime="2025-12-10 15:40:52.685975632 +0000 UTC m=+1049.286859264" Dec 10 15:40:59 crc kubenswrapper[4755]: I1210 15:40:59.508815 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-wsxsj" Dec 10 15:40:59 crc kubenswrapper[4755]: I1210 15:40:59.635175 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fh778d" Dec 10 15:41:17 crc kubenswrapper[4755]: I1210 15:41:17.317553 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-zm77c"] Dec 10 15:41:17 crc kubenswrapper[4755]: E1210 15:41:17.321952 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb9cd508-2b78-45ac-93b5-9813f44c7b6d" containerName="extract-content" Dec 10 15:41:17 crc kubenswrapper[4755]: I1210 15:41:17.323030 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb9cd508-2b78-45ac-93b5-9813f44c7b6d" containerName="extract-content" Dec 10 15:41:17 crc kubenswrapper[4755]: E1210 15:41:17.323182 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb9cd508-2b78-45ac-93b5-9813f44c7b6d" containerName="registry-server" Dec 10 15:41:17 crc kubenswrapper[4755]: I1210 15:41:17.323267 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb9cd508-2b78-45ac-93b5-9813f44c7b6d" containerName="registry-server" Dec 10 15:41:17 crc kubenswrapper[4755]: E1210 15:41:17.323366 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb9cd508-2b78-45ac-93b5-9813f44c7b6d" containerName="extract-utilities" Dec 10 15:41:17 crc kubenswrapper[4755]: I1210 15:41:17.323481 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb9cd508-2b78-45ac-93b5-9813f44c7b6d" containerName="extract-utilities" Dec 10 15:41:17 crc kubenswrapper[4755]: I1210 15:41:17.323916 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb9cd508-2b78-45ac-93b5-9813f44c7b6d" containerName="registry-server" Dec 10 15:41:17 crc kubenswrapper[4755]: I1210 15:41:17.325067 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-zm77c" Dec 10 15:41:17 crc kubenswrapper[4755]: I1210 15:41:17.325761 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-zm77c"] Dec 10 15:41:17 crc kubenswrapper[4755]: I1210 15:41:17.339000 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Dec 10 15:41:17 crc kubenswrapper[4755]: I1210 15:41:17.339401 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Dec 10 15:41:17 crc kubenswrapper[4755]: I1210 15:41:17.339694 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-xbj6c" Dec 10 15:41:17 crc kubenswrapper[4755]: I1210 15:41:17.339915 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Dec 10 15:41:17 crc kubenswrapper[4755]: I1210 15:41:17.387147 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-frv75"] Dec 10 15:41:17 crc kubenswrapper[4755]: I1210 15:41:17.388333 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-frv75" Dec 10 15:41:17 crc kubenswrapper[4755]: I1210 15:41:17.391983 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Dec 10 15:41:17 crc kubenswrapper[4755]: I1210 15:41:17.412209 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-frv75"] Dec 10 15:41:17 crc kubenswrapper[4755]: I1210 15:41:17.507105 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdwdt\" (UniqueName: \"kubernetes.io/projected/2a56f7f7-a7ff-44b8-8253-2cbdaa9f0e91-kube-api-access-jdwdt\") pod \"dnsmasq-dns-78dd6ddcc-frv75\" (UID: \"2a56f7f7-a7ff-44b8-8253-2cbdaa9f0e91\") " pod="openstack/dnsmasq-dns-78dd6ddcc-frv75" Dec 10 15:41:17 crc kubenswrapper[4755]: I1210 15:41:17.507301 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2a56f7f7-a7ff-44b8-8253-2cbdaa9f0e91-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-frv75\" (UID: \"2a56f7f7-a7ff-44b8-8253-2cbdaa9f0e91\") " pod="openstack/dnsmasq-dns-78dd6ddcc-frv75" Dec 10 15:41:17 crc kubenswrapper[4755]: I1210 15:41:17.507357 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5rp2\" (UniqueName: \"kubernetes.io/projected/7e6c07e9-7f64-4a32-88f0-301723cb221b-kube-api-access-b5rp2\") pod \"dnsmasq-dns-675f4bcbfc-zm77c\" (UID: \"7e6c07e9-7f64-4a32-88f0-301723cb221b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-zm77c" Dec 10 15:41:17 crc kubenswrapper[4755]: I1210 15:41:17.507393 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e6c07e9-7f64-4a32-88f0-301723cb221b-config\") pod \"dnsmasq-dns-675f4bcbfc-zm77c\" (UID: \"7e6c07e9-7f64-4a32-88f0-301723cb221b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-zm77c" Dec 10 15:41:17 crc kubenswrapper[4755]: I1210 15:41:17.507435 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a56f7f7-a7ff-44b8-8253-2cbdaa9f0e91-config\") pod \"dnsmasq-dns-78dd6ddcc-frv75\" (UID: \"2a56f7f7-a7ff-44b8-8253-2cbdaa9f0e91\") " 
pod="openstack/dnsmasq-dns-78dd6ddcc-frv75" Dec 10 15:41:17 crc kubenswrapper[4755]: I1210 15:41:17.608740 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdwdt\" (UniqueName: \"kubernetes.io/projected/2a56f7f7-a7ff-44b8-8253-2cbdaa9f0e91-kube-api-access-jdwdt\") pod \"dnsmasq-dns-78dd6ddcc-frv75\" (UID: \"2a56f7f7-a7ff-44b8-8253-2cbdaa9f0e91\") " pod="openstack/dnsmasq-dns-78dd6ddcc-frv75" Dec 10 15:41:17 crc kubenswrapper[4755]: I1210 15:41:17.609196 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2a56f7f7-a7ff-44b8-8253-2cbdaa9f0e91-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-frv75\" (UID: \"2a56f7f7-a7ff-44b8-8253-2cbdaa9f0e91\") " pod="openstack/dnsmasq-dns-78dd6ddcc-frv75" Dec 10 15:41:17 crc kubenswrapper[4755]: I1210 15:41:17.609242 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5rp2\" (UniqueName: \"kubernetes.io/projected/7e6c07e9-7f64-4a32-88f0-301723cb221b-kube-api-access-b5rp2\") pod \"dnsmasq-dns-675f4bcbfc-zm77c\" (UID: \"7e6c07e9-7f64-4a32-88f0-301723cb221b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-zm77c" Dec 10 15:41:17 crc kubenswrapper[4755]: I1210 15:41:17.609264 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e6c07e9-7f64-4a32-88f0-301723cb221b-config\") pod \"dnsmasq-dns-675f4bcbfc-zm77c\" (UID: \"7e6c07e9-7f64-4a32-88f0-301723cb221b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-zm77c" Dec 10 15:41:17 crc kubenswrapper[4755]: I1210 15:41:17.609297 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a56f7f7-a7ff-44b8-8253-2cbdaa9f0e91-config\") pod \"dnsmasq-dns-78dd6ddcc-frv75\" (UID: \"2a56f7f7-a7ff-44b8-8253-2cbdaa9f0e91\") " pod="openstack/dnsmasq-dns-78dd6ddcc-frv75" Dec 10 15:41:17 crc kubenswrapper[4755]: I1210 15:41:17.610158 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e6c07e9-7f64-4a32-88f0-301723cb221b-config\") pod \"dnsmasq-dns-675f4bcbfc-zm77c\" (UID: \"7e6c07e9-7f64-4a32-88f0-301723cb221b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-zm77c" Dec 10 15:41:17 crc kubenswrapper[4755]: I1210 15:41:17.610159 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2a56f7f7-a7ff-44b8-8253-2cbdaa9f0e91-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-frv75\" (UID: \"2a56f7f7-a7ff-44b8-8253-2cbdaa9f0e91\") " pod="openstack/dnsmasq-dns-78dd6ddcc-frv75" Dec 10 15:41:17 crc kubenswrapper[4755]: I1210 15:41:17.610486 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a56f7f7-a7ff-44b8-8253-2cbdaa9f0e91-config\") pod \"dnsmasq-dns-78dd6ddcc-frv75\" (UID: \"2a56f7f7-a7ff-44b8-8253-2cbdaa9f0e91\") " pod="openstack/dnsmasq-dns-78dd6ddcc-frv75" Dec 10 15:41:17 crc kubenswrapper[4755]: I1210 15:41:17.641101 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5rp2\" (UniqueName: \"kubernetes.io/projected/7e6c07e9-7f64-4a32-88f0-301723cb221b-kube-api-access-b5rp2\") pod \"dnsmasq-dns-675f4bcbfc-zm77c\" (UID: \"7e6c07e9-7f64-4a32-88f0-301723cb221b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-zm77c" Dec 10 15:41:17 crc kubenswrapper[4755]: I1210 15:41:17.644906 4755 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-jdwdt\" (UniqueName: \"kubernetes.io/projected/2a56f7f7-a7ff-44b8-8253-2cbdaa9f0e91-kube-api-access-jdwdt\") pod \"dnsmasq-dns-78dd6ddcc-frv75\" (UID: \"2a56f7f7-a7ff-44b8-8253-2cbdaa9f0e91\") " pod="openstack/dnsmasq-dns-78dd6ddcc-frv75" Dec 10 15:41:17 crc kubenswrapper[4755]: I1210 15:41:17.655824 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-zm77c" Dec 10 15:41:17 crc kubenswrapper[4755]: I1210 15:41:17.709171 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-frv75" Dec 10 15:41:18 crc kubenswrapper[4755]: I1210 15:41:18.150323 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-zm77c"] Dec 10 15:41:18 crc kubenswrapper[4755]: W1210 15:41:18.162321 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7e6c07e9_7f64_4a32_88f0_301723cb221b.slice/crio-627d1a2723990ba512e2e57be3e0c054b618944b99abfc67b013bdda6676596e WatchSource:0}: Error finding container 627d1a2723990ba512e2e57be3e0c054b618944b99abfc67b013bdda6676596e: Status 404 returned error can't find the container with id 627d1a2723990ba512e2e57be3e0c054b618944b99abfc67b013bdda6676596e Dec 10 15:41:18 crc kubenswrapper[4755]: I1210 15:41:18.216786 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-frv75"] Dec 10 15:41:18 crc kubenswrapper[4755]: W1210 15:41:18.225400 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a56f7f7_a7ff_44b8_8253_2cbdaa9f0e91.slice/crio-d65cf09873ebf3ece30094293de4bc1df10173929fb94670f9aca3897c151b2d WatchSource:0}: Error finding container d65cf09873ebf3ece30094293de4bc1df10173929fb94670f9aca3897c151b2d: Status 404 returned error can't find the container with id d65cf09873ebf3ece30094293de4bc1df10173929fb94670f9aca3897c151b2d Dec 10 15:41:18 crc kubenswrapper[4755]: I1210 15:41:18.885143 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-frv75" event={"ID":"2a56f7f7-a7ff-44b8-8253-2cbdaa9f0e91","Type":"ContainerStarted","Data":"d65cf09873ebf3ece30094293de4bc1df10173929fb94670f9aca3897c151b2d"} Dec 10 15:41:18 crc kubenswrapper[4755]: I1210 15:41:18.886501 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-zm77c" event={"ID":"7e6c07e9-7f64-4a32-88f0-301723cb221b","Type":"ContainerStarted","Data":"627d1a2723990ba512e2e57be3e0c054b618944b99abfc67b013bdda6676596e"} Dec 10 15:41:20 crc kubenswrapper[4755]: I1210 15:41:20.789846 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-zm77c"] Dec 10 15:41:20 crc kubenswrapper[4755]: I1210 15:41:20.814374 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-prnhr"] Dec 10 15:41:20 crc kubenswrapper[4755]: I1210 15:41:20.815663 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-prnhr" Dec 10 15:41:20 crc kubenswrapper[4755]: I1210 15:41:20.843713 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-prnhr"] Dec 10 15:41:20 crc kubenswrapper[4755]: I1210 15:41:20.964946 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r94x9\" (UniqueName: \"kubernetes.io/projected/5bf05873-62f3-4a0f-b58e-ec6346b5f057-kube-api-access-r94x9\") pod \"dnsmasq-dns-666b6646f7-prnhr\" (UID: \"5bf05873-62f3-4a0f-b58e-ec6346b5f057\") " pod="openstack/dnsmasq-dns-666b6646f7-prnhr" Dec 10 15:41:20 crc kubenswrapper[4755]: I1210 15:41:20.965012 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bf05873-62f3-4a0f-b58e-ec6346b5f057-config\") pod \"dnsmasq-dns-666b6646f7-prnhr\" (UID: \"5bf05873-62f3-4a0f-b58e-ec6346b5f057\") " pod="openstack/dnsmasq-dns-666b6646f7-prnhr" Dec 10 15:41:20 crc kubenswrapper[4755]: I1210 15:41:20.965068 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5bf05873-62f3-4a0f-b58e-ec6346b5f057-dns-svc\") pod \"dnsmasq-dns-666b6646f7-prnhr\" (UID: \"5bf05873-62f3-4a0f-b58e-ec6346b5f057\") " pod="openstack/dnsmasq-dns-666b6646f7-prnhr" Dec 10 15:41:21 crc kubenswrapper[4755]: I1210 15:41:21.068634 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r94x9\" (UniqueName: \"kubernetes.io/projected/5bf05873-62f3-4a0f-b58e-ec6346b5f057-kube-api-access-r94x9\") pod \"dnsmasq-dns-666b6646f7-prnhr\" (UID: \"5bf05873-62f3-4a0f-b58e-ec6346b5f057\") " pod="openstack/dnsmasq-dns-666b6646f7-prnhr" Dec 10 15:41:21 crc kubenswrapper[4755]: I1210 15:41:21.068695 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bf05873-62f3-4a0f-b58e-ec6346b5f057-config\") pod \"dnsmasq-dns-666b6646f7-prnhr\" (UID: \"5bf05873-62f3-4a0f-b58e-ec6346b5f057\") " pod="openstack/dnsmasq-dns-666b6646f7-prnhr" Dec 10 15:41:21 crc kubenswrapper[4755]: I1210 15:41:21.068752 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5bf05873-62f3-4a0f-b58e-ec6346b5f057-dns-svc\") pod \"dnsmasq-dns-666b6646f7-prnhr\" (UID: \"5bf05873-62f3-4a0f-b58e-ec6346b5f057\") " pod="openstack/dnsmasq-dns-666b6646f7-prnhr" Dec 10 15:41:21 crc kubenswrapper[4755]: I1210 15:41:21.069825 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5bf05873-62f3-4a0f-b58e-ec6346b5f057-dns-svc\") pod \"dnsmasq-dns-666b6646f7-prnhr\" (UID: \"5bf05873-62f3-4a0f-b58e-ec6346b5f057\") " pod="openstack/dnsmasq-dns-666b6646f7-prnhr" Dec 10 15:41:21 crc kubenswrapper[4755]: I1210 15:41:21.070330 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bf05873-62f3-4a0f-b58e-ec6346b5f057-config\") pod \"dnsmasq-dns-666b6646f7-prnhr\" (UID: \"5bf05873-62f3-4a0f-b58e-ec6346b5f057\") " pod="openstack/dnsmasq-dns-666b6646f7-prnhr" Dec 10 15:41:21 crc kubenswrapper[4755]: I1210 15:41:21.108013 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r94x9\" (UniqueName: 
\"kubernetes.io/projected/5bf05873-62f3-4a0f-b58e-ec6346b5f057-kube-api-access-r94x9\") pod \"dnsmasq-dns-666b6646f7-prnhr\" (UID: \"5bf05873-62f3-4a0f-b58e-ec6346b5f057\") " pod="openstack/dnsmasq-dns-666b6646f7-prnhr" Dec 10 15:41:21 crc kubenswrapper[4755]: I1210 15:41:21.145186 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-prnhr" Dec 10 15:41:21 crc kubenswrapper[4755]: I1210 15:41:21.272412 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-frv75"] Dec 10 15:41:21 crc kubenswrapper[4755]: I1210 15:41:21.301051 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-x9c7j"] Dec 10 15:41:21 crc kubenswrapper[4755]: I1210 15:41:21.302324 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-x9c7j" Dec 10 15:41:21 crc kubenswrapper[4755]: I1210 15:41:21.317773 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-x9c7j"] Dec 10 15:41:21 crc kubenswrapper[4755]: I1210 15:41:21.475187 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64c097c3-2f2a-4d0d-b7cb-8d1c88ffd3eb-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-x9c7j\" (UID: \"64c097c3-2f2a-4d0d-b7cb-8d1c88ffd3eb\") " pod="openstack/dnsmasq-dns-57d769cc4f-x9c7j" Dec 10 15:41:21 crc kubenswrapper[4755]: I1210 15:41:21.475250 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgrjp\" (UniqueName: \"kubernetes.io/projected/64c097c3-2f2a-4d0d-b7cb-8d1c88ffd3eb-kube-api-access-jgrjp\") pod \"dnsmasq-dns-57d769cc4f-x9c7j\" (UID: \"64c097c3-2f2a-4d0d-b7cb-8d1c88ffd3eb\") " pod="openstack/dnsmasq-dns-57d769cc4f-x9c7j" Dec 10 15:41:21 crc kubenswrapper[4755]: I1210 15:41:21.475334 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64c097c3-2f2a-4d0d-b7cb-8d1c88ffd3eb-config\") pod \"dnsmasq-dns-57d769cc4f-x9c7j\" (UID: \"64c097c3-2f2a-4d0d-b7cb-8d1c88ffd3eb\") " pod="openstack/dnsmasq-dns-57d769cc4f-x9c7j" Dec 10 15:41:21 crc kubenswrapper[4755]: I1210 15:41:21.576346 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64c097c3-2f2a-4d0d-b7cb-8d1c88ffd3eb-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-x9c7j\" (UID: \"64c097c3-2f2a-4d0d-b7cb-8d1c88ffd3eb\") " pod="openstack/dnsmasq-dns-57d769cc4f-x9c7j" Dec 10 15:41:21 crc kubenswrapper[4755]: I1210 15:41:21.576393 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgrjp\" (UniqueName: \"kubernetes.io/projected/64c097c3-2f2a-4d0d-b7cb-8d1c88ffd3eb-kube-api-access-jgrjp\") pod \"dnsmasq-dns-57d769cc4f-x9c7j\" (UID: \"64c097c3-2f2a-4d0d-b7cb-8d1c88ffd3eb\") " pod="openstack/dnsmasq-dns-57d769cc4f-x9c7j" Dec 10 15:41:21 crc kubenswrapper[4755]: I1210 15:41:21.576434 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64c097c3-2f2a-4d0d-b7cb-8d1c88ffd3eb-config\") pod \"dnsmasq-dns-57d769cc4f-x9c7j\" (UID: \"64c097c3-2f2a-4d0d-b7cb-8d1c88ffd3eb\") " pod="openstack/dnsmasq-dns-57d769cc4f-x9c7j" Dec 10 15:41:21 crc kubenswrapper[4755]: I1210 15:41:21.577237 4755 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64c097c3-2f2a-4d0d-b7cb-8d1c88ffd3eb-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-x9c7j\" (UID: \"64c097c3-2f2a-4d0d-b7cb-8d1c88ffd3eb\") " pod="openstack/dnsmasq-dns-57d769cc4f-x9c7j" Dec 10 15:41:21 crc kubenswrapper[4755]: I1210 15:41:21.577239 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64c097c3-2f2a-4d0d-b7cb-8d1c88ffd3eb-config\") pod \"dnsmasq-dns-57d769cc4f-x9c7j\" (UID: \"64c097c3-2f2a-4d0d-b7cb-8d1c88ffd3eb\") " pod="openstack/dnsmasq-dns-57d769cc4f-x9c7j" Dec 10 15:41:21 crc kubenswrapper[4755]: I1210 15:41:21.600677 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgrjp\" (UniqueName: \"kubernetes.io/projected/64c097c3-2f2a-4d0d-b7cb-8d1c88ffd3eb-kube-api-access-jgrjp\") pod \"dnsmasq-dns-57d769cc4f-x9c7j\" (UID: \"64c097c3-2f2a-4d0d-b7cb-8d1c88ffd3eb\") " pod="openstack/dnsmasq-dns-57d769cc4f-x9c7j" Dec 10 15:41:21 crc kubenswrapper[4755]: I1210 15:41:21.630036 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-x9c7j" Dec 10 15:41:21 crc kubenswrapper[4755]: I1210 15:41:21.980579 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Dec 10 15:41:21 crc kubenswrapper[4755]: I1210 15:41:21.981791 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 10 15:41:21 crc kubenswrapper[4755]: I1210 15:41:21.985346 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Dec 10 15:41:21 crc kubenswrapper[4755]: I1210 15:41:21.985445 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Dec 10 15:41:21 crc kubenswrapper[4755]: I1210 15:41:21.994435 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Dec 10 15:41:21 crc kubenswrapper[4755]: I1210 15:41:21.994626 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Dec 10 15:41:21 crc kubenswrapper[4755]: I1210 15:41:21.994716 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Dec 10 15:41:21 crc kubenswrapper[4755]: I1210 15:41:21.994872 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-jh72g" Dec 10 15:41:21 crc kubenswrapper[4755]: I1210 15:41:21.994892 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Dec 10 15:41:22 crc kubenswrapper[4755]: I1210 15:41:21.999760 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 10 15:41:22 crc kubenswrapper[4755]: I1210 15:41:22.084048 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/89e8722f-e9fc-4850-bb96-e51f9859805e-config-data\") pod \"rabbitmq-server-0\" (UID: \"89e8722f-e9fc-4850-bb96-e51f9859805e\") " pod="openstack/rabbitmq-server-0" Dec 10 15:41:22 crc kubenswrapper[4755]: I1210 15:41:22.084112 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5ng9\" (UniqueName: \"kubernetes.io/projected/89e8722f-e9fc-4850-bb96-e51f9859805e-kube-api-access-h5ng9\") pod \"rabbitmq-server-0\" (UID: 
\"89e8722f-e9fc-4850-bb96-e51f9859805e\") " pod="openstack/rabbitmq-server-0" Dec 10 15:41:22 crc kubenswrapper[4755]: I1210 15:41:22.084140 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/89e8722f-e9fc-4850-bb96-e51f9859805e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"89e8722f-e9fc-4850-bb96-e51f9859805e\") " pod="openstack/rabbitmq-server-0" Dec 10 15:41:22 crc kubenswrapper[4755]: I1210 15:41:22.084230 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-61a3430d-015d-4835-b4fc-5566f9913b53\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-61a3430d-015d-4835-b4fc-5566f9913b53\") pod \"rabbitmq-server-0\" (UID: \"89e8722f-e9fc-4850-bb96-e51f9859805e\") " pod="openstack/rabbitmq-server-0" Dec 10 15:41:22 crc kubenswrapper[4755]: I1210 15:41:22.084251 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/89e8722f-e9fc-4850-bb96-e51f9859805e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"89e8722f-e9fc-4850-bb96-e51f9859805e\") " pod="openstack/rabbitmq-server-0" Dec 10 15:41:22 crc kubenswrapper[4755]: I1210 15:41:22.084303 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/89e8722f-e9fc-4850-bb96-e51f9859805e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"89e8722f-e9fc-4850-bb96-e51f9859805e\") " pod="openstack/rabbitmq-server-0" Dec 10 15:41:22 crc kubenswrapper[4755]: I1210 15:41:22.084333 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/89e8722f-e9fc-4850-bb96-e51f9859805e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"89e8722f-e9fc-4850-bb96-e51f9859805e\") " pod="openstack/rabbitmq-server-0" Dec 10 15:41:22 crc kubenswrapper[4755]: I1210 15:41:22.084407 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/89e8722f-e9fc-4850-bb96-e51f9859805e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"89e8722f-e9fc-4850-bb96-e51f9859805e\") " pod="openstack/rabbitmq-server-0" Dec 10 15:41:22 crc kubenswrapper[4755]: I1210 15:41:22.084442 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/89e8722f-e9fc-4850-bb96-e51f9859805e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"89e8722f-e9fc-4850-bb96-e51f9859805e\") " pod="openstack/rabbitmq-server-0" Dec 10 15:41:22 crc kubenswrapper[4755]: I1210 15:41:22.084616 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/89e8722f-e9fc-4850-bb96-e51f9859805e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"89e8722f-e9fc-4850-bb96-e51f9859805e\") " pod="openstack/rabbitmq-server-0" Dec 10 15:41:22 crc kubenswrapper[4755]: I1210 15:41:22.084680 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/89e8722f-e9fc-4850-bb96-e51f9859805e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: 
\"89e8722f-e9fc-4850-bb96-e51f9859805e\") " pod="openstack/rabbitmq-server-0" Dec 10 15:41:22 crc kubenswrapper[4755]: I1210 15:41:22.186487 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/89e8722f-e9fc-4850-bb96-e51f9859805e-config-data\") pod \"rabbitmq-server-0\" (UID: \"89e8722f-e9fc-4850-bb96-e51f9859805e\") " pod="openstack/rabbitmq-server-0" Dec 10 15:41:22 crc kubenswrapper[4755]: I1210 15:41:22.186555 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5ng9\" (UniqueName: \"kubernetes.io/projected/89e8722f-e9fc-4850-bb96-e51f9859805e-kube-api-access-h5ng9\") pod \"rabbitmq-server-0\" (UID: \"89e8722f-e9fc-4850-bb96-e51f9859805e\") " pod="openstack/rabbitmq-server-0" Dec 10 15:41:22 crc kubenswrapper[4755]: I1210 15:41:22.186582 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/89e8722f-e9fc-4850-bb96-e51f9859805e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"89e8722f-e9fc-4850-bb96-e51f9859805e\") " pod="openstack/rabbitmq-server-0" Dec 10 15:41:22 crc kubenswrapper[4755]: I1210 15:41:22.186631 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-61a3430d-015d-4835-b4fc-5566f9913b53\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-61a3430d-015d-4835-b4fc-5566f9913b53\") pod \"rabbitmq-server-0\" (UID: \"89e8722f-e9fc-4850-bb96-e51f9859805e\") " pod="openstack/rabbitmq-server-0" Dec 10 15:41:22 crc kubenswrapper[4755]: I1210 15:41:22.186652 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/89e8722f-e9fc-4850-bb96-e51f9859805e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"89e8722f-e9fc-4850-bb96-e51f9859805e\") " pod="openstack/rabbitmq-server-0" Dec 10 15:41:22 crc kubenswrapper[4755]: I1210 15:41:22.186670 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/89e8722f-e9fc-4850-bb96-e51f9859805e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"89e8722f-e9fc-4850-bb96-e51f9859805e\") " pod="openstack/rabbitmq-server-0" Dec 10 15:41:22 crc kubenswrapper[4755]: I1210 15:41:22.186692 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/89e8722f-e9fc-4850-bb96-e51f9859805e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"89e8722f-e9fc-4850-bb96-e51f9859805e\") " pod="openstack/rabbitmq-server-0" Dec 10 15:41:22 crc kubenswrapper[4755]: I1210 15:41:22.186717 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/89e8722f-e9fc-4850-bb96-e51f9859805e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"89e8722f-e9fc-4850-bb96-e51f9859805e\") " pod="openstack/rabbitmq-server-0" Dec 10 15:41:22 crc kubenswrapper[4755]: I1210 15:41:22.186738 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/89e8722f-e9fc-4850-bb96-e51f9859805e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"89e8722f-e9fc-4850-bb96-e51f9859805e\") " pod="openstack/rabbitmq-server-0" Dec 10 15:41:22 crc kubenswrapper[4755]: I1210 15:41:22.186762 4755 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/89e8722f-e9fc-4850-bb96-e51f9859805e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"89e8722f-e9fc-4850-bb96-e51f9859805e\") " pod="openstack/rabbitmq-server-0" Dec 10 15:41:22 crc kubenswrapper[4755]: I1210 15:41:22.187526 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/89e8722f-e9fc-4850-bb96-e51f9859805e-config-data\") pod \"rabbitmq-server-0\" (UID: \"89e8722f-e9fc-4850-bb96-e51f9859805e\") " pod="openstack/rabbitmq-server-0" Dec 10 15:41:22 crc kubenswrapper[4755]: I1210 15:41:22.188151 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/89e8722f-e9fc-4850-bb96-e51f9859805e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"89e8722f-e9fc-4850-bb96-e51f9859805e\") " pod="openstack/rabbitmq-server-0" Dec 10 15:41:22 crc kubenswrapper[4755]: I1210 15:41:22.188602 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/89e8722f-e9fc-4850-bb96-e51f9859805e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"89e8722f-e9fc-4850-bb96-e51f9859805e\") " pod="openstack/rabbitmq-server-0" Dec 10 15:41:22 crc kubenswrapper[4755]: I1210 15:41:22.188705 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/89e8722f-e9fc-4850-bb96-e51f9859805e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"89e8722f-e9fc-4850-bb96-e51f9859805e\") " pod="openstack/rabbitmq-server-0" Dec 10 15:41:22 crc kubenswrapper[4755]: I1210 15:41:22.189020 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/89e8722f-e9fc-4850-bb96-e51f9859805e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"89e8722f-e9fc-4850-bb96-e51f9859805e\") " pod="openstack/rabbitmq-server-0" Dec 10 15:41:22 crc kubenswrapper[4755]: I1210 15:41:22.194001 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/89e8722f-e9fc-4850-bb96-e51f9859805e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"89e8722f-e9fc-4850-bb96-e51f9859805e\") " pod="openstack/rabbitmq-server-0" Dec 10 15:41:22 crc kubenswrapper[4755]: I1210 15:41:22.194452 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/89e8722f-e9fc-4850-bb96-e51f9859805e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"89e8722f-e9fc-4850-bb96-e51f9859805e\") " pod="openstack/rabbitmq-server-0" Dec 10 15:41:22 crc kubenswrapper[4755]: I1210 15:41:22.194669 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/89e8722f-e9fc-4850-bb96-e51f9859805e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"89e8722f-e9fc-4850-bb96-e51f9859805e\") " pod="openstack/rabbitmq-server-0" Dec 10 15:41:22 crc kubenswrapper[4755]: I1210 15:41:22.194749 4755 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 10 15:41:22 crc kubenswrapper[4755]: I1210 15:41:22.197560 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/89e8722f-e9fc-4850-bb96-e51f9859805e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"89e8722f-e9fc-4850-bb96-e51f9859805e\") " pod="openstack/rabbitmq-server-0" Dec 10 15:41:22 crc kubenswrapper[4755]: I1210 15:41:22.200676 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/89e8722f-e9fc-4850-bb96-e51f9859805e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"89e8722f-e9fc-4850-bb96-e51f9859805e\") " pod="openstack/rabbitmq-server-0" Dec 10 15:41:22 crc kubenswrapper[4755]: I1210 15:41:22.209622 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5ng9\" (UniqueName: \"kubernetes.io/projected/89e8722f-e9fc-4850-bb96-e51f9859805e-kube-api-access-h5ng9\") pod \"rabbitmq-server-0\" (UID: \"89e8722f-e9fc-4850-bb96-e51f9859805e\") " pod="openstack/rabbitmq-server-0" Dec 10 15:41:22 crc kubenswrapper[4755]: I1210 15:41:22.215541 4755 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-61a3430d-015d-4835-b4fc-5566f9913b53\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-61a3430d-015d-4835-b4fc-5566f9913b53\") pod \"rabbitmq-server-0\" (UID: \"89e8722f-e9fc-4850-bb96-e51f9859805e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d4252b507526a7e91a94eb844c0ffdc167a616b1bba916a7295ffc4900f2a3e9/globalmount\"" pod="openstack/rabbitmq-server-0" Dec 10 15:41:22 crc kubenswrapper[4755]: I1210 15:41:22.265768 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-61a3430d-015d-4835-b4fc-5566f9913b53\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-61a3430d-015d-4835-b4fc-5566f9913b53\") pod \"rabbitmq-server-0\" (UID: \"89e8722f-e9fc-4850-bb96-e51f9859805e\") " pod="openstack/rabbitmq-server-0" Dec 10 15:41:22 crc kubenswrapper[4755]: I1210 15:41:22.308298 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 10 15:41:22 crc kubenswrapper[4755]: I1210 15:41:22.451615 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 10 15:41:22 crc kubenswrapper[4755]: I1210 15:41:22.453176 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:41:22 crc kubenswrapper[4755]: I1210 15:41:22.456432 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-mjpt2" Dec 10 15:41:22 crc kubenswrapper[4755]: I1210 15:41:22.456443 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Dec 10 15:41:22 crc kubenswrapper[4755]: I1210 15:41:22.456540 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Dec 10 15:41:22 crc kubenswrapper[4755]: I1210 15:41:22.458043 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Dec 10 15:41:22 crc kubenswrapper[4755]: I1210 15:41:22.458203 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Dec 10 15:41:22 crc kubenswrapper[4755]: I1210 15:41:22.458248 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Dec 10 15:41:22 crc kubenswrapper[4755]: I1210 15:41:22.458392 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Dec 10 15:41:22 crc kubenswrapper[4755]: I1210 15:41:22.507781 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 10 15:41:22 crc kubenswrapper[4755]: I1210 15:41:22.598315 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fb480bc7-6936-4208-964b-44cffd08f907-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"fb480bc7-6936-4208-964b-44cffd08f907\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:41:22 crc kubenswrapper[4755]: I1210 15:41:22.598786 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fb480bc7-6936-4208-964b-44cffd08f907-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fb480bc7-6936-4208-964b-44cffd08f907\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:41:22 crc kubenswrapper[4755]: I1210 15:41:22.598869 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fb480bc7-6936-4208-964b-44cffd08f907-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"fb480bc7-6936-4208-964b-44cffd08f907\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:41:22 crc kubenswrapper[4755]: I1210 15:41:22.598994 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-77593632-dc40-4f21-b52e-726e9f34d0e5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-77593632-dc40-4f21-b52e-726e9f34d0e5\") pod \"rabbitmq-cell1-server-0\" (UID: \"fb480bc7-6936-4208-964b-44cffd08f907\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:41:22 crc kubenswrapper[4755]: I1210 15:41:22.599148 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fb480bc7-6936-4208-964b-44cffd08f907-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"fb480bc7-6936-4208-964b-44cffd08f907\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:41:22 crc kubenswrapper[4755]: I1210 15:41:22.599278 4755 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tj5zs\" (UniqueName: \"kubernetes.io/projected/fb480bc7-6936-4208-964b-44cffd08f907-kube-api-access-tj5zs\") pod \"rabbitmq-cell1-server-0\" (UID: \"fb480bc7-6936-4208-964b-44cffd08f907\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:41:22 crc kubenswrapper[4755]: I1210 15:41:22.599376 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fb480bc7-6936-4208-964b-44cffd08f907-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fb480bc7-6936-4208-964b-44cffd08f907\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:41:22 crc kubenswrapper[4755]: I1210 15:41:22.599480 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fb480bc7-6936-4208-964b-44cffd08f907-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"fb480bc7-6936-4208-964b-44cffd08f907\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:41:22 crc kubenswrapper[4755]: I1210 15:41:22.599581 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fb480bc7-6936-4208-964b-44cffd08f907-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"fb480bc7-6936-4208-964b-44cffd08f907\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:41:22 crc kubenswrapper[4755]: I1210 15:41:22.599701 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fb480bc7-6936-4208-964b-44cffd08f907-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"fb480bc7-6936-4208-964b-44cffd08f907\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:41:22 crc kubenswrapper[4755]: I1210 15:41:22.599915 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fb480bc7-6936-4208-964b-44cffd08f907-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"fb480bc7-6936-4208-964b-44cffd08f907\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:41:22 crc kubenswrapper[4755]: I1210 15:41:22.703022 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fb480bc7-6936-4208-964b-44cffd08f907-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fb480bc7-6936-4208-964b-44cffd08f907\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:41:22 crc kubenswrapper[4755]: I1210 15:41:22.703092 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fb480bc7-6936-4208-964b-44cffd08f907-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"fb480bc7-6936-4208-964b-44cffd08f907\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:41:22 crc kubenswrapper[4755]: I1210 15:41:22.703141 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fb480bc7-6936-4208-964b-44cffd08f907-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"fb480bc7-6936-4208-964b-44cffd08f907\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:41:22 crc kubenswrapper[4755]: I1210 15:41:22.703170 4755 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fb480bc7-6936-4208-964b-44cffd08f907-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"fb480bc7-6936-4208-964b-44cffd08f907\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:41:22 crc kubenswrapper[4755]: I1210 15:41:22.703194 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fb480bc7-6936-4208-964b-44cffd08f907-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"fb480bc7-6936-4208-964b-44cffd08f907\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:41:22 crc kubenswrapper[4755]: I1210 15:41:22.704083 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fb480bc7-6936-4208-964b-44cffd08f907-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"fb480bc7-6936-4208-964b-44cffd08f907\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:41:22 crc kubenswrapper[4755]: I1210 15:41:22.704214 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fb480bc7-6936-4208-964b-44cffd08f907-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"fb480bc7-6936-4208-964b-44cffd08f907\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:41:22 crc kubenswrapper[4755]: I1210 15:41:22.704221 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fb480bc7-6936-4208-964b-44cffd08f907-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fb480bc7-6936-4208-964b-44cffd08f907\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:41:22 crc kubenswrapper[4755]: I1210 15:41:22.704337 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fb480bc7-6936-4208-964b-44cffd08f907-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"fb480bc7-6936-4208-964b-44cffd08f907\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:41:22 crc kubenswrapper[4755]: I1210 15:41:22.705638 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fb480bc7-6936-4208-964b-44cffd08f907-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"fb480bc7-6936-4208-964b-44cffd08f907\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:41:22 crc kubenswrapper[4755]: I1210 15:41:22.705700 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fb480bc7-6936-4208-964b-44cffd08f907-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fb480bc7-6936-4208-964b-44cffd08f907\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:41:22 crc kubenswrapper[4755]: I1210 15:41:22.705725 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fb480bc7-6936-4208-964b-44cffd08f907-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"fb480bc7-6936-4208-964b-44cffd08f907\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:41:22 crc kubenswrapper[4755]: I1210 15:41:22.705828 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-77593632-dc40-4f21-b52e-726e9f34d0e5\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-77593632-dc40-4f21-b52e-726e9f34d0e5\") pod \"rabbitmq-cell1-server-0\" (UID: \"fb480bc7-6936-4208-964b-44cffd08f907\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:41:22 crc kubenswrapper[4755]: I1210 15:41:22.705879 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fb480bc7-6936-4208-964b-44cffd08f907-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"fb480bc7-6936-4208-964b-44cffd08f907\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:41:22 crc kubenswrapper[4755]: I1210 15:41:22.705946 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tj5zs\" (UniqueName: \"kubernetes.io/projected/fb480bc7-6936-4208-964b-44cffd08f907-kube-api-access-tj5zs\") pod \"rabbitmq-cell1-server-0\" (UID: \"fb480bc7-6936-4208-964b-44cffd08f907\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:41:22 crc kubenswrapper[4755]: I1210 15:41:22.708209 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fb480bc7-6936-4208-964b-44cffd08f907-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fb480bc7-6936-4208-964b-44cffd08f907\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:41:22 crc kubenswrapper[4755]: I1210 15:41:22.711179 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fb480bc7-6936-4208-964b-44cffd08f907-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"fb480bc7-6936-4208-964b-44cffd08f907\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:41:22 crc kubenswrapper[4755]: I1210 15:41:22.711566 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fb480bc7-6936-4208-964b-44cffd08f907-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"fb480bc7-6936-4208-964b-44cffd08f907\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:41:22 crc kubenswrapper[4755]: I1210 15:41:22.732056 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fb480bc7-6936-4208-964b-44cffd08f907-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"fb480bc7-6936-4208-964b-44cffd08f907\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:41:22 crc kubenswrapper[4755]: I1210 15:41:22.732074 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fb480bc7-6936-4208-964b-44cffd08f907-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"fb480bc7-6936-4208-964b-44cffd08f907\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:41:22 crc kubenswrapper[4755]: I1210 15:41:22.734181 4755 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 10 15:41:22 crc kubenswrapper[4755]: I1210 15:41:22.734220 4755 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-77593632-dc40-4f21-b52e-726e9f34d0e5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-77593632-dc40-4f21-b52e-726e9f34d0e5\") pod \"rabbitmq-cell1-server-0\" (UID: \"fb480bc7-6936-4208-964b-44cffd08f907\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/46fdf78cc94e5747c93bd944a88b2597f9ef25d3ce7984ed1662cc52337b7889/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:41:22 crc kubenswrapper[4755]: I1210 15:41:22.734312 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tj5zs\" (UniqueName: \"kubernetes.io/projected/fb480bc7-6936-4208-964b-44cffd08f907-kube-api-access-tj5zs\") pod \"rabbitmq-cell1-server-0\" (UID: \"fb480bc7-6936-4208-964b-44cffd08f907\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:41:22 crc kubenswrapper[4755]: I1210 15:41:22.771203 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-77593632-dc40-4f21-b52e-726e9f34d0e5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-77593632-dc40-4f21-b52e-726e9f34d0e5\") pod \"rabbitmq-cell1-server-0\" (UID: \"fb480bc7-6936-4208-964b-44cffd08f907\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:41:22 crc kubenswrapper[4755]: I1210 15:41:22.782022 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:41:23 crc kubenswrapper[4755]: I1210 15:41:23.475291 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Dec 10 15:41:23 crc kubenswrapper[4755]: I1210 15:41:23.476940 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Dec 10 15:41:23 crc kubenswrapper[4755]: I1210 15:41:23.484309 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-zqb2b" Dec 10 15:41:23 crc kubenswrapper[4755]: I1210 15:41:23.484953 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Dec 10 15:41:23 crc kubenswrapper[4755]: I1210 15:41:23.485139 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Dec 10 15:41:23 crc kubenswrapper[4755]: I1210 15:41:23.485204 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Dec 10 15:41:23 crc kubenswrapper[4755]: I1210 15:41:23.490201 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Dec 10 15:41:23 crc kubenswrapper[4755]: I1210 15:41:23.490418 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 10 15:41:23 crc kubenswrapper[4755]: I1210 15:41:23.620439 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/48b9cc99-2595-445c-aca6-b13972e95324-config-data-generated\") pod \"openstack-galera-0\" (UID: \"48b9cc99-2595-445c-aca6-b13972e95324\") " pod="openstack/openstack-galera-0" Dec 10 15:41:23 crc kubenswrapper[4755]: I1210 15:41:23.620539 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48b9cc99-2595-445c-aca6-b13972e95324-operator-scripts\") pod \"openstack-galera-0\" (UID: \"48b9cc99-2595-445c-aca6-b13972e95324\") " pod="openstack/openstack-galera-0" Dec 10 15:41:23 crc kubenswrapper[4755]: I1210 15:41:23.620655 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/48b9cc99-2595-445c-aca6-b13972e95324-config-data-default\") pod \"openstack-galera-0\" (UID: \"48b9cc99-2595-445c-aca6-b13972e95324\") " pod="openstack/openstack-galera-0" Dec 10 15:41:23 crc kubenswrapper[4755]: I1210 15:41:23.620699 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/48b9cc99-2595-445c-aca6-b13972e95324-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"48b9cc99-2595-445c-aca6-b13972e95324\") " pod="openstack/openstack-galera-0" Dec 10 15:41:23 crc kubenswrapper[4755]: I1210 15:41:23.620792 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9z6s\" (UniqueName: \"kubernetes.io/projected/48b9cc99-2595-445c-aca6-b13972e95324-kube-api-access-g9z6s\") pod \"openstack-galera-0\" (UID: \"48b9cc99-2595-445c-aca6-b13972e95324\") " pod="openstack/openstack-galera-0" Dec 10 15:41:23 crc kubenswrapper[4755]: I1210 15:41:23.620843 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/48b9cc99-2595-445c-aca6-b13972e95324-kolla-config\") pod \"openstack-galera-0\" (UID: \"48b9cc99-2595-445c-aca6-b13972e95324\") " pod="openstack/openstack-galera-0" Dec 10 15:41:23 crc kubenswrapper[4755]: I1210 15:41:23.620926 4755 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e3ae8073-244e-4251-95b8-ad1af0de8ab0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e3ae8073-244e-4251-95b8-ad1af0de8ab0\") pod \"openstack-galera-0\" (UID: \"48b9cc99-2595-445c-aca6-b13972e95324\") " pod="openstack/openstack-galera-0" Dec 10 15:41:23 crc kubenswrapper[4755]: I1210 15:41:23.620987 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48b9cc99-2595-445c-aca6-b13972e95324-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"48b9cc99-2595-445c-aca6-b13972e95324\") " pod="openstack/openstack-galera-0" Dec 10 15:41:23 crc kubenswrapper[4755]: I1210 15:41:23.722781 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48b9cc99-2595-445c-aca6-b13972e95324-operator-scripts\") pod \"openstack-galera-0\" (UID: \"48b9cc99-2595-445c-aca6-b13972e95324\") " pod="openstack/openstack-galera-0" Dec 10 15:41:23 crc kubenswrapper[4755]: I1210 15:41:23.722832 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/48b9cc99-2595-445c-aca6-b13972e95324-config-data-default\") pod \"openstack-galera-0\" (UID: \"48b9cc99-2595-445c-aca6-b13972e95324\") " pod="openstack/openstack-galera-0" Dec 10 15:41:23 crc kubenswrapper[4755]: I1210 15:41:23.723039 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/48b9cc99-2595-445c-aca6-b13972e95324-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"48b9cc99-2595-445c-aca6-b13972e95324\") " pod="openstack/openstack-galera-0" Dec 10 15:41:23 crc kubenswrapper[4755]: I1210 15:41:23.723065 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9z6s\" (UniqueName: \"kubernetes.io/projected/48b9cc99-2595-445c-aca6-b13972e95324-kube-api-access-g9z6s\") pod \"openstack-galera-0\" (UID: \"48b9cc99-2595-445c-aca6-b13972e95324\") " pod="openstack/openstack-galera-0" Dec 10 15:41:23 crc kubenswrapper[4755]: I1210 15:41:23.723094 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/48b9cc99-2595-445c-aca6-b13972e95324-kolla-config\") pod \"openstack-galera-0\" (UID: \"48b9cc99-2595-445c-aca6-b13972e95324\") " pod="openstack/openstack-galera-0" Dec 10 15:41:23 crc kubenswrapper[4755]: I1210 15:41:23.723138 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e3ae8073-244e-4251-95b8-ad1af0de8ab0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e3ae8073-244e-4251-95b8-ad1af0de8ab0\") pod \"openstack-galera-0\" (UID: \"48b9cc99-2595-445c-aca6-b13972e95324\") " pod="openstack/openstack-galera-0" Dec 10 15:41:23 crc kubenswrapper[4755]: I1210 15:41:23.723177 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48b9cc99-2595-445c-aca6-b13972e95324-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"48b9cc99-2595-445c-aca6-b13972e95324\") " pod="openstack/openstack-galera-0" Dec 10 15:41:23 crc kubenswrapper[4755]: I1210 15:41:23.723205 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" 
(UniqueName: \"kubernetes.io/empty-dir/48b9cc99-2595-445c-aca6-b13972e95324-config-data-generated\") pod \"openstack-galera-0\" (UID: \"48b9cc99-2595-445c-aca6-b13972e95324\") " pod="openstack/openstack-galera-0" Dec 10 15:41:23 crc kubenswrapper[4755]: I1210 15:41:23.723608 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/48b9cc99-2595-445c-aca6-b13972e95324-config-data-generated\") pod \"openstack-galera-0\" (UID: \"48b9cc99-2595-445c-aca6-b13972e95324\") " pod="openstack/openstack-galera-0" Dec 10 15:41:23 crc kubenswrapper[4755]: I1210 15:41:23.727718 4755 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 10 15:41:23 crc kubenswrapper[4755]: I1210 15:41:23.727779 4755 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e3ae8073-244e-4251-95b8-ad1af0de8ab0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e3ae8073-244e-4251-95b8-ad1af0de8ab0\") pod \"openstack-galera-0\" (UID: \"48b9cc99-2595-445c-aca6-b13972e95324\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/cab9d35e932d5f31d6337bd18aff0a4a660214ebc909a5f0447318272f05e88f/globalmount\"" pod="openstack/openstack-galera-0" Dec 10 15:41:23 crc kubenswrapper[4755]: I1210 15:41:23.728566 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Dec 10 15:41:23 crc kubenswrapper[4755]: I1210 15:41:23.728784 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Dec 10 15:41:23 crc kubenswrapper[4755]: I1210 15:41:23.728954 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Dec 10 15:41:23 crc kubenswrapper[4755]: I1210 15:41:23.733962 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Dec 10 15:41:23 crc kubenswrapper[4755]: I1210 15:41:23.738319 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/48b9cc99-2595-445c-aca6-b13972e95324-kolla-config\") pod \"openstack-galera-0\" (UID: \"48b9cc99-2595-445c-aca6-b13972e95324\") " pod="openstack/openstack-galera-0" Dec 10 15:41:23 crc kubenswrapper[4755]: I1210 15:41:23.738741 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48b9cc99-2595-445c-aca6-b13972e95324-operator-scripts\") pod \"openstack-galera-0\" (UID: \"48b9cc99-2595-445c-aca6-b13972e95324\") " pod="openstack/openstack-galera-0" Dec 10 15:41:23 crc kubenswrapper[4755]: I1210 15:41:23.748580 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9z6s\" (UniqueName: \"kubernetes.io/projected/48b9cc99-2595-445c-aca6-b13972e95324-kube-api-access-g9z6s\") pod \"openstack-galera-0\" (UID: \"48b9cc99-2595-445c-aca6-b13972e95324\") " pod="openstack/openstack-galera-0" Dec 10 15:41:23 crc kubenswrapper[4755]: I1210 15:41:23.761785 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/48b9cc99-2595-445c-aca6-b13972e95324-config-data-default\") pod \"openstack-galera-0\" (UID: \"48b9cc99-2595-445c-aca6-b13972e95324\") " pod="openstack/openstack-galera-0" Dec 10 15:41:23 crc kubenswrapper[4755]: I1210 
15:41:23.767115 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/48b9cc99-2595-445c-aca6-b13972e95324-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"48b9cc99-2595-445c-aca6-b13972e95324\") " pod="openstack/openstack-galera-0" Dec 10 15:41:23 crc kubenswrapper[4755]: I1210 15:41:23.778587 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48b9cc99-2595-445c-aca6-b13972e95324-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"48b9cc99-2595-445c-aca6-b13972e95324\") " pod="openstack/openstack-galera-0" Dec 10 15:41:23 crc kubenswrapper[4755]: I1210 15:41:23.844657 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e3ae8073-244e-4251-95b8-ad1af0de8ab0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e3ae8073-244e-4251-95b8-ad1af0de8ab0\") pod \"openstack-galera-0\" (UID: \"48b9cc99-2595-445c-aca6-b13972e95324\") " pod="openstack/openstack-galera-0" Dec 10 15:41:24 crc kubenswrapper[4755]: I1210 15:41:24.105123 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-zqb2b" Dec 10 15:41:24 crc kubenswrapper[4755]: I1210 15:41:24.112794 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Dec 10 15:41:24 crc kubenswrapper[4755]: I1210 15:41:24.831764 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 10 15:41:24 crc kubenswrapper[4755]: I1210 15:41:24.832948 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 10 15:41:24 crc kubenswrapper[4755]: I1210 15:41:24.838045 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Dec 10 15:41:24 crc kubenswrapper[4755]: I1210 15:41:24.838096 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-9c2fj" Dec 10 15:41:24 crc kubenswrapper[4755]: I1210 15:41:24.838641 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Dec 10 15:41:24 crc kubenswrapper[4755]: I1210 15:41:24.838821 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Dec 10 15:41:24 crc kubenswrapper[4755]: I1210 15:41:24.845782 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 10 15:41:24 crc kubenswrapper[4755]: I1210 15:41:24.942638 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5164a94c-3c1f-4adb-8fe7-ecb4a0382f3b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5164a94c-3c1f-4adb-8fe7-ecb4a0382f3b\") pod \"openstack-cell1-galera-0\" (UID: \"c0cfe6de-3c35-486c-a767-35484b3a0f3d\") " pod="openstack/openstack-cell1-galera-0" Dec 10 15:41:24 crc kubenswrapper[4755]: I1210 15:41:24.942691 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0cfe6de-3c35-486c-a767-35484b3a0f3d-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"c0cfe6de-3c35-486c-a767-35484b3a0f3d\") " pod="openstack/openstack-cell1-galera-0" Dec 10 15:41:24 crc kubenswrapper[4755]: I1210 15:41:24.942727 4755 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0cfe6de-3c35-486c-a767-35484b3a0f3d-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"c0cfe6de-3c35-486c-a767-35484b3a0f3d\") " pod="openstack/openstack-cell1-galera-0" Dec 10 15:41:24 crc kubenswrapper[4755]: I1210 15:41:24.942758 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c0cfe6de-3c35-486c-a767-35484b3a0f3d-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"c0cfe6de-3c35-486c-a767-35484b3a0f3d\") " pod="openstack/openstack-cell1-galera-0" Dec 10 15:41:24 crc kubenswrapper[4755]: I1210 15:41:24.942792 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c0cfe6de-3c35-486c-a767-35484b3a0f3d-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"c0cfe6de-3c35-486c-a767-35484b3a0f3d\") " pod="openstack/openstack-cell1-galera-0" Dec 10 15:41:24 crc kubenswrapper[4755]: I1210 15:41:24.942819 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0cfe6de-3c35-486c-a767-35484b3a0f3d-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"c0cfe6de-3c35-486c-a767-35484b3a0f3d\") " pod="openstack/openstack-cell1-galera-0" Dec 10 15:41:24 crc kubenswrapper[4755]: I1210 15:41:24.942848 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfgw4\" (UniqueName: \"kubernetes.io/projected/c0cfe6de-3c35-486c-a767-35484b3a0f3d-kube-api-access-kfgw4\") pod \"openstack-cell1-galera-0\" (UID: \"c0cfe6de-3c35-486c-a767-35484b3a0f3d\") " pod="openstack/openstack-cell1-galera-0" Dec 10 15:41:24 crc kubenswrapper[4755]: I1210 15:41:24.942913 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c0cfe6de-3c35-486c-a767-35484b3a0f3d-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"c0cfe6de-3c35-486c-a767-35484b3a0f3d\") " pod="openstack/openstack-cell1-galera-0" Dec 10 15:41:25 crc kubenswrapper[4755]: I1210 15:41:25.044785 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0cfe6de-3c35-486c-a767-35484b3a0f3d-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"c0cfe6de-3c35-486c-a767-35484b3a0f3d\") " pod="openstack/openstack-cell1-galera-0" Dec 10 15:41:25 crc kubenswrapper[4755]: I1210 15:41:25.044852 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfgw4\" (UniqueName: \"kubernetes.io/projected/c0cfe6de-3c35-486c-a767-35484b3a0f3d-kube-api-access-kfgw4\") pod \"openstack-cell1-galera-0\" (UID: \"c0cfe6de-3c35-486c-a767-35484b3a0f3d\") " pod="openstack/openstack-cell1-galera-0" Dec 10 15:41:25 crc kubenswrapper[4755]: I1210 15:41:25.044913 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c0cfe6de-3c35-486c-a767-35484b3a0f3d-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"c0cfe6de-3c35-486c-a767-35484b3a0f3d\") " pod="openstack/openstack-cell1-galera-0" Dec 10 15:41:25 crc 
kubenswrapper[4755]: I1210 15:41:25.044987 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-5164a94c-3c1f-4adb-8fe7-ecb4a0382f3b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5164a94c-3c1f-4adb-8fe7-ecb4a0382f3b\") pod \"openstack-cell1-galera-0\" (UID: \"c0cfe6de-3c35-486c-a767-35484b3a0f3d\") " pod="openstack/openstack-cell1-galera-0" Dec 10 15:41:25 crc kubenswrapper[4755]: I1210 15:41:25.045011 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0cfe6de-3c35-486c-a767-35484b3a0f3d-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"c0cfe6de-3c35-486c-a767-35484b3a0f3d\") " pod="openstack/openstack-cell1-galera-0" Dec 10 15:41:25 crc kubenswrapper[4755]: I1210 15:41:25.045034 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0cfe6de-3c35-486c-a767-35484b3a0f3d-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"c0cfe6de-3c35-486c-a767-35484b3a0f3d\") " pod="openstack/openstack-cell1-galera-0" Dec 10 15:41:25 crc kubenswrapper[4755]: I1210 15:41:25.045065 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c0cfe6de-3c35-486c-a767-35484b3a0f3d-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"c0cfe6de-3c35-486c-a767-35484b3a0f3d\") " pod="openstack/openstack-cell1-galera-0" Dec 10 15:41:25 crc kubenswrapper[4755]: I1210 15:41:25.045098 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c0cfe6de-3c35-486c-a767-35484b3a0f3d-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"c0cfe6de-3c35-486c-a767-35484b3a0f3d\") " pod="openstack/openstack-cell1-galera-0" Dec 10 15:41:25 crc kubenswrapper[4755]: I1210 15:41:25.046299 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c0cfe6de-3c35-486c-a767-35484b3a0f3d-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"c0cfe6de-3c35-486c-a767-35484b3a0f3d\") " pod="openstack/openstack-cell1-galera-0" Dec 10 15:41:25 crc kubenswrapper[4755]: I1210 15:41:25.047769 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c0cfe6de-3c35-486c-a767-35484b3a0f3d-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"c0cfe6de-3c35-486c-a767-35484b3a0f3d\") " pod="openstack/openstack-cell1-galera-0" Dec 10 15:41:25 crc kubenswrapper[4755]: I1210 15:41:25.048049 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c0cfe6de-3c35-486c-a767-35484b3a0f3d-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"c0cfe6de-3c35-486c-a767-35484b3a0f3d\") " pod="openstack/openstack-cell1-galera-0" Dec 10 15:41:25 crc kubenswrapper[4755]: I1210 15:41:25.088747 4755 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 10 15:41:25 crc kubenswrapper[4755]: I1210 15:41:25.088796 4755 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-5164a94c-3c1f-4adb-8fe7-ecb4a0382f3b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5164a94c-3c1f-4adb-8fe7-ecb4a0382f3b\") pod \"openstack-cell1-galera-0\" (UID: \"c0cfe6de-3c35-486c-a767-35484b3a0f3d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/243d6630e2aa3ff027dc086038f92625bcff70f8d8e09fc8932113208ffb2eaf/globalmount\"" pod="openstack/openstack-cell1-galera-0" Dec 10 15:41:25 crc kubenswrapper[4755]: I1210 15:41:25.089412 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0cfe6de-3c35-486c-a767-35484b3a0f3d-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"c0cfe6de-3c35-486c-a767-35484b3a0f3d\") " pod="openstack/openstack-cell1-galera-0" Dec 10 15:41:25 crc kubenswrapper[4755]: I1210 15:41:25.089770 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0cfe6de-3c35-486c-a767-35484b3a0f3d-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"c0cfe6de-3c35-486c-a767-35484b3a0f3d\") " pod="openstack/openstack-cell1-galera-0" Dec 10 15:41:25 crc kubenswrapper[4755]: I1210 15:41:25.091295 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0cfe6de-3c35-486c-a767-35484b3a0f3d-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"c0cfe6de-3c35-486c-a767-35484b3a0f3d\") " pod="openstack/openstack-cell1-galera-0" Dec 10 15:41:25 crc kubenswrapper[4755]: I1210 15:41:25.101502 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfgw4\" (UniqueName: \"kubernetes.io/projected/c0cfe6de-3c35-486c-a767-35484b3a0f3d-kube-api-access-kfgw4\") pod \"openstack-cell1-galera-0\" (UID: \"c0cfe6de-3c35-486c-a767-35484b3a0f3d\") " pod="openstack/openstack-cell1-galera-0" Dec 10 15:41:25 crc kubenswrapper[4755]: I1210 15:41:25.136174 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-5164a94c-3c1f-4adb-8fe7-ecb4a0382f3b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5164a94c-3c1f-4adb-8fe7-ecb4a0382f3b\") pod \"openstack-cell1-galera-0\" (UID: \"c0cfe6de-3c35-486c-a767-35484b3a0f3d\") " pod="openstack/openstack-cell1-galera-0" Dec 10 15:41:25 crc kubenswrapper[4755]: I1210 15:41:25.186427 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Dec 10 15:41:25 crc kubenswrapper[4755]: I1210 15:41:25.187374 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Dec 10 15:41:25 crc kubenswrapper[4755]: I1210 15:41:25.189671 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-dgnnn" Dec 10 15:41:25 crc kubenswrapper[4755]: I1210 15:41:25.189857 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Dec 10 15:41:25 crc kubenswrapper[4755]: I1210 15:41:25.190798 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Dec 10 15:41:25 crc kubenswrapper[4755]: I1210 15:41:25.200968 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 10 15:41:25 crc kubenswrapper[4755]: I1210 15:41:25.219820 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 10 15:41:25 crc kubenswrapper[4755]: I1210 15:41:25.247114 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6fc3b5b-a2c7-404f-8435-6c2a72d2c4a8-combined-ca-bundle\") pod \"memcached-0\" (UID: \"b6fc3b5b-a2c7-404f-8435-6c2a72d2c4a8\") " pod="openstack/memcached-0" Dec 10 15:41:25 crc kubenswrapper[4755]: I1210 15:41:25.247175 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b6fc3b5b-a2c7-404f-8435-6c2a72d2c4a8-config-data\") pod \"memcached-0\" (UID: \"b6fc3b5b-a2c7-404f-8435-6c2a72d2c4a8\") " pod="openstack/memcached-0" Dec 10 15:41:25 crc kubenswrapper[4755]: I1210 15:41:25.247237 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b6fc3b5b-a2c7-404f-8435-6c2a72d2c4a8-kolla-config\") pod \"memcached-0\" (UID: \"b6fc3b5b-a2c7-404f-8435-6c2a72d2c4a8\") " pod="openstack/memcached-0" Dec 10 15:41:25 crc kubenswrapper[4755]: I1210 15:41:25.247253 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6fc3b5b-a2c7-404f-8435-6c2a72d2c4a8-memcached-tls-certs\") pod \"memcached-0\" (UID: \"b6fc3b5b-a2c7-404f-8435-6c2a72d2c4a8\") " pod="openstack/memcached-0" Dec 10 15:41:25 crc kubenswrapper[4755]: I1210 15:41:25.247277 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56xq6\" (UniqueName: \"kubernetes.io/projected/b6fc3b5b-a2c7-404f-8435-6c2a72d2c4a8-kube-api-access-56xq6\") pod \"memcached-0\" (UID: \"b6fc3b5b-a2c7-404f-8435-6c2a72d2c4a8\") " pod="openstack/memcached-0" Dec 10 15:41:25 crc kubenswrapper[4755]: I1210 15:41:25.348356 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6fc3b5b-a2c7-404f-8435-6c2a72d2c4a8-combined-ca-bundle\") pod \"memcached-0\" (UID: \"b6fc3b5b-a2c7-404f-8435-6c2a72d2c4a8\") " pod="openstack/memcached-0" Dec 10 15:41:25 crc kubenswrapper[4755]: I1210 15:41:25.348419 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b6fc3b5b-a2c7-404f-8435-6c2a72d2c4a8-config-data\") pod \"memcached-0\" (UID: \"b6fc3b5b-a2c7-404f-8435-6c2a72d2c4a8\") " pod="openstack/memcached-0" Dec 10 15:41:25 crc kubenswrapper[4755]: I1210 15:41:25.348458 4755 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b6fc3b5b-a2c7-404f-8435-6c2a72d2c4a8-kolla-config\") pod \"memcached-0\" (UID: \"b6fc3b5b-a2c7-404f-8435-6c2a72d2c4a8\") " pod="openstack/memcached-0" Dec 10 15:41:25 crc kubenswrapper[4755]: I1210 15:41:25.348493 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6fc3b5b-a2c7-404f-8435-6c2a72d2c4a8-memcached-tls-certs\") pod \"memcached-0\" (UID: \"b6fc3b5b-a2c7-404f-8435-6c2a72d2c4a8\") " pod="openstack/memcached-0" Dec 10 15:41:25 crc kubenswrapper[4755]: I1210 15:41:25.348512 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56xq6\" (UniqueName: \"kubernetes.io/projected/b6fc3b5b-a2c7-404f-8435-6c2a72d2c4a8-kube-api-access-56xq6\") pod \"memcached-0\" (UID: \"b6fc3b5b-a2c7-404f-8435-6c2a72d2c4a8\") " pod="openstack/memcached-0" Dec 10 15:41:25 crc kubenswrapper[4755]: I1210 15:41:25.349983 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b6fc3b5b-a2c7-404f-8435-6c2a72d2c4a8-config-data\") pod \"memcached-0\" (UID: \"b6fc3b5b-a2c7-404f-8435-6c2a72d2c4a8\") " pod="openstack/memcached-0" Dec 10 15:41:25 crc kubenswrapper[4755]: I1210 15:41:25.350137 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b6fc3b5b-a2c7-404f-8435-6c2a72d2c4a8-kolla-config\") pod \"memcached-0\" (UID: \"b6fc3b5b-a2c7-404f-8435-6c2a72d2c4a8\") " pod="openstack/memcached-0" Dec 10 15:41:25 crc kubenswrapper[4755]: I1210 15:41:25.352758 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6fc3b5b-a2c7-404f-8435-6c2a72d2c4a8-combined-ca-bundle\") pod \"memcached-0\" (UID: \"b6fc3b5b-a2c7-404f-8435-6c2a72d2c4a8\") " pod="openstack/memcached-0" Dec 10 15:41:25 crc kubenswrapper[4755]: I1210 15:41:25.364661 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6fc3b5b-a2c7-404f-8435-6c2a72d2c4a8-memcached-tls-certs\") pod \"memcached-0\" (UID: \"b6fc3b5b-a2c7-404f-8435-6c2a72d2c4a8\") " pod="openstack/memcached-0" Dec 10 15:41:25 crc kubenswrapper[4755]: I1210 15:41:25.373115 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56xq6\" (UniqueName: \"kubernetes.io/projected/b6fc3b5b-a2c7-404f-8435-6c2a72d2c4a8-kube-api-access-56xq6\") pod \"memcached-0\" (UID: \"b6fc3b5b-a2c7-404f-8435-6c2a72d2c4a8\") " pod="openstack/memcached-0" Dec 10 15:41:25 crc kubenswrapper[4755]: I1210 15:41:25.504873 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Dec 10 15:41:26 crc kubenswrapper[4755]: I1210 15:41:26.956888 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Dec 10 15:41:26 crc kubenswrapper[4755]: I1210 15:41:26.957901 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 10 15:41:26 crc kubenswrapper[4755]: I1210 15:41:26.963285 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-v2btb" Dec 10 15:41:26 crc kubenswrapper[4755]: I1210 15:41:26.976263 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 10 15:41:27 crc kubenswrapper[4755]: I1210 15:41:27.075753 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltt7r\" (UniqueName: \"kubernetes.io/projected/60343e12-2433-4e98-9759-09d5e2b9d82b-kube-api-access-ltt7r\") pod \"kube-state-metrics-0\" (UID: \"60343e12-2433-4e98-9759-09d5e2b9d82b\") " pod="openstack/kube-state-metrics-0" Dec 10 15:41:27 crc kubenswrapper[4755]: I1210 15:41:27.181339 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltt7r\" (UniqueName: \"kubernetes.io/projected/60343e12-2433-4e98-9759-09d5e2b9d82b-kube-api-access-ltt7r\") pod \"kube-state-metrics-0\" (UID: \"60343e12-2433-4e98-9759-09d5e2b9d82b\") " pod="openstack/kube-state-metrics-0" Dec 10 15:41:27 crc kubenswrapper[4755]: I1210 15:41:27.222945 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltt7r\" (UniqueName: \"kubernetes.io/projected/60343e12-2433-4e98-9759-09d5e2b9d82b-kube-api-access-ltt7r\") pod \"kube-state-metrics-0\" (UID: \"60343e12-2433-4e98-9759-09d5e2b9d82b\") " pod="openstack/kube-state-metrics-0" Dec 10 15:41:27 crc kubenswrapper[4755]: I1210 15:41:27.282065 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 10 15:41:27 crc kubenswrapper[4755]: I1210 15:41:27.769834 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/alertmanager-metric-storage-0"] Dec 10 15:41:27 crc kubenswrapper[4755]: I1210 15:41:27.771845 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Dec 10 15:41:27 crc kubenswrapper[4755]: I1210 15:41:27.775150 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-generated" Dec 10 15:41:27 crc kubenswrapper[4755]: I1210 15:41:27.777518 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-tls-assets-0" Dec 10 15:41:27 crc kubenswrapper[4755]: I1210 15:41:27.777831 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-cluster-tls-config" Dec 10 15:41:27 crc kubenswrapper[4755]: I1210 15:41:27.778044 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-alertmanager-dockercfg-l7qq6" Dec 10 15:41:27 crc kubenswrapper[4755]: I1210 15:41:27.778272 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-web-config" Dec 10 15:41:27 crc kubenswrapper[4755]: I1210 15:41:27.794277 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Dec 10 15:41:27 crc kubenswrapper[4755]: I1210 15:41:27.892521 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vvsn\" (UniqueName: \"kubernetes.io/projected/376461c9-8e89-4c8c-bcef-6a873320a293-kube-api-access-5vvsn\") pod \"alertmanager-metric-storage-0\" (UID: \"376461c9-8e89-4c8c-bcef-6a873320a293\") " pod="openstack/alertmanager-metric-storage-0" Dec 10 15:41:27 crc kubenswrapper[4755]: I1210 15:41:27.892574 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/376461c9-8e89-4c8c-bcef-6a873320a293-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"376461c9-8e89-4c8c-bcef-6a873320a293\") " pod="openstack/alertmanager-metric-storage-0" Dec 10 15:41:27 crc kubenswrapper[4755]: I1210 15:41:27.892782 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/376461c9-8e89-4c8c-bcef-6a873320a293-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"376461c9-8e89-4c8c-bcef-6a873320a293\") " pod="openstack/alertmanager-metric-storage-0" Dec 10 15:41:27 crc kubenswrapper[4755]: I1210 15:41:27.892839 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/376461c9-8e89-4c8c-bcef-6a873320a293-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"376461c9-8e89-4c8c-bcef-6a873320a293\") " pod="openstack/alertmanager-metric-storage-0" Dec 10 15:41:27 crc kubenswrapper[4755]: I1210 15:41:27.892932 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/376461c9-8e89-4c8c-bcef-6a873320a293-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"376461c9-8e89-4c8c-bcef-6a873320a293\") " pod="openstack/alertmanager-metric-storage-0" Dec 10 15:41:27 crc kubenswrapper[4755]: I1210 15:41:27.892991 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/376461c9-8e89-4c8c-bcef-6a873320a293-config-out\") pod \"alertmanager-metric-storage-0\" 
(UID: \"376461c9-8e89-4c8c-bcef-6a873320a293\") " pod="openstack/alertmanager-metric-storage-0" Dec 10 15:41:27 crc kubenswrapper[4755]: I1210 15:41:27.893192 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/376461c9-8e89-4c8c-bcef-6a873320a293-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"376461c9-8e89-4c8c-bcef-6a873320a293\") " pod="openstack/alertmanager-metric-storage-0" Dec 10 15:41:27 crc kubenswrapper[4755]: I1210 15:41:27.994405 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vvsn\" (UniqueName: \"kubernetes.io/projected/376461c9-8e89-4c8c-bcef-6a873320a293-kube-api-access-5vvsn\") pod \"alertmanager-metric-storage-0\" (UID: \"376461c9-8e89-4c8c-bcef-6a873320a293\") " pod="openstack/alertmanager-metric-storage-0" Dec 10 15:41:27 crc kubenswrapper[4755]: I1210 15:41:27.995419 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/376461c9-8e89-4c8c-bcef-6a873320a293-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"376461c9-8e89-4c8c-bcef-6a873320a293\") " pod="openstack/alertmanager-metric-storage-0" Dec 10 15:41:27 crc kubenswrapper[4755]: I1210 15:41:27.995646 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/376461c9-8e89-4c8c-bcef-6a873320a293-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"376461c9-8e89-4c8c-bcef-6a873320a293\") " pod="openstack/alertmanager-metric-storage-0" Dec 10 15:41:27 crc kubenswrapper[4755]: I1210 15:41:27.995753 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/376461c9-8e89-4c8c-bcef-6a873320a293-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"376461c9-8e89-4c8c-bcef-6a873320a293\") " pod="openstack/alertmanager-metric-storage-0" Dec 10 15:41:27 crc kubenswrapper[4755]: I1210 15:41:27.995896 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/376461c9-8e89-4c8c-bcef-6a873320a293-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"376461c9-8e89-4c8c-bcef-6a873320a293\") " pod="openstack/alertmanager-metric-storage-0" Dec 10 15:41:27 crc kubenswrapper[4755]: I1210 15:41:27.996005 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/376461c9-8e89-4c8c-bcef-6a873320a293-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"376461c9-8e89-4c8c-bcef-6a873320a293\") " pod="openstack/alertmanager-metric-storage-0" Dec 10 15:41:27 crc kubenswrapper[4755]: I1210 15:41:27.996258 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/376461c9-8e89-4c8c-bcef-6a873320a293-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"376461c9-8e89-4c8c-bcef-6a873320a293\") " pod="openstack/alertmanager-metric-storage-0" Dec 10 15:41:27 crc kubenswrapper[4755]: I1210 15:41:27.998114 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/376461c9-8e89-4c8c-bcef-6a873320a293-alertmanager-metric-storage-db\") pod 
\"alertmanager-metric-storage-0\" (UID: \"376461c9-8e89-4c8c-bcef-6a873320a293\") " pod="openstack/alertmanager-metric-storage-0" Dec 10 15:41:28 crc kubenswrapper[4755]: I1210 15:41:28.000888 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/376461c9-8e89-4c8c-bcef-6a873320a293-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"376461c9-8e89-4c8c-bcef-6a873320a293\") " pod="openstack/alertmanager-metric-storage-0" Dec 10 15:41:28 crc kubenswrapper[4755]: I1210 15:41:28.000979 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/376461c9-8e89-4c8c-bcef-6a873320a293-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"376461c9-8e89-4c8c-bcef-6a873320a293\") " pod="openstack/alertmanager-metric-storage-0" Dec 10 15:41:28 crc kubenswrapper[4755]: I1210 15:41:28.001441 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/376461c9-8e89-4c8c-bcef-6a873320a293-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"376461c9-8e89-4c8c-bcef-6a873320a293\") " pod="openstack/alertmanager-metric-storage-0" Dec 10 15:41:28 crc kubenswrapper[4755]: I1210 15:41:28.002789 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/376461c9-8e89-4c8c-bcef-6a873320a293-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"376461c9-8e89-4c8c-bcef-6a873320a293\") " pod="openstack/alertmanager-metric-storage-0" Dec 10 15:41:28 crc kubenswrapper[4755]: I1210 15:41:28.015678 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/376461c9-8e89-4c8c-bcef-6a873320a293-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"376461c9-8e89-4c8c-bcef-6a873320a293\") " pod="openstack/alertmanager-metric-storage-0" Dec 10 15:41:28 crc kubenswrapper[4755]: I1210 15:41:28.020346 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vvsn\" (UniqueName: \"kubernetes.io/projected/376461c9-8e89-4c8c-bcef-6a873320a293-kube-api-access-5vvsn\") pod \"alertmanager-metric-storage-0\" (UID: \"376461c9-8e89-4c8c-bcef-6a873320a293\") " pod="openstack/alertmanager-metric-storage-0" Dec 10 15:41:28 crc kubenswrapper[4755]: I1210 15:41:28.109086 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Dec 10 15:41:28 crc kubenswrapper[4755]: I1210 15:41:28.369806 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 10 15:41:28 crc kubenswrapper[4755]: I1210 15:41:28.371897 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Dec 10 15:41:28 crc kubenswrapper[4755]: I1210 15:41:28.379764 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Dec 10 15:41:28 crc kubenswrapper[4755]: I1210 15:41:28.379865 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Dec 10 15:41:28 crc kubenswrapper[4755]: I1210 15:41:28.379972 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Dec 10 15:41:28 crc kubenswrapper[4755]: I1210 15:41:28.380058 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Dec 10 15:41:28 crc kubenswrapper[4755]: I1210 15:41:28.380114 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Dec 10 15:41:28 crc kubenswrapper[4755]: I1210 15:41:28.380278 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-gw2gg" Dec 10 15:41:28 crc kubenswrapper[4755]: I1210 15:41:28.390973 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 10 15:41:28 crc kubenswrapper[4755]: I1210 15:41:28.507002 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/251dc547-e1a7-418e-95fd-6b7e8e5c5d35-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"251dc547-e1a7-418e-95fd-6b7e8e5c5d35\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:41:28 crc kubenswrapper[4755]: I1210 15:41:28.507066 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/251dc547-e1a7-418e-95fd-6b7e8e5c5d35-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"251dc547-e1a7-418e-95fd-6b7e8e5c5d35\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:41:28 crc kubenswrapper[4755]: I1210 15:41:28.507101 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/251dc547-e1a7-418e-95fd-6b7e8e5c5d35-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"251dc547-e1a7-418e-95fd-6b7e8e5c5d35\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:41:28 crc kubenswrapper[4755]: I1210 15:41:28.507137 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54p6w\" (UniqueName: \"kubernetes.io/projected/251dc547-e1a7-418e-95fd-6b7e8e5c5d35-kube-api-access-54p6w\") pod \"prometheus-metric-storage-0\" (UID: \"251dc547-e1a7-418e-95fd-6b7e8e5c5d35\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:41:28 crc kubenswrapper[4755]: I1210 15:41:28.507404 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-79f2f86d-f9f7-48cb-8d8e-8ad3ded58407\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-79f2f86d-f9f7-48cb-8d8e-8ad3ded58407\") pod \"prometheus-metric-storage-0\" (UID: \"251dc547-e1a7-418e-95fd-6b7e8e5c5d35\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:41:28 crc kubenswrapper[4755]: I1210 
15:41:28.507490 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/251dc547-e1a7-418e-95fd-6b7e8e5c5d35-config\") pod \"prometheus-metric-storage-0\" (UID: \"251dc547-e1a7-418e-95fd-6b7e8e5c5d35\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:41:28 crc kubenswrapper[4755]: I1210 15:41:28.507534 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/251dc547-e1a7-418e-95fd-6b7e8e5c5d35-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"251dc547-e1a7-418e-95fd-6b7e8e5c5d35\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:41:28 crc kubenswrapper[4755]: I1210 15:41:28.507601 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/251dc547-e1a7-418e-95fd-6b7e8e5c5d35-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"251dc547-e1a7-418e-95fd-6b7e8e5c5d35\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:41:28 crc kubenswrapper[4755]: I1210 15:41:28.608817 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/251dc547-e1a7-418e-95fd-6b7e8e5c5d35-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"251dc547-e1a7-418e-95fd-6b7e8e5c5d35\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:41:28 crc kubenswrapper[4755]: I1210 15:41:28.608907 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/251dc547-e1a7-418e-95fd-6b7e8e5c5d35-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"251dc547-e1a7-418e-95fd-6b7e8e5c5d35\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:41:28 crc kubenswrapper[4755]: I1210 15:41:28.608946 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/251dc547-e1a7-418e-95fd-6b7e8e5c5d35-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"251dc547-e1a7-418e-95fd-6b7e8e5c5d35\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:41:28 crc kubenswrapper[4755]: I1210 15:41:28.608977 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/251dc547-e1a7-418e-95fd-6b7e8e5c5d35-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"251dc547-e1a7-418e-95fd-6b7e8e5c5d35\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:41:28 crc kubenswrapper[4755]: I1210 15:41:28.609021 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54p6w\" (UniqueName: \"kubernetes.io/projected/251dc547-e1a7-418e-95fd-6b7e8e5c5d35-kube-api-access-54p6w\") pod \"prometheus-metric-storage-0\" (UID: \"251dc547-e1a7-418e-95fd-6b7e8e5c5d35\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:41:28 crc kubenswrapper[4755]: I1210 15:41:28.609055 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-79f2f86d-f9f7-48cb-8d8e-8ad3ded58407\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-79f2f86d-f9f7-48cb-8d8e-8ad3ded58407\") pod \"prometheus-metric-storage-0\" (UID: \"251dc547-e1a7-418e-95fd-6b7e8e5c5d35\") " 
pod="openstack/prometheus-metric-storage-0" Dec 10 15:41:28 crc kubenswrapper[4755]: I1210 15:41:28.609112 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/251dc547-e1a7-418e-95fd-6b7e8e5c5d35-config\") pod \"prometheus-metric-storage-0\" (UID: \"251dc547-e1a7-418e-95fd-6b7e8e5c5d35\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:41:28 crc kubenswrapper[4755]: I1210 15:41:28.609159 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/251dc547-e1a7-418e-95fd-6b7e8e5c5d35-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"251dc547-e1a7-418e-95fd-6b7e8e5c5d35\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:41:28 crc kubenswrapper[4755]: I1210 15:41:28.609913 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/251dc547-e1a7-418e-95fd-6b7e8e5c5d35-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"251dc547-e1a7-418e-95fd-6b7e8e5c5d35\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:41:28 crc kubenswrapper[4755]: I1210 15:41:28.613753 4755 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 10 15:41:28 crc kubenswrapper[4755]: I1210 15:41:28.613788 4755 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-79f2f86d-f9f7-48cb-8d8e-8ad3ded58407\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-79f2f86d-f9f7-48cb-8d8e-8ad3ded58407\") pod \"prometheus-metric-storage-0\" (UID: \"251dc547-e1a7-418e-95fd-6b7e8e5c5d35\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d5e41e57ba6a2e605b0cf3bdcb01431c97f507b808900a5d6e4da4950adfc002/globalmount\"" pod="openstack/prometheus-metric-storage-0" Dec 10 15:41:28 crc kubenswrapper[4755]: I1210 15:41:28.637808 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54p6w\" (UniqueName: \"kubernetes.io/projected/251dc547-e1a7-418e-95fd-6b7e8e5c5d35-kube-api-access-54p6w\") pod \"prometheus-metric-storage-0\" (UID: \"251dc547-e1a7-418e-95fd-6b7e8e5c5d35\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:41:28 crc kubenswrapper[4755]: I1210 15:41:28.641710 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/251dc547-e1a7-418e-95fd-6b7e8e5c5d35-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"251dc547-e1a7-418e-95fd-6b7e8e5c5d35\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:41:28 crc kubenswrapper[4755]: I1210 15:41:28.642117 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/251dc547-e1a7-418e-95fd-6b7e8e5c5d35-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"251dc547-e1a7-418e-95fd-6b7e8e5c5d35\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:41:28 crc kubenswrapper[4755]: I1210 15:41:28.642200 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/251dc547-e1a7-418e-95fd-6b7e8e5c5d35-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"251dc547-e1a7-418e-95fd-6b7e8e5c5d35\") " 
pod="openstack/prometheus-metric-storage-0" Dec 10 15:41:28 crc kubenswrapper[4755]: I1210 15:41:28.642246 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/251dc547-e1a7-418e-95fd-6b7e8e5c5d35-config\") pod \"prometheus-metric-storage-0\" (UID: \"251dc547-e1a7-418e-95fd-6b7e8e5c5d35\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:41:28 crc kubenswrapper[4755]: I1210 15:41:28.645171 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/251dc547-e1a7-418e-95fd-6b7e8e5c5d35-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"251dc547-e1a7-418e-95fd-6b7e8e5c5d35\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:41:28 crc kubenswrapper[4755]: I1210 15:41:28.674279 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-79f2f86d-f9f7-48cb-8d8e-8ad3ded58407\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-79f2f86d-f9f7-48cb-8d8e-8ad3ded58407\") pod \"prometheus-metric-storage-0\" (UID: \"251dc547-e1a7-418e-95fd-6b7e8e5c5d35\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:41:28 crc kubenswrapper[4755]: I1210 15:41:28.697639 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Dec 10 15:41:32 crc kubenswrapper[4755]: I1210 15:41:32.049249 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-q6n4p"] Dec 10 15:41:32 crc kubenswrapper[4755]: I1210 15:41:32.051110 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-q6n4p" Dec 10 15:41:32 crc kubenswrapper[4755]: I1210 15:41:32.054945 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Dec 10 15:41:32 crc kubenswrapper[4755]: I1210 15:41:32.056019 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Dec 10 15:41:32 crc kubenswrapper[4755]: I1210 15:41:32.056198 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-jf5gj" Dec 10 15:41:32 crc kubenswrapper[4755]: I1210 15:41:32.066147 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-x972h"] Dec 10 15:41:32 crc kubenswrapper[4755]: I1210 15:41:32.066820 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/46b6df85-96b1-4583-a80f-97a5d980cc72-var-log-ovn\") pod \"ovn-controller-q6n4p\" (UID: \"46b6df85-96b1-4583-a80f-97a5d980cc72\") " pod="openstack/ovn-controller-q6n4p" Dec 10 15:41:32 crc kubenswrapper[4755]: I1210 15:41:32.066861 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/46b6df85-96b1-4583-a80f-97a5d980cc72-var-run\") pod \"ovn-controller-q6n4p\" (UID: \"46b6df85-96b1-4583-a80f-97a5d980cc72\") " pod="openstack/ovn-controller-q6n4p" Dec 10 15:41:32 crc kubenswrapper[4755]: I1210 15:41:32.066884 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/46b6df85-96b1-4583-a80f-97a5d980cc72-scripts\") pod \"ovn-controller-q6n4p\" (UID: \"46b6df85-96b1-4583-a80f-97a5d980cc72\") " pod="openstack/ovn-controller-q6n4p" Dec 10 15:41:32 crc 
kubenswrapper[4755]: I1210 15:41:32.066905 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kgf8\" (UniqueName: \"kubernetes.io/projected/46b6df85-96b1-4583-a80f-97a5d980cc72-kube-api-access-7kgf8\") pod \"ovn-controller-q6n4p\" (UID: \"46b6df85-96b1-4583-a80f-97a5d980cc72\") " pod="openstack/ovn-controller-q6n4p" Dec 10 15:41:32 crc kubenswrapper[4755]: I1210 15:41:32.066945 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/46b6df85-96b1-4583-a80f-97a5d980cc72-ovn-controller-tls-certs\") pod \"ovn-controller-q6n4p\" (UID: \"46b6df85-96b1-4583-a80f-97a5d980cc72\") " pod="openstack/ovn-controller-q6n4p" Dec 10 15:41:32 crc kubenswrapper[4755]: I1210 15:41:32.066989 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/46b6df85-96b1-4583-a80f-97a5d980cc72-var-run-ovn\") pod \"ovn-controller-q6n4p\" (UID: \"46b6df85-96b1-4583-a80f-97a5d980cc72\") " pod="openstack/ovn-controller-q6n4p" Dec 10 15:41:32 crc kubenswrapper[4755]: I1210 15:41:32.067045 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46b6df85-96b1-4583-a80f-97a5d980cc72-combined-ca-bundle\") pod \"ovn-controller-q6n4p\" (UID: \"46b6df85-96b1-4583-a80f-97a5d980cc72\") " pod="openstack/ovn-controller-q6n4p" Dec 10 15:41:32 crc kubenswrapper[4755]: I1210 15:41:32.068225 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-x972h" Dec 10 15:41:32 crc kubenswrapper[4755]: I1210 15:41:32.078686 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-q6n4p"] Dec 10 15:41:32 crc kubenswrapper[4755]: I1210 15:41:32.111954 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-x972h"] Dec 10 15:41:32 crc kubenswrapper[4755]: I1210 15:41:32.168200 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/46b6df85-96b1-4583-a80f-97a5d980cc72-var-log-ovn\") pod \"ovn-controller-q6n4p\" (UID: \"46b6df85-96b1-4583-a80f-97a5d980cc72\") " pod="openstack/ovn-controller-q6n4p" Dec 10 15:41:32 crc kubenswrapper[4755]: I1210 15:41:32.168254 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/46b6df85-96b1-4583-a80f-97a5d980cc72-var-run\") pod \"ovn-controller-q6n4p\" (UID: \"46b6df85-96b1-4583-a80f-97a5d980cc72\") " pod="openstack/ovn-controller-q6n4p" Dec 10 15:41:32 crc kubenswrapper[4755]: I1210 15:41:32.168279 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/46b6df85-96b1-4583-a80f-97a5d980cc72-scripts\") pod \"ovn-controller-q6n4p\" (UID: \"46b6df85-96b1-4583-a80f-97a5d980cc72\") " pod="openstack/ovn-controller-q6n4p" Dec 10 15:41:32 crc kubenswrapper[4755]: I1210 15:41:32.168293 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kgf8\" (UniqueName: \"kubernetes.io/projected/46b6df85-96b1-4583-a80f-97a5d980cc72-kube-api-access-7kgf8\") pod \"ovn-controller-q6n4p\" (UID: \"46b6df85-96b1-4583-a80f-97a5d980cc72\") " pod="openstack/ovn-controller-q6n4p" Dec 10 
15:41:32 crc kubenswrapper[4755]: I1210 15:41:32.168324 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/46b6df85-96b1-4583-a80f-97a5d980cc72-ovn-controller-tls-certs\") pod \"ovn-controller-q6n4p\" (UID: \"46b6df85-96b1-4583-a80f-97a5d980cc72\") " pod="openstack/ovn-controller-q6n4p" Dec 10 15:41:32 crc kubenswrapper[4755]: I1210 15:41:32.168347 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7b79f2f6-2414-4403-8c2e-b58f114d941a-var-run\") pod \"ovn-controller-ovs-x972h\" (UID: \"7b79f2f6-2414-4403-8c2e-b58f114d941a\") " pod="openstack/ovn-controller-ovs-x972h" Dec 10 15:41:32 crc kubenswrapper[4755]: I1210 15:41:32.168369 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/7b79f2f6-2414-4403-8c2e-b58f114d941a-etc-ovs\") pod \"ovn-controller-ovs-x972h\" (UID: \"7b79f2f6-2414-4403-8c2e-b58f114d941a\") " pod="openstack/ovn-controller-ovs-x972h" Dec 10 15:41:32 crc kubenswrapper[4755]: I1210 15:41:32.168390 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/46b6df85-96b1-4583-a80f-97a5d980cc72-var-run-ovn\") pod \"ovn-controller-q6n4p\" (UID: \"46b6df85-96b1-4583-a80f-97a5d980cc72\") " pod="openstack/ovn-controller-q6n4p" Dec 10 15:41:32 crc kubenswrapper[4755]: I1210 15:41:32.168408 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/7b79f2f6-2414-4403-8c2e-b58f114d941a-var-lib\") pod \"ovn-controller-ovs-x972h\" (UID: \"7b79f2f6-2414-4403-8c2e-b58f114d941a\") " pod="openstack/ovn-controller-ovs-x972h" Dec 10 15:41:32 crc kubenswrapper[4755]: I1210 15:41:32.168457 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7b79f2f6-2414-4403-8c2e-b58f114d941a-scripts\") pod \"ovn-controller-ovs-x972h\" (UID: \"7b79f2f6-2414-4403-8c2e-b58f114d941a\") " pod="openstack/ovn-controller-ovs-x972h" Dec 10 15:41:32 crc kubenswrapper[4755]: I1210 15:41:32.168495 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx5gh\" (UniqueName: \"kubernetes.io/projected/7b79f2f6-2414-4403-8c2e-b58f114d941a-kube-api-access-gx5gh\") pod \"ovn-controller-ovs-x972h\" (UID: \"7b79f2f6-2414-4403-8c2e-b58f114d941a\") " pod="openstack/ovn-controller-ovs-x972h" Dec 10 15:41:32 crc kubenswrapper[4755]: I1210 15:41:32.168524 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/7b79f2f6-2414-4403-8c2e-b58f114d941a-var-log\") pod \"ovn-controller-ovs-x972h\" (UID: \"7b79f2f6-2414-4403-8c2e-b58f114d941a\") " pod="openstack/ovn-controller-ovs-x972h" Dec 10 15:41:32 crc kubenswrapper[4755]: I1210 15:41:32.168550 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46b6df85-96b1-4583-a80f-97a5d980cc72-combined-ca-bundle\") pod \"ovn-controller-q6n4p\" (UID: \"46b6df85-96b1-4583-a80f-97a5d980cc72\") " pod="openstack/ovn-controller-q6n4p" Dec 10 15:41:32 crc kubenswrapper[4755]: I1210 15:41:32.168755 4755 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/46b6df85-96b1-4583-a80f-97a5d980cc72-var-log-ovn\") pod \"ovn-controller-q6n4p\" (UID: \"46b6df85-96b1-4583-a80f-97a5d980cc72\") " pod="openstack/ovn-controller-q6n4p" Dec 10 15:41:32 crc kubenswrapper[4755]: I1210 15:41:32.168924 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/46b6df85-96b1-4583-a80f-97a5d980cc72-var-run\") pod \"ovn-controller-q6n4p\" (UID: \"46b6df85-96b1-4583-a80f-97a5d980cc72\") " pod="openstack/ovn-controller-q6n4p" Dec 10 15:41:32 crc kubenswrapper[4755]: I1210 15:41:32.169516 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/46b6df85-96b1-4583-a80f-97a5d980cc72-var-run-ovn\") pod \"ovn-controller-q6n4p\" (UID: \"46b6df85-96b1-4583-a80f-97a5d980cc72\") " pod="openstack/ovn-controller-q6n4p" Dec 10 15:41:32 crc kubenswrapper[4755]: I1210 15:41:32.171896 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/46b6df85-96b1-4583-a80f-97a5d980cc72-scripts\") pod \"ovn-controller-q6n4p\" (UID: \"46b6df85-96b1-4583-a80f-97a5d980cc72\") " pod="openstack/ovn-controller-q6n4p" Dec 10 15:41:32 crc kubenswrapper[4755]: I1210 15:41:32.174632 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/46b6df85-96b1-4583-a80f-97a5d980cc72-ovn-controller-tls-certs\") pod \"ovn-controller-q6n4p\" (UID: \"46b6df85-96b1-4583-a80f-97a5d980cc72\") " pod="openstack/ovn-controller-q6n4p" Dec 10 15:41:32 crc kubenswrapper[4755]: I1210 15:41:32.174797 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46b6df85-96b1-4583-a80f-97a5d980cc72-combined-ca-bundle\") pod \"ovn-controller-q6n4p\" (UID: \"46b6df85-96b1-4583-a80f-97a5d980cc72\") " pod="openstack/ovn-controller-q6n4p" Dec 10 15:41:32 crc kubenswrapper[4755]: I1210 15:41:32.185299 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kgf8\" (UniqueName: \"kubernetes.io/projected/46b6df85-96b1-4583-a80f-97a5d980cc72-kube-api-access-7kgf8\") pod \"ovn-controller-q6n4p\" (UID: \"46b6df85-96b1-4583-a80f-97a5d980cc72\") " pod="openstack/ovn-controller-q6n4p" Dec 10 15:41:32 crc kubenswrapper[4755]: I1210 15:41:32.269567 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/7b79f2f6-2414-4403-8c2e-b58f114d941a-var-log\") pod \"ovn-controller-ovs-x972h\" (UID: \"7b79f2f6-2414-4403-8c2e-b58f114d941a\") " pod="openstack/ovn-controller-ovs-x972h" Dec 10 15:41:32 crc kubenswrapper[4755]: I1210 15:41:32.269718 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7b79f2f6-2414-4403-8c2e-b58f114d941a-var-run\") pod \"ovn-controller-ovs-x972h\" (UID: \"7b79f2f6-2414-4403-8c2e-b58f114d941a\") " pod="openstack/ovn-controller-ovs-x972h" Dec 10 15:41:32 crc kubenswrapper[4755]: I1210 15:41:32.269754 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/7b79f2f6-2414-4403-8c2e-b58f114d941a-etc-ovs\") pod \"ovn-controller-ovs-x972h\" (UID: \"7b79f2f6-2414-4403-8c2e-b58f114d941a\") " 
pod="openstack/ovn-controller-ovs-x972h" Dec 10 15:41:32 crc kubenswrapper[4755]: I1210 15:41:32.269781 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/7b79f2f6-2414-4403-8c2e-b58f114d941a-var-lib\") pod \"ovn-controller-ovs-x972h\" (UID: \"7b79f2f6-2414-4403-8c2e-b58f114d941a\") " pod="openstack/ovn-controller-ovs-x972h" Dec 10 15:41:32 crc kubenswrapper[4755]: I1210 15:41:32.269815 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7b79f2f6-2414-4403-8c2e-b58f114d941a-scripts\") pod \"ovn-controller-ovs-x972h\" (UID: \"7b79f2f6-2414-4403-8c2e-b58f114d941a\") " pod="openstack/ovn-controller-ovs-x972h" Dec 10 15:41:32 crc kubenswrapper[4755]: I1210 15:41:32.269840 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gx5gh\" (UniqueName: \"kubernetes.io/projected/7b79f2f6-2414-4403-8c2e-b58f114d941a-kube-api-access-gx5gh\") pod \"ovn-controller-ovs-x972h\" (UID: \"7b79f2f6-2414-4403-8c2e-b58f114d941a\") " pod="openstack/ovn-controller-ovs-x972h" Dec 10 15:41:32 crc kubenswrapper[4755]: I1210 15:41:32.269898 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/7b79f2f6-2414-4403-8c2e-b58f114d941a-var-log\") pod \"ovn-controller-ovs-x972h\" (UID: \"7b79f2f6-2414-4403-8c2e-b58f114d941a\") " pod="openstack/ovn-controller-ovs-x972h" Dec 10 15:41:32 crc kubenswrapper[4755]: I1210 15:41:32.269913 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7b79f2f6-2414-4403-8c2e-b58f114d941a-var-run\") pod \"ovn-controller-ovs-x972h\" (UID: \"7b79f2f6-2414-4403-8c2e-b58f114d941a\") " pod="openstack/ovn-controller-ovs-x972h" Dec 10 15:41:32 crc kubenswrapper[4755]: I1210 15:41:32.270118 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/7b79f2f6-2414-4403-8c2e-b58f114d941a-etc-ovs\") pod \"ovn-controller-ovs-x972h\" (UID: \"7b79f2f6-2414-4403-8c2e-b58f114d941a\") " pod="openstack/ovn-controller-ovs-x972h" Dec 10 15:41:32 crc kubenswrapper[4755]: I1210 15:41:32.270133 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/7b79f2f6-2414-4403-8c2e-b58f114d941a-var-lib\") pod \"ovn-controller-ovs-x972h\" (UID: \"7b79f2f6-2414-4403-8c2e-b58f114d941a\") " pod="openstack/ovn-controller-ovs-x972h" Dec 10 15:41:32 crc kubenswrapper[4755]: I1210 15:41:32.272135 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7b79f2f6-2414-4403-8c2e-b58f114d941a-scripts\") pod \"ovn-controller-ovs-x972h\" (UID: \"7b79f2f6-2414-4403-8c2e-b58f114d941a\") " pod="openstack/ovn-controller-ovs-x972h" Dec 10 15:41:32 crc kubenswrapper[4755]: I1210 15:41:32.289214 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gx5gh\" (UniqueName: \"kubernetes.io/projected/7b79f2f6-2414-4403-8c2e-b58f114d941a-kube-api-access-gx5gh\") pod \"ovn-controller-ovs-x972h\" (UID: \"7b79f2f6-2414-4403-8c2e-b58f114d941a\") " pod="openstack/ovn-controller-ovs-x972h" Dec 10 15:41:32 crc kubenswrapper[4755]: I1210 15:41:32.378636 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-q6n4p" Dec 10 15:41:32 crc kubenswrapper[4755]: I1210 15:41:32.402365 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-x972h" Dec 10 15:41:32 crc kubenswrapper[4755]: I1210 15:41:32.938002 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 10 15:41:32 crc kubenswrapper[4755]: I1210 15:41:32.939801 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 10 15:41:32 crc kubenswrapper[4755]: I1210 15:41:32.944240 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Dec 10 15:41:32 crc kubenswrapper[4755]: I1210 15:41:32.944294 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Dec 10 15:41:32 crc kubenswrapper[4755]: I1210 15:41:32.944419 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Dec 10 15:41:32 crc kubenswrapper[4755]: I1210 15:41:32.944623 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-q76rf" Dec 10 15:41:32 crc kubenswrapper[4755]: I1210 15:41:32.944743 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Dec 10 15:41:32 crc kubenswrapper[4755]: I1210 15:41:32.974439 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 10 15:41:33 crc kubenswrapper[4755]: I1210 15:41:33.082853 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d586c26d-c444-4202-b286-522cfc372f16-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"d586c26d-c444-4202-b286-522cfc372f16\") " pod="openstack/ovsdbserver-sb-0" Dec 10 15:41:33 crc kubenswrapper[4755]: I1210 15:41:33.083044 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d586c26d-c444-4202-b286-522cfc372f16-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"d586c26d-c444-4202-b286-522cfc372f16\") " pod="openstack/ovsdbserver-sb-0" Dec 10 15:41:33 crc kubenswrapper[4755]: I1210 15:41:33.083115 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d586c26d-c444-4202-b286-522cfc372f16-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"d586c26d-c444-4202-b286-522cfc372f16\") " pod="openstack/ovsdbserver-sb-0" Dec 10 15:41:33 crc kubenswrapper[4755]: I1210 15:41:33.083212 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d586c26d-c444-4202-b286-522cfc372f16-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"d586c26d-c444-4202-b286-522cfc372f16\") " pod="openstack/ovsdbserver-sb-0" Dec 10 15:41:33 crc kubenswrapper[4755]: I1210 15:41:33.083277 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tccl\" (UniqueName: \"kubernetes.io/projected/d586c26d-c444-4202-b286-522cfc372f16-kube-api-access-2tccl\") pod \"ovsdbserver-sb-0\" (UID: \"d586c26d-c444-4202-b286-522cfc372f16\") " pod="openstack/ovsdbserver-sb-0" Dec 10 15:41:33 crc 
kubenswrapper[4755]: I1210 15:41:33.083299 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d586c26d-c444-4202-b286-522cfc372f16-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"d586c26d-c444-4202-b286-522cfc372f16\") " pod="openstack/ovsdbserver-sb-0" Dec 10 15:41:33 crc kubenswrapper[4755]: I1210 15:41:33.083352 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d586c26d-c444-4202-b286-522cfc372f16-config\") pod \"ovsdbserver-sb-0\" (UID: \"d586c26d-c444-4202-b286-522cfc372f16\") " pod="openstack/ovsdbserver-sb-0" Dec 10 15:41:33 crc kubenswrapper[4755]: I1210 15:41:33.083428 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-40139bd9-935c-4387-88b6-272728cfe3e6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-40139bd9-935c-4387-88b6-272728cfe3e6\") pod \"ovsdbserver-sb-0\" (UID: \"d586c26d-c444-4202-b286-522cfc372f16\") " pod="openstack/ovsdbserver-sb-0" Dec 10 15:41:33 crc kubenswrapper[4755]: I1210 15:41:33.185402 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d586c26d-c444-4202-b286-522cfc372f16-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"d586c26d-c444-4202-b286-522cfc372f16\") " pod="openstack/ovsdbserver-sb-0" Dec 10 15:41:33 crc kubenswrapper[4755]: I1210 15:41:33.185539 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d586c26d-c444-4202-b286-522cfc372f16-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"d586c26d-c444-4202-b286-522cfc372f16\") " pod="openstack/ovsdbserver-sb-0" Dec 10 15:41:33 crc kubenswrapper[4755]: I1210 15:41:33.185594 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tccl\" (UniqueName: \"kubernetes.io/projected/d586c26d-c444-4202-b286-522cfc372f16-kube-api-access-2tccl\") pod \"ovsdbserver-sb-0\" (UID: \"d586c26d-c444-4202-b286-522cfc372f16\") " pod="openstack/ovsdbserver-sb-0" Dec 10 15:41:33 crc kubenswrapper[4755]: I1210 15:41:33.185624 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d586c26d-c444-4202-b286-522cfc372f16-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"d586c26d-c444-4202-b286-522cfc372f16\") " pod="openstack/ovsdbserver-sb-0" Dec 10 15:41:33 crc kubenswrapper[4755]: I1210 15:41:33.185674 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d586c26d-c444-4202-b286-522cfc372f16-config\") pod \"ovsdbserver-sb-0\" (UID: \"d586c26d-c444-4202-b286-522cfc372f16\") " pod="openstack/ovsdbserver-sb-0" Dec 10 15:41:33 crc kubenswrapper[4755]: I1210 15:41:33.185712 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-40139bd9-935c-4387-88b6-272728cfe3e6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-40139bd9-935c-4387-88b6-272728cfe3e6\") pod \"ovsdbserver-sb-0\" (UID: \"d586c26d-c444-4202-b286-522cfc372f16\") " pod="openstack/ovsdbserver-sb-0" Dec 10 15:41:33 crc kubenswrapper[4755]: I1210 15:41:33.185750 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d586c26d-c444-4202-b286-522cfc372f16-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"d586c26d-c444-4202-b286-522cfc372f16\") " pod="openstack/ovsdbserver-sb-0" Dec 10 15:41:33 crc kubenswrapper[4755]: I1210 15:41:33.185817 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d586c26d-c444-4202-b286-522cfc372f16-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"d586c26d-c444-4202-b286-522cfc372f16\") " pod="openstack/ovsdbserver-sb-0" Dec 10 15:41:33 crc kubenswrapper[4755]: I1210 15:41:33.190154 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d586c26d-c444-4202-b286-522cfc372f16-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"d586c26d-c444-4202-b286-522cfc372f16\") " pod="openstack/ovsdbserver-sb-0" Dec 10 15:41:33 crc kubenswrapper[4755]: I1210 15:41:33.190861 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d586c26d-c444-4202-b286-522cfc372f16-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"d586c26d-c444-4202-b286-522cfc372f16\") " pod="openstack/ovsdbserver-sb-0" Dec 10 15:41:33 crc kubenswrapper[4755]: I1210 15:41:33.190935 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d586c26d-c444-4202-b286-522cfc372f16-config\") pod \"ovsdbserver-sb-0\" (UID: \"d586c26d-c444-4202-b286-522cfc372f16\") " pod="openstack/ovsdbserver-sb-0" Dec 10 15:41:33 crc kubenswrapper[4755]: I1210 15:41:33.195522 4755 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 10 15:41:33 crc kubenswrapper[4755]: I1210 15:41:33.195555 4755 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-40139bd9-935c-4387-88b6-272728cfe3e6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-40139bd9-935c-4387-88b6-272728cfe3e6\") pod \"ovsdbserver-sb-0\" (UID: \"d586c26d-c444-4202-b286-522cfc372f16\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/700c0edad385d0c1f7d1fd9d80788126954fbb4bc6f34c11fd71f48af2c7a9ac/globalmount\"" pod="openstack/ovsdbserver-sb-0" Dec 10 15:41:33 crc kubenswrapper[4755]: I1210 15:41:33.197024 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d586c26d-c444-4202-b286-522cfc372f16-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"d586c26d-c444-4202-b286-522cfc372f16\") " pod="openstack/ovsdbserver-sb-0" Dec 10 15:41:33 crc kubenswrapper[4755]: I1210 15:41:33.197616 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d586c26d-c444-4202-b286-522cfc372f16-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"d586c26d-c444-4202-b286-522cfc372f16\") " pod="openstack/ovsdbserver-sb-0" Dec 10 15:41:33 crc kubenswrapper[4755]: I1210 15:41:33.197668 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d586c26d-c444-4202-b286-522cfc372f16-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"d586c26d-c444-4202-b286-522cfc372f16\") " pod="openstack/ovsdbserver-sb-0" Dec 10 15:41:33 crc kubenswrapper[4755]: I1210 15:41:33.212108 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tccl\" (UniqueName: \"kubernetes.io/projected/d586c26d-c444-4202-b286-522cfc372f16-kube-api-access-2tccl\") pod \"ovsdbserver-sb-0\" (UID: \"d586c26d-c444-4202-b286-522cfc372f16\") " pod="openstack/ovsdbserver-sb-0" Dec 10 15:41:33 crc kubenswrapper[4755]: I1210 15:41:33.237709 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-40139bd9-935c-4387-88b6-272728cfe3e6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-40139bd9-935c-4387-88b6-272728cfe3e6\") pod \"ovsdbserver-sb-0\" (UID: \"d586c26d-c444-4202-b286-522cfc372f16\") " pod="openstack/ovsdbserver-sb-0" Dec 10 15:41:33 crc kubenswrapper[4755]: I1210 15:41:33.270777 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 10 15:41:34 crc kubenswrapper[4755]: I1210 15:41:34.973058 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 10 15:41:34 crc kubenswrapper[4755]: I1210 15:41:34.975616 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 10 15:41:34 crc kubenswrapper[4755]: I1210 15:41:34.980317 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 10 15:41:34 crc kubenswrapper[4755]: I1210 15:41:34.982321 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Dec 10 15:41:34 crc kubenswrapper[4755]: I1210 15:41:34.982432 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Dec 10 15:41:34 crc kubenswrapper[4755]: I1210 15:41:34.982815 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Dec 10 15:41:34 crc kubenswrapper[4755]: I1210 15:41:34.982930 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-p6nvl" Dec 10 15:41:35 crc kubenswrapper[4755]: I1210 15:41:35.106982 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fbkw\" (UniqueName: \"kubernetes.io/projected/e6228d01-72c0-4088-a51d-e90dc686a41a-kube-api-access-5fbkw\") pod \"ovsdbserver-nb-0\" (UID: \"e6228d01-72c0-4088-a51d-e90dc686a41a\") " pod="openstack/ovsdbserver-nb-0" Dec 10 15:41:35 crc kubenswrapper[4755]: I1210 15:41:35.107062 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e6228d01-72c0-4088-a51d-e90dc686a41a-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"e6228d01-72c0-4088-a51d-e90dc686a41a\") " pod="openstack/ovsdbserver-nb-0" Dec 10 15:41:35 crc kubenswrapper[4755]: I1210 15:41:35.107118 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6228d01-72c0-4088-a51d-e90dc686a41a-config\") pod \"ovsdbserver-nb-0\" (UID: \"e6228d01-72c0-4088-a51d-e90dc686a41a\") " pod="openstack/ovsdbserver-nb-0" Dec 10 15:41:35 crc kubenswrapper[4755]: I1210 15:41:35.107184 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6228d01-72c0-4088-a51d-e90dc686a41a-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"e6228d01-72c0-4088-a51d-e90dc686a41a\") " pod="openstack/ovsdbserver-nb-0" Dec 10 15:41:35 crc kubenswrapper[4755]: I1210 15:41:35.107212 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6228d01-72c0-4088-a51d-e90dc686a41a-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"e6228d01-72c0-4088-a51d-e90dc686a41a\") " pod="openstack/ovsdbserver-nb-0" Dec 10 15:41:35 crc kubenswrapper[4755]: I1210 15:41:35.107253 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e6228d01-72c0-4088-a51d-e90dc686a41a-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"e6228d01-72c0-4088-a51d-e90dc686a41a\") " pod="openstack/ovsdbserver-nb-0" Dec 10 15:41:35 crc kubenswrapper[4755]: I1210 15:41:35.107385 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6228d01-72c0-4088-a51d-e90dc686a41a-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: 
\"e6228d01-72c0-4088-a51d-e90dc686a41a\") " pod="openstack/ovsdbserver-nb-0" Dec 10 15:41:35 crc kubenswrapper[4755]: I1210 15:41:35.107524 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-903083b2-5697-4370-a9c2-b668b2912f2c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-903083b2-5697-4370-a9c2-b668b2912f2c\") pod \"ovsdbserver-nb-0\" (UID: \"e6228d01-72c0-4088-a51d-e90dc686a41a\") " pod="openstack/ovsdbserver-nb-0" Dec 10 15:41:35 crc kubenswrapper[4755]: I1210 15:41:35.209537 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6228d01-72c0-4088-a51d-e90dc686a41a-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"e6228d01-72c0-4088-a51d-e90dc686a41a\") " pod="openstack/ovsdbserver-nb-0" Dec 10 15:41:35 crc kubenswrapper[4755]: I1210 15:41:35.209601 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6228d01-72c0-4088-a51d-e90dc686a41a-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"e6228d01-72c0-4088-a51d-e90dc686a41a\") " pod="openstack/ovsdbserver-nb-0" Dec 10 15:41:35 crc kubenswrapper[4755]: I1210 15:41:35.209644 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e6228d01-72c0-4088-a51d-e90dc686a41a-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"e6228d01-72c0-4088-a51d-e90dc686a41a\") " pod="openstack/ovsdbserver-nb-0" Dec 10 15:41:35 crc kubenswrapper[4755]: I1210 15:41:35.209663 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6228d01-72c0-4088-a51d-e90dc686a41a-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"e6228d01-72c0-4088-a51d-e90dc686a41a\") " pod="openstack/ovsdbserver-nb-0" Dec 10 15:41:35 crc kubenswrapper[4755]: I1210 15:41:35.209713 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-903083b2-5697-4370-a9c2-b668b2912f2c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-903083b2-5697-4370-a9c2-b668b2912f2c\") pod \"ovsdbserver-nb-0\" (UID: \"e6228d01-72c0-4088-a51d-e90dc686a41a\") " pod="openstack/ovsdbserver-nb-0" Dec 10 15:41:35 crc kubenswrapper[4755]: I1210 15:41:35.209743 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fbkw\" (UniqueName: \"kubernetes.io/projected/e6228d01-72c0-4088-a51d-e90dc686a41a-kube-api-access-5fbkw\") pod \"ovsdbserver-nb-0\" (UID: \"e6228d01-72c0-4088-a51d-e90dc686a41a\") " pod="openstack/ovsdbserver-nb-0" Dec 10 15:41:35 crc kubenswrapper[4755]: I1210 15:41:35.209777 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e6228d01-72c0-4088-a51d-e90dc686a41a-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"e6228d01-72c0-4088-a51d-e90dc686a41a\") " pod="openstack/ovsdbserver-nb-0" Dec 10 15:41:35 crc kubenswrapper[4755]: I1210 15:41:35.209817 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6228d01-72c0-4088-a51d-e90dc686a41a-config\") pod \"ovsdbserver-nb-0\" (UID: \"e6228d01-72c0-4088-a51d-e90dc686a41a\") " pod="openstack/ovsdbserver-nb-0" Dec 10 15:41:35 crc kubenswrapper[4755]: I1210 15:41:35.210760 4755 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6228d01-72c0-4088-a51d-e90dc686a41a-config\") pod \"ovsdbserver-nb-0\" (UID: \"e6228d01-72c0-4088-a51d-e90dc686a41a\") " pod="openstack/ovsdbserver-nb-0" Dec 10 15:41:35 crc kubenswrapper[4755]: I1210 15:41:35.211058 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e6228d01-72c0-4088-a51d-e90dc686a41a-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"e6228d01-72c0-4088-a51d-e90dc686a41a\") " pod="openstack/ovsdbserver-nb-0" Dec 10 15:41:35 crc kubenswrapper[4755]: I1210 15:41:35.211875 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e6228d01-72c0-4088-a51d-e90dc686a41a-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"e6228d01-72c0-4088-a51d-e90dc686a41a\") " pod="openstack/ovsdbserver-nb-0" Dec 10 15:41:35 crc kubenswrapper[4755]: I1210 15:41:35.213977 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6228d01-72c0-4088-a51d-e90dc686a41a-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"e6228d01-72c0-4088-a51d-e90dc686a41a\") " pod="openstack/ovsdbserver-nb-0" Dec 10 15:41:35 crc kubenswrapper[4755]: I1210 15:41:35.214862 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6228d01-72c0-4088-a51d-e90dc686a41a-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"e6228d01-72c0-4088-a51d-e90dc686a41a\") " pod="openstack/ovsdbserver-nb-0" Dec 10 15:41:35 crc kubenswrapper[4755]: I1210 15:41:35.221648 4755 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 10 15:41:35 crc kubenswrapper[4755]: I1210 15:41:35.221693 4755 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-903083b2-5697-4370-a9c2-b668b2912f2c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-903083b2-5697-4370-a9c2-b668b2912f2c\") pod \"ovsdbserver-nb-0\" (UID: \"e6228d01-72c0-4088-a51d-e90dc686a41a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2deb1b0df24d0e63ee65d6f27aa458d3d0d6e7c87a26765299dc36097ca88151/globalmount\"" pod="openstack/ovsdbserver-nb-0" Dec 10 15:41:35 crc kubenswrapper[4755]: I1210 15:41:35.224936 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6228d01-72c0-4088-a51d-e90dc686a41a-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"e6228d01-72c0-4088-a51d-e90dc686a41a\") " pod="openstack/ovsdbserver-nb-0" Dec 10 15:41:35 crc kubenswrapper[4755]: I1210 15:41:35.228969 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fbkw\" (UniqueName: \"kubernetes.io/projected/e6228d01-72c0-4088-a51d-e90dc686a41a-kube-api-access-5fbkw\") pod \"ovsdbserver-nb-0\" (UID: \"e6228d01-72c0-4088-a51d-e90dc686a41a\") " pod="openstack/ovsdbserver-nb-0" Dec 10 15:41:35 crc kubenswrapper[4755]: I1210 15:41:35.256395 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-903083b2-5697-4370-a9c2-b668b2912f2c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-903083b2-5697-4370-a9c2-b668b2912f2c\") pod \"ovsdbserver-nb-0\" (UID: \"e6228d01-72c0-4088-a51d-e90dc686a41a\") " pod="openstack/ovsdbserver-nb-0" Dec 10 15:41:35 crc kubenswrapper[4755]: I1210 15:41:35.299507 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 10 15:41:37 crc kubenswrapper[4755]: I1210 15:41:37.810312 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-distributor-664b687b54-qvsww"] Dec 10 15:41:37 crc kubenswrapper[4755]: I1210 15:41:37.815303 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-distributor-664b687b54-qvsww" Dec 10 15:41:37 crc kubenswrapper[4755]: I1210 15:41:37.818477 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-distributor-http" Dec 10 15:41:37 crc kubenswrapper[4755]: I1210 15:41:37.818716 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-ca-bundle" Dec 10 15:41:37 crc kubenswrapper[4755]: I1210 15:41:37.818847 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-dockercfg-m9xt8" Dec 10 15:41:37 crc kubenswrapper[4755]: I1210 15:41:37.823655 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-distributor-grpc" Dec 10 15:41:37 crc kubenswrapper[4755]: I1210 15:41:37.823820 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-config" Dec 10 15:41:37 crc kubenswrapper[4755]: I1210 15:41:37.831993 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-distributor-664b687b54-qvsww"] Dec 10 15:41:37 crc kubenswrapper[4755]: I1210 15:41:37.961619 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlp25\" (UniqueName: \"kubernetes.io/projected/ad77f530-dc0b-44ec-b4e2-c580cfe568fe-kube-api-access-hlp25\") pod \"cloudkitty-lokistack-distributor-664b687b54-qvsww\" (UID: \"ad77f530-dc0b-44ec-b4e2-c580cfe568fe\") " pod="openstack/cloudkitty-lokistack-distributor-664b687b54-qvsww" Dec 10 15:41:37 crc kubenswrapper[4755]: I1210 15:41:37.961680 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-distributor-http\" (UniqueName: \"kubernetes.io/secret/ad77f530-dc0b-44ec-b4e2-c580cfe568fe-cloudkitty-lokistack-distributor-http\") pod \"cloudkitty-lokistack-distributor-664b687b54-qvsww\" (UID: \"ad77f530-dc0b-44ec-b4e2-c580cfe568fe\") " pod="openstack/cloudkitty-lokistack-distributor-664b687b54-qvsww" Dec 10 15:41:37 crc kubenswrapper[4755]: I1210 15:41:37.961729 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad77f530-dc0b-44ec-b4e2-c580cfe568fe-config\") pod \"cloudkitty-lokistack-distributor-664b687b54-qvsww\" (UID: \"ad77f530-dc0b-44ec-b4e2-c580cfe568fe\") " pod="openstack/cloudkitty-lokistack-distributor-664b687b54-qvsww" Dec 10 15:41:37 crc kubenswrapper[4755]: I1210 15:41:37.961760 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ad77f530-dc0b-44ec-b4e2-c580cfe568fe-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-distributor-664b687b54-qvsww\" (UID: \"ad77f530-dc0b-44ec-b4e2-c580cfe568fe\") " pod="openstack/cloudkitty-lokistack-distributor-664b687b54-qvsww" Dec 10 15:41:37 crc kubenswrapper[4755]: I1210 15:41:37.961781 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/ad77f530-dc0b-44ec-b4e2-c580cfe568fe-cloudkitty-lokistack-distributor-grpc\") pod \"cloudkitty-lokistack-distributor-664b687b54-qvsww\" (UID: \"ad77f530-dc0b-44ec-b4e2-c580cfe568fe\") " pod="openstack/cloudkitty-lokistack-distributor-664b687b54-qvsww" Dec 10 
15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.041203 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-querier-5467947bf7-qpg72"] Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.046133 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-querier-5467947bf7-qpg72" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.052697 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-loki-s3" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.054438 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-querier-grpc" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.057245 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-querier-http" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.063755 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlp25\" (UniqueName: \"kubernetes.io/projected/ad77f530-dc0b-44ec-b4e2-c580cfe568fe-kube-api-access-hlp25\") pod \"cloudkitty-lokistack-distributor-664b687b54-qvsww\" (UID: \"ad77f530-dc0b-44ec-b4e2-c580cfe568fe\") " pod="openstack/cloudkitty-lokistack-distributor-664b687b54-qvsww" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.064599 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-distributor-http\" (UniqueName: \"kubernetes.io/secret/ad77f530-dc0b-44ec-b4e2-c580cfe568fe-cloudkitty-lokistack-distributor-http\") pod \"cloudkitty-lokistack-distributor-664b687b54-qvsww\" (UID: \"ad77f530-dc0b-44ec-b4e2-c580cfe568fe\") " pod="openstack/cloudkitty-lokistack-distributor-664b687b54-qvsww" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.064827 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad77f530-dc0b-44ec-b4e2-c580cfe568fe-config\") pod \"cloudkitty-lokistack-distributor-664b687b54-qvsww\" (UID: \"ad77f530-dc0b-44ec-b4e2-c580cfe568fe\") " pod="openstack/cloudkitty-lokistack-distributor-664b687b54-qvsww" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.066900 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ad77f530-dc0b-44ec-b4e2-c580cfe568fe-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-distributor-664b687b54-qvsww\" (UID: \"ad77f530-dc0b-44ec-b4e2-c580cfe568fe\") " pod="openstack/cloudkitty-lokistack-distributor-664b687b54-qvsww" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.067126 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/ad77f530-dc0b-44ec-b4e2-c580cfe568fe-cloudkitty-lokistack-distributor-grpc\") pod \"cloudkitty-lokistack-distributor-664b687b54-qvsww\" (UID: \"ad77f530-dc0b-44ec-b4e2-c580cfe568fe\") " pod="openstack/cloudkitty-lokistack-distributor-664b687b54-qvsww" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.066664 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad77f530-dc0b-44ec-b4e2-c580cfe568fe-config\") pod \"cloudkitty-lokistack-distributor-664b687b54-qvsww\" (UID: \"ad77f530-dc0b-44ec-b4e2-c580cfe568fe\") " 
pod="openstack/cloudkitty-lokistack-distributor-664b687b54-qvsww" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.066229 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-querier-5467947bf7-qpg72"] Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.068222 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ad77f530-dc0b-44ec-b4e2-c580cfe568fe-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-distributor-664b687b54-qvsww\" (UID: \"ad77f530-dc0b-44ec-b4e2-c580cfe568fe\") " pod="openstack/cloudkitty-lokistack-distributor-664b687b54-qvsww" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.079200 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-distributor-http\" (UniqueName: \"kubernetes.io/secret/ad77f530-dc0b-44ec-b4e2-c580cfe568fe-cloudkitty-lokistack-distributor-http\") pod \"cloudkitty-lokistack-distributor-664b687b54-qvsww\" (UID: \"ad77f530-dc0b-44ec-b4e2-c580cfe568fe\") " pod="openstack/cloudkitty-lokistack-distributor-664b687b54-qvsww" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.083180 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/ad77f530-dc0b-44ec-b4e2-c580cfe568fe-cloudkitty-lokistack-distributor-grpc\") pod \"cloudkitty-lokistack-distributor-664b687b54-qvsww\" (UID: \"ad77f530-dc0b-44ec-b4e2-c580cfe568fe\") " pod="openstack/cloudkitty-lokistack-distributor-664b687b54-qvsww" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.115645 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlp25\" (UniqueName: \"kubernetes.io/projected/ad77f530-dc0b-44ec-b4e2-c580cfe568fe-kube-api-access-hlp25\") pod \"cloudkitty-lokistack-distributor-664b687b54-qvsww\" (UID: \"ad77f530-dc0b-44ec-b4e2-c580cfe568fe\") " pod="openstack/cloudkitty-lokistack-distributor-664b687b54-qvsww" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.139590 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-dr466"] Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.143686 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-dr466" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.145194 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-distributor-664b687b54-qvsww" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.149849 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-query-frontend-grpc" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.150142 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-query-frontend-http" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.166891 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-dr466"] Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.170123 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9c583d4-e5d0-4c13-9989-dea15920e9e6-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-querier-5467947bf7-qpg72\" (UID: \"f9c583d4-e5d0-4c13-9989-dea15920e9e6\") " pod="openstack/cloudkitty-lokistack-querier-5467947bf7-qpg72" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.170209 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-querier-grpc\" (UniqueName: \"kubernetes.io/secret/f9c583d4-e5d0-4c13-9989-dea15920e9e6-cloudkitty-lokistack-querier-grpc\") pod \"cloudkitty-lokistack-querier-5467947bf7-qpg72\" (UID: \"f9c583d4-e5d0-4c13-9989-dea15920e9e6\") " pod="openstack/cloudkitty-lokistack-querier-5467947bf7-qpg72" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.170242 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9c583d4-e5d0-4c13-9989-dea15920e9e6-config\") pod \"cloudkitty-lokistack-querier-5467947bf7-qpg72\" (UID: \"f9c583d4-e5d0-4c13-9989-dea15920e9e6\") " pod="openstack/cloudkitty-lokistack-querier-5467947bf7-qpg72" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.170297 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqp2z\" (UniqueName: \"kubernetes.io/projected/f9c583d4-e5d0-4c13-9989-dea15920e9e6-kube-api-access-hqp2z\") pod \"cloudkitty-lokistack-querier-5467947bf7-qpg72\" (UID: \"f9c583d4-e5d0-4c13-9989-dea15920e9e6\") " pod="openstack/cloudkitty-lokistack-querier-5467947bf7-qpg72" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.170355 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-querier-http\" (UniqueName: \"kubernetes.io/secret/f9c583d4-e5d0-4c13-9989-dea15920e9e6-cloudkitty-lokistack-querier-http\") pod \"cloudkitty-lokistack-querier-5467947bf7-qpg72\" (UID: \"f9c583d4-e5d0-4c13-9989-dea15920e9e6\") " pod="openstack/cloudkitty-lokistack-querier-5467947bf7-qpg72" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.170409 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/f9c583d4-e5d0-4c13-9989-dea15920e9e6-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-querier-5467947bf7-qpg72\" (UID: \"f9c583d4-e5d0-4c13-9989-dea15920e9e6\") " pod="openstack/cloudkitty-lokistack-querier-5467947bf7-qpg72" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.280959 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cloudkitty-lokistack-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/8fbdd63a-fd88-4a37-85fb-08e7d21af574-cloudkitty-lokistack-query-frontend-http\") pod \"cloudkitty-lokistack-query-frontend-7c8cd744d9-dr466\" (UID: \"8fbdd63a-fd88-4a37-85fb-08e7d21af574\") " pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-dr466" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.281043 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/8fbdd63a-fd88-4a37-85fb-08e7d21af574-cloudkitty-lokistack-query-frontend-grpc\") pod \"cloudkitty-lokistack-query-frontend-7c8cd744d9-dr466\" (UID: \"8fbdd63a-fd88-4a37-85fb-08e7d21af574\") " pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-dr466" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.281088 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9c583d4-e5d0-4c13-9989-dea15920e9e6-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-querier-5467947bf7-qpg72\" (UID: \"f9c583d4-e5d0-4c13-9989-dea15920e9e6\") " pod="openstack/cloudkitty-lokistack-querier-5467947bf7-qpg72" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.281297 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8fbdd63a-fd88-4a37-85fb-08e7d21af574-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-query-frontend-7c8cd744d9-dr466\" (UID: \"8fbdd63a-fd88-4a37-85fb-08e7d21af574\") " pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-dr466" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.281498 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fbdd63a-fd88-4a37-85fb-08e7d21af574-config\") pod \"cloudkitty-lokistack-query-frontend-7c8cd744d9-dr466\" (UID: \"8fbdd63a-fd88-4a37-85fb-08e7d21af574\") " pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-dr466" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.281612 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-querier-grpc\" (UniqueName: \"kubernetes.io/secret/f9c583d4-e5d0-4c13-9989-dea15920e9e6-cloudkitty-lokistack-querier-grpc\") pod \"cloudkitty-lokistack-querier-5467947bf7-qpg72\" (UID: \"f9c583d4-e5d0-4c13-9989-dea15920e9e6\") " pod="openstack/cloudkitty-lokistack-querier-5467947bf7-qpg72" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.281677 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9c583d4-e5d0-4c13-9989-dea15920e9e6-config\") pod \"cloudkitty-lokistack-querier-5467947bf7-qpg72\" (UID: \"f9c583d4-e5d0-4c13-9989-dea15920e9e6\") " pod="openstack/cloudkitty-lokistack-querier-5467947bf7-qpg72" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.281869 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqp2z\" (UniqueName: \"kubernetes.io/projected/f9c583d4-e5d0-4c13-9989-dea15920e9e6-kube-api-access-hqp2z\") pod \"cloudkitty-lokistack-querier-5467947bf7-qpg72\" (UID: \"f9c583d4-e5d0-4c13-9989-dea15920e9e6\") " pod="openstack/cloudkitty-lokistack-querier-5467947bf7-qpg72" Dec 
10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.281948 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfj4t\" (UniqueName: \"kubernetes.io/projected/8fbdd63a-fd88-4a37-85fb-08e7d21af574-kube-api-access-cfj4t\") pod \"cloudkitty-lokistack-query-frontend-7c8cd744d9-dr466\" (UID: \"8fbdd63a-fd88-4a37-85fb-08e7d21af574\") " pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-dr466" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.281983 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-querier-http\" (UniqueName: \"kubernetes.io/secret/f9c583d4-e5d0-4c13-9989-dea15920e9e6-cloudkitty-lokistack-querier-http\") pod \"cloudkitty-lokistack-querier-5467947bf7-qpg72\" (UID: \"f9c583d4-e5d0-4c13-9989-dea15920e9e6\") " pod="openstack/cloudkitty-lokistack-querier-5467947bf7-qpg72" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.282136 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/f9c583d4-e5d0-4c13-9989-dea15920e9e6-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-querier-5467947bf7-qpg72\" (UID: \"f9c583d4-e5d0-4c13-9989-dea15920e9e6\") " pod="openstack/cloudkitty-lokistack-querier-5467947bf7-qpg72" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.282268 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9c583d4-e5d0-4c13-9989-dea15920e9e6-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-querier-5467947bf7-qpg72\" (UID: \"f9c583d4-e5d0-4c13-9989-dea15920e9e6\") " pod="openstack/cloudkitty-lokistack-querier-5467947bf7-qpg72" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.282894 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9c583d4-e5d0-4c13-9989-dea15920e9e6-config\") pod \"cloudkitty-lokistack-querier-5467947bf7-qpg72\" (UID: \"f9c583d4-e5d0-4c13-9989-dea15920e9e6\") " pod="openstack/cloudkitty-lokistack-querier-5467947bf7-qpg72" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.296123 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-gateway-bc75944f-sxgmq"] Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.299438 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-querier-grpc\" (UniqueName: \"kubernetes.io/secret/f9c583d4-e5d0-4c13-9989-dea15920e9e6-cloudkitty-lokistack-querier-grpc\") pod \"cloudkitty-lokistack-querier-5467947bf7-qpg72\" (UID: \"f9c583d4-e5d0-4c13-9989-dea15920e9e6\") " pod="openstack/cloudkitty-lokistack-querier-5467947bf7-qpg72" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.301321 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-sxgmq" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.304491 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-querier-http\" (UniqueName: \"kubernetes.io/secret/f9c583d4-e5d0-4c13-9989-dea15920e9e6-cloudkitty-lokistack-querier-http\") pod \"cloudkitty-lokistack-querier-5467947bf7-qpg72\" (UID: \"f9c583d4-e5d0-4c13-9989-dea15920e9e6\") " pod="openstack/cloudkitty-lokistack-querier-5467947bf7-qpg72" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.304905 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/f9c583d4-e5d0-4c13-9989-dea15920e9e6-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-querier-5467947bf7-qpg72\" (UID: \"f9c583d4-e5d0-4c13-9989-dea15920e9e6\") " pod="openstack/cloudkitty-lokistack-querier-5467947bf7-qpg72" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.313378 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-gateway-ca-bundle" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.313561 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-gateway" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.314253 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-ca" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.314324 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.314715 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway-http" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.314884 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway-client-http" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.316192 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqp2z\" (UniqueName: \"kubernetes.io/projected/f9c583d4-e5d0-4c13-9989-dea15920e9e6-kube-api-access-hqp2z\") pod \"cloudkitty-lokistack-querier-5467947bf7-qpg72\" (UID: \"f9c583d4-e5d0-4c13-9989-dea15920e9e6\") " pod="openstack/cloudkitty-lokistack-querier-5467947bf7-qpg72" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.324852 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-gateway-bc75944f-hm8ng"] Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.326416 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-hm8ng" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.332413 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway-dockercfg-sbjp4" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.359252 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-bc75944f-hm8ng"] Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.381327 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-bc75944f-sxgmq"] Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.383361 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/8fbdd63a-fd88-4a37-85fb-08e7d21af574-cloudkitty-lokistack-query-frontend-http\") pod \"cloudkitty-lokistack-query-frontend-7c8cd744d9-dr466\" (UID: \"8fbdd63a-fd88-4a37-85fb-08e7d21af574\") " pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-dr466" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.383413 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/8fbdd63a-fd88-4a37-85fb-08e7d21af574-cloudkitty-lokistack-query-frontend-grpc\") pod \"cloudkitty-lokistack-query-frontend-7c8cd744d9-dr466\" (UID: \"8fbdd63a-fd88-4a37-85fb-08e7d21af574\") " pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-dr466" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.383457 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8fbdd63a-fd88-4a37-85fb-08e7d21af574-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-query-frontend-7c8cd744d9-dr466\" (UID: \"8fbdd63a-fd88-4a37-85fb-08e7d21af574\") " pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-dr466" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.383500 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fbdd63a-fd88-4a37-85fb-08e7d21af574-config\") pod \"cloudkitty-lokistack-query-frontend-7c8cd744d9-dr466\" (UID: \"8fbdd63a-fd88-4a37-85fb-08e7d21af574\") " pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-dr466" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.383566 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfj4t\" (UniqueName: \"kubernetes.io/projected/8fbdd63a-fd88-4a37-85fb-08e7d21af574-kube-api-access-cfj4t\") pod \"cloudkitty-lokistack-query-frontend-7c8cd744d9-dr466\" (UID: \"8fbdd63a-fd88-4a37-85fb-08e7d21af574\") " pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-dr466" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.385643 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8fbdd63a-fd88-4a37-85fb-08e7d21af574-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-query-frontend-7c8cd744d9-dr466\" (UID: \"8fbdd63a-fd88-4a37-85fb-08e7d21af574\") " pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-dr466" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.387255 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-querier-5467947bf7-qpg72" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.387656 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fbdd63a-fd88-4a37-85fb-08e7d21af574-config\") pod \"cloudkitty-lokistack-query-frontend-7c8cd744d9-dr466\" (UID: \"8fbdd63a-fd88-4a37-85fb-08e7d21af574\") " pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-dr466" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.390049 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/8fbdd63a-fd88-4a37-85fb-08e7d21af574-cloudkitty-lokistack-query-frontend-http\") pod \"cloudkitty-lokistack-query-frontend-7c8cd744d9-dr466\" (UID: \"8fbdd63a-fd88-4a37-85fb-08e7d21af574\") " pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-dr466" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.394080 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/8fbdd63a-fd88-4a37-85fb-08e7d21af574-cloudkitty-lokistack-query-frontend-grpc\") pod \"cloudkitty-lokistack-query-frontend-7c8cd744d9-dr466\" (UID: \"8fbdd63a-fd88-4a37-85fb-08e7d21af574\") " pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-dr466" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.422848 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfj4t\" (UniqueName: \"kubernetes.io/projected/8fbdd63a-fd88-4a37-85fb-08e7d21af574-kube-api-access-cfj4t\") pod \"cloudkitty-lokistack-query-frontend-7c8cd744d9-dr466\" (UID: \"8fbdd63a-fd88-4a37-85fb-08e7d21af574\") " pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-dr466" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.485381 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/145f8d2b-2e95-4227-8c76-1f3ee8eab754-rbac\") pod \"cloudkitty-lokistack-gateway-bc75944f-hm8ng\" (UID: \"145f8d2b-2e95-4227-8c76-1f3ee8eab754\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-hm8ng" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.485438 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/614e240f-4195-4915-9e2e-d142c9df25bc-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-bc75944f-sxgmq\" (UID: \"614e240f-4195-4915-9e2e-d142c9df25bc\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-sxgmq" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.485456 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/145f8d2b-2e95-4227-8c76-1f3ee8eab754-tenants\") pod \"cloudkitty-lokistack-gateway-bc75944f-hm8ng\" (UID: \"145f8d2b-2e95-4227-8c76-1f3ee8eab754\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-hm8ng" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.485490 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/614e240f-4195-4915-9e2e-d142c9df25bc-cloudkitty-lokistack-gateway-ca-bundle\") pod 
\"cloudkitty-lokistack-gateway-bc75944f-sxgmq\" (UID: \"614e240f-4195-4915-9e2e-d142c9df25bc\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-sxgmq" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.485512 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsqc6\" (UniqueName: \"kubernetes.io/projected/145f8d2b-2e95-4227-8c76-1f3ee8eab754-kube-api-access-jsqc6\") pod \"cloudkitty-lokistack-gateway-bc75944f-hm8ng\" (UID: \"145f8d2b-2e95-4227-8c76-1f3ee8eab754\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-hm8ng" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.485536 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/145f8d2b-2e95-4227-8c76-1f3ee8eab754-tls-secret\") pod \"cloudkitty-lokistack-gateway-bc75944f-hm8ng\" (UID: \"145f8d2b-2e95-4227-8c76-1f3ee8eab754\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-hm8ng" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.485561 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/145f8d2b-2e95-4227-8c76-1f3ee8eab754-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-bc75944f-hm8ng\" (UID: \"145f8d2b-2e95-4227-8c76-1f3ee8eab754\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-hm8ng" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.485587 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/145f8d2b-2e95-4227-8c76-1f3ee8eab754-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-bc75944f-hm8ng\" (UID: \"145f8d2b-2e95-4227-8c76-1f3ee8eab754\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-hm8ng" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.485606 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/614e240f-4195-4915-9e2e-d142c9df25bc-rbac\") pod \"cloudkitty-lokistack-gateway-bc75944f-sxgmq\" (UID: \"614e240f-4195-4915-9e2e-d142c9df25bc\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-sxgmq" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.485626 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/614e240f-4195-4915-9e2e-d142c9df25bc-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-bc75944f-sxgmq\" (UID: \"614e240f-4195-4915-9e2e-d142c9df25bc\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-sxgmq" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.485664 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/145f8d2b-2e95-4227-8c76-1f3ee8eab754-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-bc75944f-hm8ng\" (UID: \"145f8d2b-2e95-4227-8c76-1f3ee8eab754\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-hm8ng" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.485680 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: 
\"kubernetes.io/secret/614e240f-4195-4915-9e2e-d142c9df25bc-tenants\") pod \"cloudkitty-lokistack-gateway-bc75944f-sxgmq\" (UID: \"614e240f-4195-4915-9e2e-d142c9df25bc\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-sxgmq" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.485705 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-495wc\" (UniqueName: \"kubernetes.io/projected/614e240f-4195-4915-9e2e-d142c9df25bc-kube-api-access-495wc\") pod \"cloudkitty-lokistack-gateway-bc75944f-sxgmq\" (UID: \"614e240f-4195-4915-9e2e-d142c9df25bc\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-sxgmq" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.485723 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/614e240f-4195-4915-9e2e-d142c9df25bc-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-bc75944f-sxgmq\" (UID: \"614e240f-4195-4915-9e2e-d142c9df25bc\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-sxgmq" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.485755 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/614e240f-4195-4915-9e2e-d142c9df25bc-tls-secret\") pod \"cloudkitty-lokistack-gateway-bc75944f-sxgmq\" (UID: \"614e240f-4195-4915-9e2e-d142c9df25bc\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-sxgmq" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.485772 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/614e240f-4195-4915-9e2e-d142c9df25bc-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-bc75944f-sxgmq\" (UID: \"614e240f-4195-4915-9e2e-d142c9df25bc\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-sxgmq" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.485791 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/145f8d2b-2e95-4227-8c76-1f3ee8eab754-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-bc75944f-hm8ng\" (UID: \"145f8d2b-2e95-4227-8c76-1f3ee8eab754\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-hm8ng" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.485807 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/145f8d2b-2e95-4227-8c76-1f3ee8eab754-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-bc75944f-hm8ng\" (UID: \"145f8d2b-2e95-4227-8c76-1f3ee8eab754\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-hm8ng" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.499517 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-dr466" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.626640 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/145f8d2b-2e95-4227-8c76-1f3ee8eab754-rbac\") pod \"cloudkitty-lokistack-gateway-bc75944f-hm8ng\" (UID: \"145f8d2b-2e95-4227-8c76-1f3ee8eab754\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-hm8ng" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.626700 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/614e240f-4195-4915-9e2e-d142c9df25bc-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-bc75944f-sxgmq\" (UID: \"614e240f-4195-4915-9e2e-d142c9df25bc\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-sxgmq" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.626720 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/145f8d2b-2e95-4227-8c76-1f3ee8eab754-tenants\") pod \"cloudkitty-lokistack-gateway-bc75944f-hm8ng\" (UID: \"145f8d2b-2e95-4227-8c76-1f3ee8eab754\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-hm8ng" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.626740 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/614e240f-4195-4915-9e2e-d142c9df25bc-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-bc75944f-sxgmq\" (UID: \"614e240f-4195-4915-9e2e-d142c9df25bc\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-sxgmq" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.626759 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsqc6\" (UniqueName: \"kubernetes.io/projected/145f8d2b-2e95-4227-8c76-1f3ee8eab754-kube-api-access-jsqc6\") pod \"cloudkitty-lokistack-gateway-bc75944f-hm8ng\" (UID: \"145f8d2b-2e95-4227-8c76-1f3ee8eab754\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-hm8ng" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.626786 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/145f8d2b-2e95-4227-8c76-1f3ee8eab754-tls-secret\") pod \"cloudkitty-lokistack-gateway-bc75944f-hm8ng\" (UID: \"145f8d2b-2e95-4227-8c76-1f3ee8eab754\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-hm8ng" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.626819 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/145f8d2b-2e95-4227-8c76-1f3ee8eab754-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-bc75944f-hm8ng\" (UID: \"145f8d2b-2e95-4227-8c76-1f3ee8eab754\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-hm8ng" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.626852 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/145f8d2b-2e95-4227-8c76-1f3ee8eab754-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-bc75944f-hm8ng\" (UID: \"145f8d2b-2e95-4227-8c76-1f3ee8eab754\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-hm8ng" Dec 10 15:41:38 crc kubenswrapper[4755]: 
I1210 15:41:38.626872 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/614e240f-4195-4915-9e2e-d142c9df25bc-rbac\") pod \"cloudkitty-lokistack-gateway-bc75944f-sxgmq\" (UID: \"614e240f-4195-4915-9e2e-d142c9df25bc\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-sxgmq" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.626893 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/614e240f-4195-4915-9e2e-d142c9df25bc-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-bc75944f-sxgmq\" (UID: \"614e240f-4195-4915-9e2e-d142c9df25bc\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-sxgmq" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.626933 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/145f8d2b-2e95-4227-8c76-1f3ee8eab754-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-bc75944f-hm8ng\" (UID: \"145f8d2b-2e95-4227-8c76-1f3ee8eab754\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-hm8ng" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.626948 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/614e240f-4195-4915-9e2e-d142c9df25bc-tenants\") pod \"cloudkitty-lokistack-gateway-bc75944f-sxgmq\" (UID: \"614e240f-4195-4915-9e2e-d142c9df25bc\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-sxgmq" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.626971 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-495wc\" (UniqueName: \"kubernetes.io/projected/614e240f-4195-4915-9e2e-d142c9df25bc-kube-api-access-495wc\") pod \"cloudkitty-lokistack-gateway-bc75944f-sxgmq\" (UID: \"614e240f-4195-4915-9e2e-d142c9df25bc\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-sxgmq" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.626990 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/614e240f-4195-4915-9e2e-d142c9df25bc-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-bc75944f-sxgmq\" (UID: \"614e240f-4195-4915-9e2e-d142c9df25bc\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-sxgmq" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.627031 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/614e240f-4195-4915-9e2e-d142c9df25bc-tls-secret\") pod \"cloudkitty-lokistack-gateway-bc75944f-sxgmq\" (UID: \"614e240f-4195-4915-9e2e-d142c9df25bc\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-sxgmq" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.627060 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/145f8d2b-2e95-4227-8c76-1f3ee8eab754-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-bc75944f-hm8ng\" (UID: \"145f8d2b-2e95-4227-8c76-1f3ee8eab754\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-hm8ng" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.627081 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/614e240f-4195-4915-9e2e-d142c9df25bc-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-bc75944f-sxgmq\" (UID: \"614e240f-4195-4915-9e2e-d142c9df25bc\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-sxgmq" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.627105 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/145f8d2b-2e95-4227-8c76-1f3ee8eab754-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-bc75944f-hm8ng\" (UID: \"145f8d2b-2e95-4227-8c76-1f3ee8eab754\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-hm8ng" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.627977 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/145f8d2b-2e95-4227-8c76-1f3ee8eab754-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-bc75944f-hm8ng\" (UID: \"145f8d2b-2e95-4227-8c76-1f3ee8eab754\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-hm8ng" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.628313 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/145f8d2b-2e95-4227-8c76-1f3ee8eab754-rbac\") pod \"cloudkitty-lokistack-gateway-bc75944f-hm8ng\" (UID: \"145f8d2b-2e95-4227-8c76-1f3ee8eab754\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-hm8ng" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.628547 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/614e240f-4195-4915-9e2e-d142c9df25bc-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-bc75944f-sxgmq\" (UID: \"614e240f-4195-4915-9e2e-d142c9df25bc\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-sxgmq" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.628961 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/614e240f-4195-4915-9e2e-d142c9df25bc-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-bc75944f-sxgmq\" (UID: \"614e240f-4195-4915-9e2e-d142c9df25bc\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-sxgmq" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.629563 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/145f8d2b-2e95-4227-8c76-1f3ee8eab754-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-bc75944f-hm8ng\" (UID: \"145f8d2b-2e95-4227-8c76-1f3ee8eab754\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-hm8ng" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.634821 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/614e240f-4195-4915-9e2e-d142c9df25bc-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-bc75944f-sxgmq\" (UID: \"614e240f-4195-4915-9e2e-d142c9df25bc\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-sxgmq" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.634821 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" 
(UniqueName: \"kubernetes.io/configmap/614e240f-4195-4915-9e2e-d142c9df25bc-rbac\") pod \"cloudkitty-lokistack-gateway-bc75944f-sxgmq\" (UID: \"614e240f-4195-4915-9e2e-d142c9df25bc\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-sxgmq" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.635626 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/145f8d2b-2e95-4227-8c76-1f3ee8eab754-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-bc75944f-hm8ng\" (UID: \"145f8d2b-2e95-4227-8c76-1f3ee8eab754\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-hm8ng" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.635995 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/145f8d2b-2e95-4227-8c76-1f3ee8eab754-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-bc75944f-hm8ng\" (UID: \"145f8d2b-2e95-4227-8c76-1f3ee8eab754\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-hm8ng" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.636189 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/614e240f-4195-4915-9e2e-d142c9df25bc-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-bc75944f-sxgmq\" (UID: \"614e240f-4195-4915-9e2e-d142c9df25bc\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-sxgmq" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.643106 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/145f8d2b-2e95-4227-8c76-1f3ee8eab754-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-bc75944f-hm8ng\" (UID: \"145f8d2b-2e95-4227-8c76-1f3ee8eab754\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-hm8ng" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.646194 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/145f8d2b-2e95-4227-8c76-1f3ee8eab754-tenants\") pod \"cloudkitty-lokistack-gateway-bc75944f-hm8ng\" (UID: \"145f8d2b-2e95-4227-8c76-1f3ee8eab754\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-hm8ng" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.676737 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/614e240f-4195-4915-9e2e-d142c9df25bc-tenants\") pod \"cloudkitty-lokistack-gateway-bc75944f-sxgmq\" (UID: \"614e240f-4195-4915-9e2e-d142c9df25bc\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-sxgmq" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.677086 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/145f8d2b-2e95-4227-8c76-1f3ee8eab754-tls-secret\") pod \"cloudkitty-lokistack-gateway-bc75944f-hm8ng\" (UID: \"145f8d2b-2e95-4227-8c76-1f3ee8eab754\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-hm8ng" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.677200 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/614e240f-4195-4915-9e2e-d142c9df25bc-tls-secret\") pod \"cloudkitty-lokistack-gateway-bc75944f-sxgmq\" (UID: \"614e240f-4195-4915-9e2e-d142c9df25bc\") " 
pod="openstack/cloudkitty-lokistack-gateway-bc75944f-sxgmq" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.678312 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/614e240f-4195-4915-9e2e-d142c9df25bc-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-bc75944f-sxgmq\" (UID: \"614e240f-4195-4915-9e2e-d142c9df25bc\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-sxgmq" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.678442 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsqc6\" (UniqueName: \"kubernetes.io/projected/145f8d2b-2e95-4227-8c76-1f3ee8eab754-kube-api-access-jsqc6\") pod \"cloudkitty-lokistack-gateway-bc75944f-hm8ng\" (UID: \"145f8d2b-2e95-4227-8c76-1f3ee8eab754\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-hm8ng" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.680728 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-495wc\" (UniqueName: \"kubernetes.io/projected/614e240f-4195-4915-9e2e-d142c9df25bc-kube-api-access-495wc\") pod \"cloudkitty-lokistack-gateway-bc75944f-sxgmq\" (UID: \"614e240f-4195-4915-9e2e-d142c9df25bc\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-sxgmq" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.685651 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-sxgmq" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.701904 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-hm8ng" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.982981 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-ingester-0"] Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.984111 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.988172 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-ingester-http" Dec 10 15:41:38 crc kubenswrapper[4755]: I1210 15:41:38.988253 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-ingester-grpc" Dec 10 15:41:39 crc kubenswrapper[4755]: I1210 15:41:39.008965 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-ingester-0"] Dec 10 15:41:39 crc kubenswrapper[4755]: I1210 15:41:39.032808 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5a3871d-6b81-4b3d-9044-fcbcf437effb-config\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"e5a3871d-6b81-4b3d-9044-fcbcf437effb\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:41:39 crc kubenswrapper[4755]: I1210 15:41:39.033087 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rj9jd\" (UniqueName: \"kubernetes.io/projected/e5a3871d-6b81-4b3d-9044-fcbcf437effb-kube-api-access-rj9jd\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"e5a3871d-6b81-4b3d-9044-fcbcf437effb\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:41:39 crc kubenswrapper[4755]: I1210 15:41:39.033203 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5a3871d-6b81-4b3d-9044-fcbcf437effb-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"e5a3871d-6b81-4b3d-9044-fcbcf437effb\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:41:39 crc kubenswrapper[4755]: I1210 15:41:39.033325 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"e5a3871d-6b81-4b3d-9044-fcbcf437effb\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:41:39 crc kubenswrapper[4755]: I1210 15:41:39.033644 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/e5a3871d-6b81-4b3d-9044-fcbcf437effb-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"e5a3871d-6b81-4b3d-9044-fcbcf437effb\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:41:39 crc kubenswrapper[4755]: I1210 15:41:39.033746 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"e5a3871d-6b81-4b3d-9044-fcbcf437effb\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:41:39 crc kubenswrapper[4755]: I1210 15:41:39.034241 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/e5a3871d-6b81-4b3d-9044-fcbcf437effb-cloudkitty-lokistack-ingester-grpc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"e5a3871d-6b81-4b3d-9044-fcbcf437effb\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:41:39 crc 
kubenswrapper[4755]: I1210 15:41:39.034374 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ingester-http\" (UniqueName: \"kubernetes.io/secret/e5a3871d-6b81-4b3d-9044-fcbcf437effb-cloudkitty-lokistack-ingester-http\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"e5a3871d-6b81-4b3d-9044-fcbcf437effb\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:41:39 crc kubenswrapper[4755]: I1210 15:41:39.104849 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"] Dec 10 15:41:39 crc kubenswrapper[4755]: I1210 15:41:39.114095 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:41:39 crc kubenswrapper[4755]: I1210 15:41:39.123020 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-compactor-grpc" Dec 10 15:41:39 crc kubenswrapper[4755]: I1210 15:41:39.123070 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-compactor-http" Dec 10 15:41:39 crc kubenswrapper[4755]: I1210 15:41:39.136656 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/e5a3871d-6b81-4b3d-9044-fcbcf437effb-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"e5a3871d-6b81-4b3d-9044-fcbcf437effb\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:41:39 crc kubenswrapper[4755]: I1210 15:41:39.136700 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"e5a3871d-6b81-4b3d-9044-fcbcf437effb\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:41:39 crc kubenswrapper[4755]: I1210 15:41:39.136740 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/e5a3871d-6b81-4b3d-9044-fcbcf437effb-cloudkitty-lokistack-ingester-grpc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"e5a3871d-6b81-4b3d-9044-fcbcf437effb\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:41:39 crc kubenswrapper[4755]: I1210 15:41:39.136780 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ingester-http\" (UniqueName: \"kubernetes.io/secret/e5a3871d-6b81-4b3d-9044-fcbcf437effb-cloudkitty-lokistack-ingester-http\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"e5a3871d-6b81-4b3d-9044-fcbcf437effb\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:41:39 crc kubenswrapper[4755]: I1210 15:41:39.136818 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5a3871d-6b81-4b3d-9044-fcbcf437effb-config\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"e5a3871d-6b81-4b3d-9044-fcbcf437effb\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:41:39 crc kubenswrapper[4755]: I1210 15:41:39.136862 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rj9jd\" (UniqueName: \"kubernetes.io/projected/e5a3871d-6b81-4b3d-9044-fcbcf437effb-kube-api-access-rj9jd\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"e5a3871d-6b81-4b3d-9044-fcbcf437effb\") " 
pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:41:39 crc kubenswrapper[4755]: I1210 15:41:39.136885 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5a3871d-6b81-4b3d-9044-fcbcf437effb-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"e5a3871d-6b81-4b3d-9044-fcbcf437effb\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:41:39 crc kubenswrapper[4755]: I1210 15:41:39.136908 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"e5a3871d-6b81-4b3d-9044-fcbcf437effb\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:41:39 crc kubenswrapper[4755]: I1210 15:41:39.137413 4755 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"e5a3871d-6b81-4b3d-9044-fcbcf437effb\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:41:39 crc kubenswrapper[4755]: I1210 15:41:39.137900 4755 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"e5a3871d-6b81-4b3d-9044-fcbcf437effb\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:41:39 crc kubenswrapper[4755]: I1210 15:41:39.138771 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5a3871d-6b81-4b3d-9044-fcbcf437effb-config\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"e5a3871d-6b81-4b3d-9044-fcbcf437effb\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:41:39 crc kubenswrapper[4755]: I1210 15:41:39.138783 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5a3871d-6b81-4b3d-9044-fcbcf437effb-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"e5a3871d-6b81-4b3d-9044-fcbcf437effb\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:41:39 crc kubenswrapper[4755]: I1210 15:41:39.143814 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ingester-http\" (UniqueName: \"kubernetes.io/secret/e5a3871d-6b81-4b3d-9044-fcbcf437effb-cloudkitty-lokistack-ingester-http\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"e5a3871d-6b81-4b3d-9044-fcbcf437effb\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:41:39 crc kubenswrapper[4755]: I1210 15:41:39.144841 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/e5a3871d-6b81-4b3d-9044-fcbcf437effb-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"e5a3871d-6b81-4b3d-9044-fcbcf437effb\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:41:39 crc kubenswrapper[4755]: I1210 15:41:39.147482 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"] Dec 10 15:41:39 crc kubenswrapper[4755]: I1210 15:41:39.158054 4755 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-rj9jd\" (UniqueName: \"kubernetes.io/projected/e5a3871d-6b81-4b3d-9044-fcbcf437effb-kube-api-access-rj9jd\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"e5a3871d-6b81-4b3d-9044-fcbcf437effb\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:41:39 crc kubenswrapper[4755]: I1210 15:41:39.159147 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/e5a3871d-6b81-4b3d-9044-fcbcf437effb-cloudkitty-lokistack-ingester-grpc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"e5a3871d-6b81-4b3d-9044-fcbcf437effb\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:41:39 crc kubenswrapper[4755]: I1210 15:41:39.185272 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"e5a3871d-6b81-4b3d-9044-fcbcf437effb\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:41:39 crc kubenswrapper[4755]: I1210 15:41:39.212048 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"e5a3871d-6b81-4b3d-9044-fcbcf437effb\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:41:39 crc kubenswrapper[4755]: I1210 15:41:39.218724 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-index-gateway-0"] Dec 10 15:41:39 crc kubenswrapper[4755]: I1210 15:41:39.220004 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:41:39 crc kubenswrapper[4755]: I1210 15:41:39.221498 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-index-gateway-grpc" Dec 10 15:41:39 crc kubenswrapper[4755]: I1210 15:41:39.222776 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-index-gateway-http" Dec 10 15:41:39 crc kubenswrapper[4755]: I1210 15:41:39.228275 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-index-gateway-0"] Dec 10 15:41:39 crc kubenswrapper[4755]: I1210 15:41:39.240808 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5l645\" (UniqueName: \"kubernetes.io/projected/31bbbf2c-5266-4ea7-8428-ed2607013a35-kube-api-access-5l645\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"31bbbf2c-5266-4ea7-8428-ed2607013a35\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:41:39 crc kubenswrapper[4755]: I1210 15:41:39.241194 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-compactor-http\" (UniqueName: \"kubernetes.io/secret/31bbbf2c-5266-4ea7-8428-ed2607013a35-cloudkitty-lokistack-compactor-http\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"31bbbf2c-5266-4ea7-8428-ed2607013a35\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:41:39 crc kubenswrapper[4755]: I1210 15:41:39.241324 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/31bbbf2c-5266-4ea7-8428-ed2607013a35-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-compactor-0\" (UID: 
\"31bbbf2c-5266-4ea7-8428-ed2607013a35\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:41:39 crc kubenswrapper[4755]: I1210 15:41:39.241542 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"31bbbf2c-5266-4ea7-8428-ed2607013a35\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:41:39 crc kubenswrapper[4755]: I1210 15:41:39.241666 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31bbbf2c-5266-4ea7-8428-ed2607013a35-config\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"31bbbf2c-5266-4ea7-8428-ed2607013a35\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:41:39 crc kubenswrapper[4755]: I1210 15:41:39.241797 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/31bbbf2c-5266-4ea7-8428-ed2607013a35-cloudkitty-lokistack-compactor-grpc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"31bbbf2c-5266-4ea7-8428-ed2607013a35\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:41:39 crc kubenswrapper[4755]: I1210 15:41:39.242073 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31bbbf2c-5266-4ea7-8428-ed2607013a35-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"31bbbf2c-5266-4ea7-8428-ed2607013a35\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:41:39 crc kubenswrapper[4755]: I1210 15:41:39.344749 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31bbbf2c-5266-4ea7-8428-ed2607013a35-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"31bbbf2c-5266-4ea7-8428-ed2607013a35\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:41:39 crc kubenswrapper[4755]: I1210 15:41:39.344824 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/4e702de9-8dda-4370-b806-41083a70ac41-cloudkitty-lokistack-index-gateway-grpc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"4e702de9-8dda-4370-b806-41083a70ac41\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:41:39 crc kubenswrapper[4755]: I1210 15:41:39.344851 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nxq6\" (UniqueName: \"kubernetes.io/projected/4e702de9-8dda-4370-b806-41083a70ac41-kube-api-access-6nxq6\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"4e702de9-8dda-4370-b806-41083a70ac41\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:41:39 crc kubenswrapper[4755]: I1210 15:41:39.344922 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/4e702de9-8dda-4370-b806-41083a70ac41-cloudkitty-lokistack-index-gateway-http\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"4e702de9-8dda-4370-b806-41083a70ac41\") " 
pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:41:39 crc kubenswrapper[4755]: I1210 15:41:39.344974 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"4e702de9-8dda-4370-b806-41083a70ac41\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:41:39 crc kubenswrapper[4755]: I1210 15:41:39.345007 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e702de9-8dda-4370-b806-41083a70ac41-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"4e702de9-8dda-4370-b806-41083a70ac41\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:41:39 crc kubenswrapper[4755]: I1210 15:41:39.345035 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5l645\" (UniqueName: \"kubernetes.io/projected/31bbbf2c-5266-4ea7-8428-ed2607013a35-kube-api-access-5l645\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"31bbbf2c-5266-4ea7-8428-ed2607013a35\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:41:39 crc kubenswrapper[4755]: I1210 15:41:39.345060 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-compactor-http\" (UniqueName: \"kubernetes.io/secret/31bbbf2c-5266-4ea7-8428-ed2607013a35-cloudkitty-lokistack-compactor-http\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"31bbbf2c-5266-4ea7-8428-ed2607013a35\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:41:39 crc kubenswrapper[4755]: I1210 15:41:39.345092 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/31bbbf2c-5266-4ea7-8428-ed2607013a35-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"31bbbf2c-5266-4ea7-8428-ed2607013a35\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:41:39 crc kubenswrapper[4755]: I1210 15:41:39.345130 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e702de9-8dda-4370-b806-41083a70ac41-config\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"4e702de9-8dda-4370-b806-41083a70ac41\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:41:39 crc kubenswrapper[4755]: I1210 15:41:39.345172 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"31bbbf2c-5266-4ea7-8428-ed2607013a35\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:41:39 crc kubenswrapper[4755]: I1210 15:41:39.345245 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31bbbf2c-5266-4ea7-8428-ed2607013a35-config\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"31bbbf2c-5266-4ea7-8428-ed2607013a35\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:41:39 crc kubenswrapper[4755]: I1210 15:41:39.345293 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: 
\"kubernetes.io/secret/4e702de9-8dda-4370-b806-41083a70ac41-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"4e702de9-8dda-4370-b806-41083a70ac41\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:41:39 crc kubenswrapper[4755]: I1210 15:41:39.345322 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/31bbbf2c-5266-4ea7-8428-ed2607013a35-cloudkitty-lokistack-compactor-grpc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"31bbbf2c-5266-4ea7-8428-ed2607013a35\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:41:39 crc kubenswrapper[4755]: I1210 15:41:39.347009 4755 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"31bbbf2c-5266-4ea7-8428-ed2607013a35\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:41:39 crc kubenswrapper[4755]: I1210 15:41:39.347092 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31bbbf2c-5266-4ea7-8428-ed2607013a35-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"31bbbf2c-5266-4ea7-8428-ed2607013a35\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:41:39 crc kubenswrapper[4755]: I1210 15:41:39.348349 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31bbbf2c-5266-4ea7-8428-ed2607013a35-config\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"31bbbf2c-5266-4ea7-8428-ed2607013a35\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:41:39 crc kubenswrapper[4755]: I1210 15:41:39.348629 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/31bbbf2c-5266-4ea7-8428-ed2607013a35-cloudkitty-lokistack-compactor-grpc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"31bbbf2c-5266-4ea7-8428-ed2607013a35\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:41:39 crc kubenswrapper[4755]: I1210 15:41:39.348742 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:41:39 crc kubenswrapper[4755]: I1210 15:41:39.357980 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/31bbbf2c-5266-4ea7-8428-ed2607013a35-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"31bbbf2c-5266-4ea7-8428-ed2607013a35\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:41:39 crc kubenswrapper[4755]: I1210 15:41:39.360696 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-compactor-http\" (UniqueName: \"kubernetes.io/secret/31bbbf2c-5266-4ea7-8428-ed2607013a35-cloudkitty-lokistack-compactor-http\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"31bbbf2c-5266-4ea7-8428-ed2607013a35\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:41:39 crc kubenswrapper[4755]: I1210 15:41:39.361816 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5l645\" (UniqueName: \"kubernetes.io/projected/31bbbf2c-5266-4ea7-8428-ed2607013a35-kube-api-access-5l645\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"31bbbf2c-5266-4ea7-8428-ed2607013a35\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:41:39 crc kubenswrapper[4755]: I1210 15:41:39.373187 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"31bbbf2c-5266-4ea7-8428-ed2607013a35\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:41:39 crc kubenswrapper[4755]: I1210 15:41:39.447507 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e702de9-8dda-4370-b806-41083a70ac41-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"4e702de9-8dda-4370-b806-41083a70ac41\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:41:39 crc kubenswrapper[4755]: I1210 15:41:39.447593 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e702de9-8dda-4370-b806-41083a70ac41-config\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"4e702de9-8dda-4370-b806-41083a70ac41\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:41:39 crc kubenswrapper[4755]: I1210 15:41:39.447661 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/4e702de9-8dda-4370-b806-41083a70ac41-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"4e702de9-8dda-4370-b806-41083a70ac41\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:41:39 crc kubenswrapper[4755]: I1210 15:41:39.447718 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/4e702de9-8dda-4370-b806-41083a70ac41-cloudkitty-lokistack-index-gateway-grpc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"4e702de9-8dda-4370-b806-41083a70ac41\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:41:39 crc kubenswrapper[4755]: I1210 15:41:39.447733 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nxq6\" (UniqueName: 
\"kubernetes.io/projected/4e702de9-8dda-4370-b806-41083a70ac41-kube-api-access-6nxq6\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"4e702de9-8dda-4370-b806-41083a70ac41\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:41:39 crc kubenswrapper[4755]: I1210 15:41:39.447779 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/4e702de9-8dda-4370-b806-41083a70ac41-cloudkitty-lokistack-index-gateway-http\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"4e702de9-8dda-4370-b806-41083a70ac41\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:41:39 crc kubenswrapper[4755]: I1210 15:41:39.447817 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"4e702de9-8dda-4370-b806-41083a70ac41\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:41:39 crc kubenswrapper[4755]: I1210 15:41:39.447949 4755 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"4e702de9-8dda-4370-b806-41083a70ac41\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:41:39 crc kubenswrapper[4755]: I1210 15:41:39.449062 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e702de9-8dda-4370-b806-41083a70ac41-config\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"4e702de9-8dda-4370-b806-41083a70ac41\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:41:39 crc kubenswrapper[4755]: I1210 15:41:39.449125 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e702de9-8dda-4370-b806-41083a70ac41-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"4e702de9-8dda-4370-b806-41083a70ac41\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:41:39 crc kubenswrapper[4755]: I1210 15:41:39.452940 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/4e702de9-8dda-4370-b806-41083a70ac41-cloudkitty-lokistack-index-gateway-grpc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"4e702de9-8dda-4370-b806-41083a70ac41\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:41:39 crc kubenswrapper[4755]: I1210 15:41:39.454389 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/4e702de9-8dda-4370-b806-41083a70ac41-cloudkitty-lokistack-index-gateway-http\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"4e702de9-8dda-4370-b806-41083a70ac41\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:41:39 crc kubenswrapper[4755]: I1210 15:41:39.455196 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/4e702de9-8dda-4370-b806-41083a70ac41-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"4e702de9-8dda-4370-b806-41083a70ac41\") " 
pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:41:39 crc kubenswrapper[4755]: I1210 15:41:39.466043 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nxq6\" (UniqueName: \"kubernetes.io/projected/4e702de9-8dda-4370-b806-41083a70ac41-kube-api-access-6nxq6\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"4e702de9-8dda-4370-b806-41083a70ac41\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:41:39 crc kubenswrapper[4755]: I1210 15:41:39.469765 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"4e702de9-8dda-4370-b806-41083a70ac41\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:41:39 crc kubenswrapper[4755]: I1210 15:41:39.560454 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:41:39 crc kubenswrapper[4755]: I1210 15:41:39.567789 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:41:41 crc kubenswrapper[4755]: E1210 15:41:41.491065 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 10 15:41:41 crc kubenswrapper[4755]: E1210 15:41:41.493714 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b5rp2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start 
failed in pod dnsmasq-dns-675f4bcbfc-zm77c_openstack(7e6c07e9-7f64-4a32-88f0-301723cb221b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 15:41:41 crc kubenswrapper[4755]: E1210 15:41:41.495121 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-zm77c" podUID="7e6c07e9-7f64-4a32-88f0-301723cb221b" Dec 10 15:41:41 crc kubenswrapper[4755]: E1210 15:41:41.580338 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 10 15:41:41 crc kubenswrapper[4755]: E1210 15:41:41.580864 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jdwdt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-frv75_openstack(2a56f7f7-a7ff-44b8-8253-2cbdaa9f0e91): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 15:41:41 crc kubenswrapper[4755]: E1210 15:41:41.582016 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-frv75" 
podUID="2a56f7f7-a7ff-44b8-8253-2cbdaa9f0e91" Dec 10 15:41:41 crc kubenswrapper[4755]: I1210 15:41:41.962726 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 10 15:41:42 crc kubenswrapper[4755]: I1210 15:41:42.163259 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"b6fc3b5b-a2c7-404f-8435-6c2a72d2c4a8","Type":"ContainerStarted","Data":"a5f9dc7b0221b1eb8e916d1f53f4b5787a927e69836f5f49a5862e0bce1e9a62"} Dec 10 15:41:42 crc kubenswrapper[4755]: I1210 15:41:42.510750 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-prnhr"] Dec 10 15:41:42 crc kubenswrapper[4755]: W1210 15:41:42.512682 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5bf05873_62f3_4a0f_b58e_ec6346b5f057.slice/crio-ee523c1a4d94bf209f45bd57aeaccdd1efe224f3f4353ff77cc3ff30068f80a8 WatchSource:0}: Error finding container ee523c1a4d94bf209f45bd57aeaccdd1efe224f3f4353ff77cc3ff30068f80a8: Status 404 returned error can't find the container with id ee523c1a4d94bf209f45bd57aeaccdd1efe224f3f4353ff77cc3ff30068f80a8 Dec 10 15:41:42 crc kubenswrapper[4755]: I1210 15:41:42.538840 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 10 15:41:42 crc kubenswrapper[4755]: I1210 15:41:42.594221 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 10 15:41:42 crc kubenswrapper[4755]: W1210 15:41:42.596879 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48b9cc99_2595_445c_aca6_b13972e95324.slice/crio-95ddfccb91aec1a255d8c328cf2dbe0af92f822557cd8dad5499c40b236fbc64 WatchSource:0}: Error finding container 95ddfccb91aec1a255d8c328cf2dbe0af92f822557cd8dad5499c40b236fbc64: Status 404 returned error can't find the container with id 95ddfccb91aec1a255d8c328cf2dbe0af92f822557cd8dad5499c40b236fbc64 Dec 10 15:41:42 crc kubenswrapper[4755]: I1210 15:41:42.968383 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-zm77c" Dec 10 15:41:42 crc kubenswrapper[4755]: I1210 15:41:42.968677 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-frv75" Dec 10 15:41:42 crc kubenswrapper[4755]: I1210 15:41:42.970494 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 10 15:41:42 crc kubenswrapper[4755]: W1210 15:41:42.973649 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod376461c9_8e89_4c8c_bcef_6a873320a293.slice/crio-9723480bf7f2c7b2d87d2489169687d7094d842b5e0f6f9953722bd544ea1391 WatchSource:0}: Error finding container 9723480bf7f2c7b2d87d2489169687d7094d842b5e0f6f9953722bd544ea1391: Status 404 returned error can't find the container with id 9723480bf7f2c7b2d87d2489169687d7094d842b5e0f6f9953722bd544ea1391 Dec 10 15:41:42 crc kubenswrapper[4755]: I1210 15:41:42.980161 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Dec 10 15:41:43 crc kubenswrapper[4755]: I1210 15:41:43.027654 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdwdt\" (UniqueName: \"kubernetes.io/projected/2a56f7f7-a7ff-44b8-8253-2cbdaa9f0e91-kube-api-access-jdwdt\") pod \"2a56f7f7-a7ff-44b8-8253-2cbdaa9f0e91\" (UID: \"2a56f7f7-a7ff-44b8-8253-2cbdaa9f0e91\") " Dec 10 15:41:43 crc kubenswrapper[4755]: I1210 15:41:43.027722 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a56f7f7-a7ff-44b8-8253-2cbdaa9f0e91-config\") pod \"2a56f7f7-a7ff-44b8-8253-2cbdaa9f0e91\" (UID: \"2a56f7f7-a7ff-44b8-8253-2cbdaa9f0e91\") " Dec 10 15:41:43 crc kubenswrapper[4755]: I1210 15:41:43.027750 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2a56f7f7-a7ff-44b8-8253-2cbdaa9f0e91-dns-svc\") pod \"2a56f7f7-a7ff-44b8-8253-2cbdaa9f0e91\" (UID: \"2a56f7f7-a7ff-44b8-8253-2cbdaa9f0e91\") " Dec 10 15:41:43 crc kubenswrapper[4755]: I1210 15:41:43.027847 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e6c07e9-7f64-4a32-88f0-301723cb221b-config\") pod \"7e6c07e9-7f64-4a32-88f0-301723cb221b\" (UID: \"7e6c07e9-7f64-4a32-88f0-301723cb221b\") " Dec 10 15:41:43 crc kubenswrapper[4755]: I1210 15:41:43.027935 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5rp2\" (UniqueName: \"kubernetes.io/projected/7e6c07e9-7f64-4a32-88f0-301723cb221b-kube-api-access-b5rp2\") pod \"7e6c07e9-7f64-4a32-88f0-301723cb221b\" (UID: \"7e6c07e9-7f64-4a32-88f0-301723cb221b\") " Dec 10 15:41:43 crc kubenswrapper[4755]: I1210 15:41:43.029011 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a56f7f7-a7ff-44b8-8253-2cbdaa9f0e91-config" (OuterVolumeSpecName: "config") pod "2a56f7f7-a7ff-44b8-8253-2cbdaa9f0e91" (UID: "2a56f7f7-a7ff-44b8-8253-2cbdaa9f0e91"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:41:43 crc kubenswrapper[4755]: I1210 15:41:43.029461 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a56f7f7-a7ff-44b8-8253-2cbdaa9f0e91-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2a56f7f7-a7ff-44b8-8253-2cbdaa9f0e91" (UID: "2a56f7f7-a7ff-44b8-8253-2cbdaa9f0e91"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:41:43 crc kubenswrapper[4755]: I1210 15:41:43.029879 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e6c07e9-7f64-4a32-88f0-301723cb221b-config" (OuterVolumeSpecName: "config") pod "7e6c07e9-7f64-4a32-88f0-301723cb221b" (UID: "7e6c07e9-7f64-4a32-88f0-301723cb221b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:41:43 crc kubenswrapper[4755]: I1210 15:41:43.033224 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e6c07e9-7f64-4a32-88f0-301723cb221b-kube-api-access-b5rp2" (OuterVolumeSpecName: "kube-api-access-b5rp2") pod "7e6c07e9-7f64-4a32-88f0-301723cb221b" (UID: "7e6c07e9-7f64-4a32-88f0-301723cb221b"). InnerVolumeSpecName "kube-api-access-b5rp2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:41:43 crc kubenswrapper[4755]: I1210 15:41:43.034510 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a56f7f7-a7ff-44b8-8253-2cbdaa9f0e91-kube-api-access-jdwdt" (OuterVolumeSpecName: "kube-api-access-jdwdt") pod "2a56f7f7-a7ff-44b8-8253-2cbdaa9f0e91" (UID: "2a56f7f7-a7ff-44b8-8253-2cbdaa9f0e91"). InnerVolumeSpecName "kube-api-access-jdwdt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:41:43 crc kubenswrapper[4755]: I1210 15:41:43.063821 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 10 15:41:43 crc kubenswrapper[4755]: W1210 15:41:43.064804 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfb480bc7_6936_4208_964b_44cffd08f907.slice/crio-8dbf06f3fe3aa83390c14d7b774f44242e2307287001500d0d87e698af07714f WatchSource:0}: Error finding container 8dbf06f3fe3aa83390c14d7b774f44242e2307287001500d0d87e698af07714f: Status 404 returned error can't find the container with id 8dbf06f3fe3aa83390c14d7b774f44242e2307287001500d0d87e698af07714f Dec 10 15:41:43 crc kubenswrapper[4755]: I1210 15:41:43.130740 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdwdt\" (UniqueName: \"kubernetes.io/projected/2a56f7f7-a7ff-44b8-8253-2cbdaa9f0e91-kube-api-access-jdwdt\") on node \"crc\" DevicePath \"\"" Dec 10 15:41:43 crc kubenswrapper[4755]: I1210 15:41:43.130832 4755 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a56f7f7-a7ff-44b8-8253-2cbdaa9f0e91-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:41:43 crc kubenswrapper[4755]: I1210 15:41:43.130847 4755 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2a56f7f7-a7ff-44b8-8253-2cbdaa9f0e91-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 10 15:41:43 crc kubenswrapper[4755]: I1210 15:41:43.130879 4755 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e6c07e9-7f64-4a32-88f0-301723cb221b-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:41:43 crc kubenswrapper[4755]: I1210 15:41:43.130889 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5rp2\" (UniqueName: \"kubernetes.io/projected/7e6c07e9-7f64-4a32-88f0-301723cb221b-kube-api-access-b5rp2\") on node \"crc\" DevicePath \"\"" Dec 10 15:41:43 crc kubenswrapper[4755]: I1210 15:41:43.173604 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-666b6646f7-prnhr" event={"ID":"5bf05873-62f3-4a0f-b58e-ec6346b5f057","Type":"ContainerStarted","Data":"ee523c1a4d94bf209f45bd57aeaccdd1efe224f3f4353ff77cc3ff30068f80a8"} Dec 10 15:41:43 crc kubenswrapper[4755]: I1210 15:41:43.175113 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"376461c9-8e89-4c8c-bcef-6a873320a293","Type":"ContainerStarted","Data":"9723480bf7f2c7b2d87d2489169687d7094d842b5e0f6f9953722bd544ea1391"} Dec 10 15:41:43 crc kubenswrapper[4755]: I1210 15:41:43.176276 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"89e8722f-e9fc-4850-bb96-e51f9859805e","Type":"ContainerStarted","Data":"ac185573f47c8f3f8602e9b28556958174e3d080e2d98f1b571d4f62b583f2fe"} Dec 10 15:41:43 crc kubenswrapper[4755]: I1210 15:41:43.177229 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"48b9cc99-2595-445c-aca6-b13972e95324","Type":"ContainerStarted","Data":"95ddfccb91aec1a255d8c328cf2dbe0af92f822557cd8dad5499c40b236fbc64"} Dec 10 15:41:43 crc kubenswrapper[4755]: I1210 15:41:43.180103 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-zm77c" event={"ID":"7e6c07e9-7f64-4a32-88f0-301723cb221b","Type":"ContainerDied","Data":"627d1a2723990ba512e2e57be3e0c054b618944b99abfc67b013bdda6676596e"} Dec 10 15:41:43 crc kubenswrapper[4755]: I1210 15:41:43.180137 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-zm77c" Dec 10 15:41:43 crc kubenswrapper[4755]: I1210 15:41:43.183317 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c0cfe6de-3c35-486c-a767-35484b3a0f3d","Type":"ContainerStarted","Data":"5fde6de447c94fd2adf935dfa3cfe29df823446de76f5d7515cc0766ecb2e8f3"} Dec 10 15:41:43 crc kubenswrapper[4755]: I1210 15:41:43.184579 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"fb480bc7-6936-4208-964b-44cffd08f907","Type":"ContainerStarted","Data":"8dbf06f3fe3aa83390c14d7b774f44242e2307287001500d0d87e698af07714f"} Dec 10 15:41:43 crc kubenswrapper[4755]: I1210 15:41:43.186452 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-frv75" event={"ID":"2a56f7f7-a7ff-44b8-8253-2cbdaa9f0e91","Type":"ContainerDied","Data":"d65cf09873ebf3ece30094293de4bc1df10173929fb94670f9aca3897c151b2d"} Dec 10 15:41:43 crc kubenswrapper[4755]: I1210 15:41:43.186608 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-frv75" Dec 10 15:41:43 crc kubenswrapper[4755]: I1210 15:41:43.262643 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-zm77c"] Dec 10 15:41:43 crc kubenswrapper[4755]: I1210 15:41:43.281618 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-zm77c"] Dec 10 15:41:43 crc kubenswrapper[4755]: I1210 15:41:43.299556 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-frv75"] Dec 10 15:41:43 crc kubenswrapper[4755]: I1210 15:41:43.306911 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-frv75"] Dec 10 15:41:43 crc kubenswrapper[4755]: I1210 15:41:43.482135 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-querier-5467947bf7-qpg72"] Dec 10 15:41:43 crc kubenswrapper[4755]: W1210 15:41:43.496169 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod251dc547_e1a7_418e_95fd_6b7e8e5c5d35.slice/crio-dc452af1c9a792fdf759eae41abe656ab9a955c8c54ebc6ae21f33337f32131e WatchSource:0}: Error finding container dc452af1c9a792fdf759eae41abe656ab9a955c8c54ebc6ae21f33337f32131e: Status 404 returned error can't find the container with id dc452af1c9a792fdf759eae41abe656ab9a955c8c54ebc6ae21f33337f32131e Dec 10 15:41:43 crc kubenswrapper[4755]: W1210 15:41:43.503230 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod60343e12_2433_4e98_9759_09d5e2b9d82b.slice/crio-4f9ec82ee4f1e523df2e79fb89440f8a104bd0cbc1654fd9de4ae3617654aebd WatchSource:0}: Error finding container 4f9ec82ee4f1e523df2e79fb89440f8a104bd0cbc1654fd9de4ae3617654aebd: Status 404 returned error can't find the container with id 4f9ec82ee4f1e523df2e79fb89440f8a104bd0cbc1654fd9de4ae3617654aebd Dec 10 15:41:43 crc kubenswrapper[4755]: I1210 15:41:43.503279 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 10 15:41:43 crc kubenswrapper[4755]: W1210 15:41:43.510107 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad77f530_dc0b_44ec_b4e2_c580cfe568fe.slice/crio-126166107f5d54b220a517f51d29771518c6a3d2213cab3a9ed72878151b4100 WatchSource:0}: Error finding container 126166107f5d54b220a517f51d29771518c6a3d2213cab3a9ed72878151b4100: Status 404 returned error can't find the container with id 126166107f5d54b220a517f51d29771518c6a3d2213cab3a9ed72878151b4100 Dec 10 15:41:43 crc kubenswrapper[4755]: I1210 15:41:43.520125 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 10 15:41:43 crc kubenswrapper[4755]: W1210 15:41:43.529767 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9c583d4_e5d0_4c13_9989_dea15920e9e6.slice/crio-30fa51463bb53c3292d4b01384154150079c0596efdb9017176712ee8530aaf5 WatchSource:0}: Error finding container 30fa51463bb53c3292d4b01384154150079c0596efdb9017176712ee8530aaf5: Status 404 returned error can't find the container with id 30fa51463bb53c3292d4b01384154150079c0596efdb9017176712ee8530aaf5 Dec 10 15:41:43 crc kubenswrapper[4755]: I1210 15:41:43.546410 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/cloudkitty-lokistack-distributor-664b687b54-qvsww"] Dec 10 15:41:43 crc kubenswrapper[4755]: I1210 15:41:43.555320 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-x9c7j"] Dec 10 15:41:43 crc kubenswrapper[4755]: I1210 15:41:43.663161 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-x972h"] Dec 10 15:41:43 crc kubenswrapper[4755]: W1210 15:41:43.665707 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b79f2f6_2414_4403_8c2e_b58f114d941a.slice/crio-d67933c1f8a1613176bffd0ad2b770321c9576e27d6b80fbe4eac794a1658246 WatchSource:0}: Error finding container d67933c1f8a1613176bffd0ad2b770321c9576e27d6b80fbe4eac794a1658246: Status 404 returned error can't find the container with id d67933c1f8a1613176bffd0ad2b770321c9576e27d6b80fbe4eac794a1658246 Dec 10 15:41:43 crc kubenswrapper[4755]: I1210 15:41:43.771587 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a56f7f7-a7ff-44b8-8253-2cbdaa9f0e91" path="/var/lib/kubelet/pods/2a56f7f7-a7ff-44b8-8253-2cbdaa9f0e91/volumes" Dec 10 15:41:43 crc kubenswrapper[4755]: I1210 15:41:43.772283 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e6c07e9-7f64-4a32-88f0-301723cb221b" path="/var/lib/kubelet/pods/7e6c07e9-7f64-4a32-88f0-301723cb221b/volumes" Dec 10 15:41:44 crc kubenswrapper[4755]: I1210 15:41:44.044244 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-bc75944f-sxgmq"] Dec 10 15:41:44 crc kubenswrapper[4755]: W1210 15:41:44.047626 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4e702de9_8dda_4370_b806_41083a70ac41.slice/crio-9ba179190b0b67cfb0cef24758b39736a4a2c9e4cefa71bda319413c31f2a208 WatchSource:0}: Error finding container 9ba179190b0b67cfb0cef24758b39736a4a2c9e4cefa71bda319413c31f2a208: Status 404 returned error can't find the container with id 9ba179190b0b67cfb0cef24758b39736a4a2c9e4cefa71bda319413c31f2a208 Dec 10 15:41:44 crc kubenswrapper[4755]: I1210 15:41:44.068748 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-bc75944f-hm8ng"] Dec 10 15:41:44 crc kubenswrapper[4755]: W1210 15:41:44.075076 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5a3871d_6b81_4b3d_9044_fcbcf437effb.slice/crio-984d8c9245c63dc4cbff19124f644f9e4fa34f4b3b6fb15b603ed23debeb4c7f WatchSource:0}: Error finding container 984d8c9245c63dc4cbff19124f644f9e4fa34f4b3b6fb15b603ed23debeb4c7f: Status 404 returned error can't find the container with id 984d8c9245c63dc4cbff19124f644f9e4fa34f4b3b6fb15b603ed23debeb4c7f Dec 10 15:41:44 crc kubenswrapper[4755]: E1210 15:41:44.075200 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:loki-query-frontend,Image:registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:06b83c3cbf0c5db4dd9812e046ca14189d18cf7b3c7f2f2c37aa705cc5f5deb7,Command:[],Args:[-target=query-frontend -config.file=/etc/loki/config/config.yaml -runtime-config.file=/etc/loki/config/runtime-config.yaml 
-config.expand-env=true],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:3100,Protocol:TCP,HostIP:,},ContainerPort{Name:grpclb,HostPort:0,ContainerPort:9095,Protocol:TCP,HostIP:,},ContainerPort{Name:healthchecks,HostPort:0,ContainerPort:3101,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/etc/loki/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-query-frontend-http,ReadOnly:false,MountPath:/var/run/tls/http/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-query-frontend-grpc,ReadOnly:false,MountPath:/var/run/tls/grpc/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-ca-bundle,ReadOnly:false,MountPath:/var/run/ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cfj4t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/loki/api/v1/status/buildinfo,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/loki/api/v1/status/buildinfo,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-lokistack-query-frontend-7c8cd744d9-dr466_openstack(8fbdd63a-fd88-4a37-85fb-08e7d21af574): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 10 15:41:44 crc kubenswrapper[4755]: E1210 15:41:44.077533 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-query-frontend\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-dr466" podUID="8fbdd63a-fd88-4a37-85fb-08e7d21af574" Dec 10 15:41:44 crc kubenswrapper[4755]: E1210 15:41:44.080248 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:loki-ingester,Image:registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:06b83c3cbf0c5db4dd9812e046ca14189d18cf7b3c7f2f2c37aa705cc5f5deb7,Command:[],Args:[-target=ingester -config.file=/etc/loki/config/config.yaml -runtime-config.file=/etc/loki/config/runtime-config.yaml 
-config.expand-env=true],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:3100,Protocol:TCP,HostIP:,},ContainerPort{Name:grpclb,HostPort:0,ContainerPort:9095,Protocol:TCP,HostIP:,},ContainerPort{Name:gossip-ring,HostPort:0,ContainerPort:7946,Protocol:TCP,HostIP:,},ContainerPort{Name:healthchecks,HostPort:0,ContainerPort:3101,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:AWS_ACCESS_KEY_ID,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cloudkitty-loki-s3,},Key:access_key_id,Optional:nil,},},},EnvVar{Name:AWS_ACCESS_KEY_SECRET,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cloudkitty-loki-s3,},Key:access_key_secret,Optional:nil,},},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/etc/loki/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:storage,ReadOnly:false,MountPath:/tmp/loki,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:wal,ReadOnly:false,MountPath:/tmp/wal,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-ingester-http,ReadOnly:false,MountPath:/var/run/tls/http/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-loki-s3,ReadOnly:false,MountPath:/etc/storage/secrets,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-ingester-grpc,ReadOnly:false,MountPath:/var/run/tls/grpc/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-ca-bundle,ReadOnly:false,MountPath:/var/run/ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rj9jd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/loki/api/v1/status/buildinfo,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
cloudkitty-lokistack-ingester-0_openstack(e5a3871d-6b81-4b3d-9044-fcbcf437effb): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 10 15:41:44 crc kubenswrapper[4755]: E1210 15:41:44.081443 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-ingester\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="e5a3871d-6b81-4b3d-9044-fcbcf437effb" Dec 10 15:41:44 crc kubenswrapper[4755]: I1210 15:41:44.087347 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-q6n4p"] Dec 10 15:41:44 crc kubenswrapper[4755]: I1210 15:41:44.098778 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-index-gateway-0"] Dec 10 15:41:44 crc kubenswrapper[4755]: I1210 15:41:44.111057 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"] Dec 10 15:41:44 crc kubenswrapper[4755]: I1210 15:41:44.127974 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-dr466"] Dec 10 15:41:44 crc kubenswrapper[4755]: W1210 15:41:44.129137 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd586c26d_c444_4202_b286_522cfc372f16.slice/crio-8cf5629344d90387ae0c3117d5fb73546ae5ad5d4c93d08fffc0425c256cbeeb WatchSource:0}: Error finding container 8cf5629344d90387ae0c3117d5fb73546ae5ad5d4c93d08fffc0425c256cbeeb: Status 404 returned error can't find the container with id 8cf5629344d90387ae0c3117d5fb73546ae5ad5d4c93d08fffc0425c256cbeeb Dec 10 15:41:44 crc kubenswrapper[4755]: I1210 15:41:44.133566 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-ingester-0"] Dec 10 15:41:44 crc kubenswrapper[4755]: I1210 15:41:44.148390 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 10 15:41:44 crc kubenswrapper[4755]: I1210 15:41:44.197804 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"251dc547-e1a7-418e-95fd-6b7e8e5c5d35","Type":"ContainerStarted","Data":"dc452af1c9a792fdf759eae41abe656ab9a955c8c54ebc6ae21f33337f32131e"} Dec 10 15:41:44 crc kubenswrapper[4755]: I1210 15:41:44.199802 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-index-gateway-0" event={"ID":"4e702de9-8dda-4370-b806-41083a70ac41","Type":"ContainerStarted","Data":"9ba179190b0b67cfb0cef24758b39736a4a2c9e4cefa71bda319413c31f2a208"} Dec 10 15:41:44 crc kubenswrapper[4755]: I1210 15:41:44.201037 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-q6n4p" event={"ID":"46b6df85-96b1-4583-a80f-97a5d980cc72","Type":"ContainerStarted","Data":"06d7bdfd495ffafab9ea863a38ef644dddcdaa7bc36a7ef41f60cc602d0400a6"} Dec 10 15:41:44 crc kubenswrapper[4755]: I1210 15:41:44.202389 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-ingester-0" event={"ID":"e5a3871d-6b81-4b3d-9044-fcbcf437effb","Type":"ContainerStarted","Data":"984d8c9245c63dc4cbff19124f644f9e4fa34f4b3b6fb15b603ed23debeb4c7f"} Dec 10 15:41:44 crc kubenswrapper[4755]: E1210 15:41:44.204949 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-ingester\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:06b83c3cbf0c5db4dd9812e046ca14189d18cf7b3c7f2f2c37aa705cc5f5deb7\\\"\"" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="e5a3871d-6b81-4b3d-9044-fcbcf437effb" Dec 10 15:41:44 crc kubenswrapper[4755]: I1210 15:41:44.205004 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-dr466" event={"ID":"8fbdd63a-fd88-4a37-85fb-08e7d21af574","Type":"ContainerStarted","Data":"253c31853c8d3267cc701ee660fc372b8a8d5745f09a3dffb2474a8dc57ba1a4"} Dec 10 15:41:44 crc kubenswrapper[4755]: I1210 15:41:44.206787 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-x972h" event={"ID":"7b79f2f6-2414-4403-8c2e-b58f114d941a","Type":"ContainerStarted","Data":"d67933c1f8a1613176bffd0ad2b770321c9576e27d6b80fbe4eac794a1658246"} Dec 10 15:41:44 crc kubenswrapper[4755]: E1210 15:41:44.208268 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-query-frontend\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:06b83c3cbf0c5db4dd9812e046ca14189d18cf7b3c7f2f2c37aa705cc5f5deb7\\\"\"" pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-dr466" podUID="8fbdd63a-fd88-4a37-85fb-08e7d21af574" Dec 10 15:41:44 crc kubenswrapper[4755]: I1210 15:41:44.209158 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-sxgmq" event={"ID":"614e240f-4195-4915-9e2e-d142c9df25bc","Type":"ContainerStarted","Data":"1f7ac041a8355516479dd9b6caf9b2fe987070eca13c3855e6099fee67198e2d"} Dec 10 15:41:44 crc kubenswrapper[4755]: I1210 15:41:44.214372 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-querier-5467947bf7-qpg72" event={"ID":"f9c583d4-e5d0-4c13-9989-dea15920e9e6","Type":"ContainerStarted","Data":"30fa51463bb53c3292d4b01384154150079c0596efdb9017176712ee8530aaf5"} Dec 10 15:41:44 crc kubenswrapper[4755]: I1210 15:41:44.216413 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-x9c7j" event={"ID":"64c097c3-2f2a-4d0d-b7cb-8d1c88ffd3eb","Type":"ContainerStarted","Data":"acfa5dacb4ede8923df9406faa907211fcd8f9813e4b5616d33ddfda0ba736b3"} Dec 10 15:41:44 crc kubenswrapper[4755]: I1210 15:41:44.220432 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-compactor-0" event={"ID":"31bbbf2c-5266-4ea7-8428-ed2607013a35","Type":"ContainerStarted","Data":"48cda1d5cfdee2309eec18d83883dabc9750c36459e282383ae0c2ea33194482"} Dec 10 15:41:44 crc kubenswrapper[4755]: I1210 15:41:44.239890 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"60343e12-2433-4e98-9759-09d5e2b9d82b","Type":"ContainerStarted","Data":"4f9ec82ee4f1e523df2e79fb89440f8a104bd0cbc1654fd9de4ae3617654aebd"} Dec 10 15:41:44 crc kubenswrapper[4755]: I1210 15:41:44.241631 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"d586c26d-c444-4202-b286-522cfc372f16","Type":"ContainerStarted","Data":"8cf5629344d90387ae0c3117d5fb73546ae5ad5d4c93d08fffc0425c256cbeeb"} Dec 10 15:41:44 crc kubenswrapper[4755]: I1210 15:41:44.243555 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-hm8ng" 
event={"ID":"145f8d2b-2e95-4227-8c76-1f3ee8eab754","Type":"ContainerStarted","Data":"14683c9e4e9c948b3cc4b13dfd85b47e6c8dabb8603f156fe6163d73a492cc15"} Dec 10 15:41:44 crc kubenswrapper[4755]: I1210 15:41:44.244830 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-distributor-664b687b54-qvsww" event={"ID":"ad77f530-dc0b-44ec-b4e2-c580cfe568fe","Type":"ContainerStarted","Data":"126166107f5d54b220a517f51d29771518c6a3d2213cab3a9ed72878151b4100"} Dec 10 15:41:44 crc kubenswrapper[4755]: I1210 15:41:44.295832 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 10 15:41:44 crc kubenswrapper[4755]: E1210 15:41:44.303549 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovsdbserver-nb,Image:quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified,Command:[/usr/bin/dumb-init],Args:[/usr/local/bin/container-scripts/setup.sh],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5f9h54bh67bh5d7h564h549h84h5c5h57fh64hfhb7h95h9dh5ddh79h6bh55dh699h554h58fh5cch7fh576h5b7h574h678h567h668h54chcdh66dq,ValueFrom:nil,},EnvVar{Name:OVN_LOGDIR,Value:/tmp,ValueFrom:nil,},EnvVar{Name:OVN_RUNDIR,Value:/tmp,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovndbcluster-nb-etc-ovn,ReadOnly:false,MountPath:/etc/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5fbkw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof 
ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/cleanup.sh],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:20,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-nb-0_openstack(e6228d01-72c0-4088-a51d-e90dc686a41a): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 10 15:41:44 crc kubenswrapper[4755]: E1210 15:41:44.305628 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openstack-network-exporter,Image:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,Command:[/app/openstack-network-exporter],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPENSTACK_NETWORK_EXPORTER_YAML,Value:/etc/config/openstack-network-exporter.yaml,ValueFrom:nil,},EnvVar{Name:CONFIG_HASH,Value:n5f9h54bh67bh5d7h564h549h84h5c5h57fh64hfhb7h95h9dh5ddh79h6bh55dh699h554h58fh5cch7fh576h5b7h574h678h567h668h54chcdh66dq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovnmetrics.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovnmetrics.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5fbkw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},Termina
tionMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-nb-0_openstack(e6228d01-72c0-4088-a51d-e90dc686a41a): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 10 15:41:44 crc kubenswrapper[4755]: E1210 15:41:44.306846 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ovsdbserver-nb\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"openstack-network-exporter\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack/ovsdbserver-nb-0" podUID="e6228d01-72c0-4088-a51d-e90dc686a41a" Dec 10 15:41:45 crc kubenswrapper[4755]: I1210 15:41:45.256787 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"e6228d01-72c0-4088-a51d-e90dc686a41a","Type":"ContainerStarted","Data":"c77685bf7355c8064312bc6c617df13d53fc1720126793fe45c64cc3a5f29bd1"} Dec 10 15:41:45 crc kubenswrapper[4755]: E1210 15:41:45.259435 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ovsdbserver-nb\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified\\\"\", failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"]" pod="openstack/ovsdbserver-nb-0" podUID="e6228d01-72c0-4088-a51d-e90dc686a41a" Dec 10 15:41:45 crc kubenswrapper[4755]: E1210 15:41:45.260782 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-query-frontend\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:06b83c3cbf0c5db4dd9812e046ca14189d18cf7b3c7f2f2c37aa705cc5f5deb7\\\"\"" pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-dr466" podUID="8fbdd63a-fd88-4a37-85fb-08e7d21af574" Dec 10 15:41:45 crc kubenswrapper[4755]: E1210 15:41:45.260832 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-ingester\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:06b83c3cbf0c5db4dd9812e046ca14189d18cf7b3c7f2f2c37aa705cc5f5deb7\\\"\"" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="e5a3871d-6b81-4b3d-9044-fcbcf437effb" Dec 10 15:41:46 crc kubenswrapper[4755]: I1210 15:41:46.291304 4755 generic.go:334] "Generic (PLEG): container finished" podID="5bf05873-62f3-4a0f-b58e-ec6346b5f057" containerID="4c5db85f5ea66d6d2a0147da188f893b9281638c0e2cd6e59060f23e27505081" exitCode=0 Dec 10 15:41:46 crc kubenswrapper[4755]: I1210 15:41:46.292389 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-prnhr" event={"ID":"5bf05873-62f3-4a0f-b58e-ec6346b5f057","Type":"ContainerDied","Data":"4c5db85f5ea66d6d2a0147da188f893b9281638c0e2cd6e59060f23e27505081"} Dec 10 15:41:46 crc kubenswrapper[4755]: E1210 15:41:46.296168 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ovsdbserver-nb\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified\\\"\", failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"]" pod="openstack/ovsdbserver-nb-0" podUID="e6228d01-72c0-4088-a51d-e90dc686a41a" Dec 10 15:41:58 crc kubenswrapper[4755]: E1210 15:41:58.085581 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift-logging/lokistack-gateway-rhel9@sha256:129dbfaf84e687adb93f670d2b46754fd2562513f6a45f79b37c7cc4c622f53e" Dec 10 15:41:58 crc kubenswrapper[4755]: E1210 15:41:58.086716 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:gateway,Image:registry.redhat.io/openshift-logging/lokistack-gateway-rhel9@sha256:129dbfaf84e687adb93f670d2b46754fd2562513f6a45f79b37c7cc4c622f53e,Command:[],Args:[--debug.name=lokistack-gateway --web.listen=0.0.0.0:8080 --web.internal.listen=0.0.0.0:8081 --web.healthchecks.url=https://localhost:8080 --log.level=warn --logs.read.endpoint=https://cloudkitty-lokistack-query-frontend-http.openstack.svc.cluster.local:3100 --logs.tail.endpoint=https://cloudkitty-lokistack-query-frontend-http.openstack.svc.cluster.local:3100 --logs.write.endpoint=https://cloudkitty-lokistack-distributor-http.openstack.svc.cluster.local:3100 --logs.write-timeout=4m0s --rbac.config=/etc/lokistack-gateway/rbac.yaml --tenants.config=/etc/lokistack-gateway/tenants.yaml --server.read-timeout=48s --server.write-timeout=6m0s --tls.min-version=VersionTLS12 --tls.server.cert-file=/var/run/tls/http/server/tls.crt --tls.server.key-file=/var/run/tls/http/server/tls.key --tls.healthchecks.server-ca-file=/var/run/ca/server/service-ca.crt --tls.healthchecks.server-name=cloudkitty-lokistack-gateway-http.openstack.svc.cluster.local --tls.internal.server.cert-file=/var/run/tls/http/server/tls.crt --tls.internal.server.key-file=/var/run/tls/http/server/tls.key --tls.min-version=VersionTLS12 --tls.cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --logs.tls.ca-file=/var/run/ca/upstream/service-ca.crt --logs.tls.cert-file=/var/run/tls/http/upstream/tls.crt --logs.tls.key-file=/var/run/tls/http/upstream/tls.key 
--tls.client-auth-type=RequestClientCert],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},ContainerPort{Name:public,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rbac,ReadOnly:true,MountPath:/etc/lokistack-gateway/rbac.yaml,SubPath:rbac.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tenants,ReadOnly:true,MountPath:/etc/lokistack-gateway/tenants.yaml,SubPath:tenants.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lokistack-gateway,ReadOnly:true,MountPath:/etc/lokistack-gateway/lokistack-gateway.rego,SubPath:lokistack-gateway.rego,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tls-secret,ReadOnly:true,MountPath:/var/run/tls/http/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-gateway-client-http,ReadOnly:true,MountPath:/var/run/tls/http/upstream,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-ca-bundle,ReadOnly:true,MountPath:/var/run/ca/upstream,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-gateway-ca-bundle,ReadOnly:true,MountPath:/var/run/ca/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-ca-bundle,ReadOnly:false,MountPath:/var/run/tenants-ca/cloudkitty,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-495wc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/live,Port:{0 8081 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 8081 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:12,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-lokistack-gateway-bc75944f-sxgmq_openstack(614e240f-4195-4915-9e2e-d142c9df25bc): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 10 15:41:58 crc kubenswrapper[4755]: E1210 15:41:58.088634 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gateway\" with ErrImagePull: \"rpc error: code 
= Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-sxgmq" podUID="614e240f-4195-4915-9e2e-d142c9df25bc" Dec 10 15:41:58 crc kubenswrapper[4755]: E1210 15:41:58.383740 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gateway\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/lokistack-gateway-rhel9@sha256:129dbfaf84e687adb93f670d2b46754fd2562513f6a45f79b37c7cc4c622f53e\\\"\"" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-sxgmq" podUID="614e240f-4195-4915-9e2e-d142c9df25bc" Dec 10 15:41:58 crc kubenswrapper[4755]: E1210 15:41:58.512152 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:1133c973c7472c665f910a722e19c8e2e27accb34b90fab67f14548627ce9c62" Dec 10 15:41:58 crc kubenswrapper[4755]: E1210 15:41:58.512710 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init-config-reloader,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:1133c973c7472c665f910a722e19c8e2e27accb34b90fab67f14548627ce9c62,Command:[/bin/prometheus-config-reloader],Args:[--watch-interval=0 --listen-address=:8081 --config-file=/etc/prometheus/config/prometheus.yaml.gz --config-envsubst-file=/etc/prometheus/config_out/prometheus.env.yaml --watched-dir=/etc/prometheus/rules/prometheus-metric-storage-rulefiles-0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:reloader-init,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:SHARD,Value:0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/etc/prometheus/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-out,ReadOnly:false,MountPath:/etc/prometheus/config_out,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-0,ReadOnly:false,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-0,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-54p6w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod prometheus-metric-storage-0_openstack(251dc547-e1a7-418e-95fd-6b7e8e5c5d35): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 10 15:41:58 crc kubenswrapper[4755]: E1210 15:41:58.516683 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init-config-reloader\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/prometheus-metric-storage-0" podUID="251dc547-e1a7-418e-95fd-6b7e8e5c5d35" Dec 10 15:41:58 crc kubenswrapper[4755]: E1210 15:41:58.521761 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift-logging/lokistack-gateway-rhel9@sha256:129dbfaf84e687adb93f670d2b46754fd2562513f6a45f79b37c7cc4c622f53e" Dec 10 15:41:58 crc kubenswrapper[4755]: E1210 15:41:58.521943 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:gateway,Image:registry.redhat.io/openshift-logging/lokistack-gateway-rhel9@sha256:129dbfaf84e687adb93f670d2b46754fd2562513f6a45f79b37c7cc4c622f53e,Command:[],Args:[--debug.name=lokistack-gateway --web.listen=0.0.0.0:8080 --web.internal.listen=0.0.0.0:8081 --web.healthchecks.url=https://localhost:8080 --log.level=warn --logs.read.endpoint=https://cloudkitty-lokistack-query-frontend-http.openstack.svc.cluster.local:3100 --logs.tail.endpoint=https://cloudkitty-lokistack-query-frontend-http.openstack.svc.cluster.local:3100 --logs.write.endpoint=https://cloudkitty-lokistack-distributor-http.openstack.svc.cluster.local:3100 --logs.write-timeout=4m0s --rbac.config=/etc/lokistack-gateway/rbac.yaml --tenants.config=/etc/lokistack-gateway/tenants.yaml --server.read-timeout=48s --server.write-timeout=6m0s --tls.min-version=VersionTLS12 --tls.server.cert-file=/var/run/tls/http/server/tls.crt --tls.server.key-file=/var/run/tls/http/server/tls.key 
--tls.healthchecks.server-ca-file=/var/run/ca/server/service-ca.crt --tls.healthchecks.server-name=cloudkitty-lokistack-gateway-http.openstack.svc.cluster.local --tls.internal.server.cert-file=/var/run/tls/http/server/tls.crt --tls.internal.server.key-file=/var/run/tls/http/server/tls.key --tls.min-version=VersionTLS12 --tls.cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --logs.tls.ca-file=/var/run/ca/upstream/service-ca.crt --logs.tls.cert-file=/var/run/tls/http/upstream/tls.crt --logs.tls.key-file=/var/run/tls/http/upstream/tls.key --tls.client-auth-type=RequestClientCert],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},ContainerPort{Name:public,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rbac,ReadOnly:true,MountPath:/etc/lokistack-gateway/rbac.yaml,SubPath:rbac.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tenants,ReadOnly:true,MountPath:/etc/lokistack-gateway/tenants.yaml,SubPath:tenants.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lokistack-gateway,ReadOnly:true,MountPath:/etc/lokistack-gateway/lokistack-gateway.rego,SubPath:lokistack-gateway.rego,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tls-secret,ReadOnly:true,MountPath:/var/run/tls/http/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-gateway-client-http,ReadOnly:true,MountPath:/var/run/tls/http/upstream,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-ca-bundle,ReadOnly:true,MountPath:/var/run/ca/upstream,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-gateway-ca-bundle,ReadOnly:true,MountPath:/var/run/ca/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-ca-bundle,ReadOnly:false,MountPath:/var/run/tenants-ca/cloudkitty,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jsqc6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/live,Port:{0 8081 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 8081 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:12,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-lokistack-gateway-bc75944f-hm8ng_openstack(145f8d2b-2e95-4227-8c76-1f3ee8eab754): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 10 15:41:58 crc kubenswrapper[4755]: E1210 15:41:58.523229 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gateway\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-hm8ng" podUID="145f8d2b-2e95-4227-8c76-1f3ee8eab754" Dec 10 15:41:58 crc kubenswrapper[4755]: E1210 15:41:58.535846 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Dec 10 15:41:58 crc kubenswrapper[4755]: E1210 15:41:58.536046 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g9z6s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(48b9cc99-2595-445c-aca6-b13972e95324): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 15:41:58 crc kubenswrapper[4755]: E1210 15:41:58.537416 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="48b9cc99-2595-445c-aca6-b13972e95324" Dec 10 15:41:58 crc kubenswrapper[4755]: E1210 15:41:58.548166 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:06b83c3cbf0c5db4dd9812e046ca14189d18cf7b3c7f2f2c37aa705cc5f5deb7" Dec 10 15:41:58 crc kubenswrapper[4755]: E1210 15:41:58.548378 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:loki-querier,Image:registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:06b83c3cbf0c5db4dd9812e046ca14189d18cf7b3c7f2f2c37aa705cc5f5deb7,Command:[],Args:[-target=querier -config.file=/etc/loki/config/config.yaml -runtime-config.file=/etc/loki/config/runtime-config.yaml 
-config.expand-env=true],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:3100,Protocol:TCP,HostIP:,},ContainerPort{Name:grpclb,HostPort:0,ContainerPort:9095,Protocol:TCP,HostIP:,},ContainerPort{Name:gossip-ring,HostPort:0,ContainerPort:7946,Protocol:TCP,HostIP:,},ContainerPort{Name:healthchecks,HostPort:0,ContainerPort:3101,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:AWS_ACCESS_KEY_ID,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cloudkitty-loki-s3,},Key:access_key_id,Optional:nil,},},},EnvVar{Name:AWS_ACCESS_KEY_SECRET,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cloudkitty-loki-s3,},Key:access_key_secret,Optional:nil,},},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/etc/loki/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-querier-http,ReadOnly:false,MountPath:/var/run/tls/http/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-loki-s3,ReadOnly:false,MountPath:/etc/storage/secrets,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-querier-grpc,ReadOnly:false,MountPath:/var/run/tls/grpc/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-ca-bundle,ReadOnly:false,MountPath:/var/run/ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hqp2z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/loki/api/v1/status/buildinfo,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-lokistack-querier-5467947bf7-qpg72_openstack(f9c583d4-e5d0-4c13-9989-dea15920e9e6): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 10 15:41:58 crc kubenswrapper[4755]: E1210 15:41:58.549785 
4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-querier\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/cloudkitty-lokistack-querier-5467947bf7-qpg72" podUID="f9c583d4-e5d0-4c13-9989-dea15920e9e6" Dec 10 15:41:59 crc kubenswrapper[4755]: E1210 15:41:59.200644 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached:current-podified" Dec 10 15:41:59 crc kubenswrapper[4755]: E1210 15:41:59.201150 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,Command:[/usr/bin/dumb-init -- /usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n5dch669h589h667hcfh56hfh665h694hdbh87h5f4hf5h557h5d4h658h699h67ch669h59h57ch8h5fbh68bh59hbch566h559hf9h7h564h5b4q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-56xq6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(b6fc3b5b-a2c7-404f-8435-6c2a72d2c4a8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 15:41:59 crc kubenswrapper[4755]: E1210 15:41:59.202646 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="b6fc3b5b-a2c7-404f-8435-6c2a72d2c4a8" Dec 10 15:41:59 crc kubenswrapper[4755]: E1210 15:41:59.386742 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" podUID="48b9cc99-2595-445c-aca6-b13972e95324" Dec 10 15:41:59 crc kubenswrapper[4755]: E1210 15:41:59.386844 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gateway\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/lokistack-gateway-rhel9@sha256:129dbfaf84e687adb93f670d2b46754fd2562513f6a45f79b37c7cc4c622f53e\\\"\"" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-hm8ng" podUID="145f8d2b-2e95-4227-8c76-1f3ee8eab754" Dec 10 15:41:59 crc kubenswrapper[4755]: E1210 15:41:59.386953 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached:current-podified\\\"\"" pod="openstack/memcached-0" podUID="b6fc3b5b-a2c7-404f-8435-6c2a72d2c4a8" Dec 10 15:41:59 crc kubenswrapper[4755]: E1210 15:41:59.387225 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init-config-reloader\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:1133c973c7472c665f910a722e19c8e2e27accb34b90fab67f14548627ce9c62\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="251dc547-e1a7-418e-95fd-6b7e8e5c5d35" Dec 10 15:41:59 crc kubenswrapper[4755]: E1210 15:41:59.387821 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-querier\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:06b83c3cbf0c5db4dd9812e046ca14189d18cf7b3c7f2f2c37aa705cc5f5deb7\\\"\"" pod="openstack/cloudkitty-lokistack-querier-5467947bf7-qpg72" podUID="f9c583d4-e5d0-4c13-9989-dea15920e9e6" Dec 10 15:42:00 crc kubenswrapper[4755]: E1210 15:42:00.468905 4755 log.go:32] 
"PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified" Dec 10 15:42:00 crc kubenswrapper[4755]: E1210 15:42:00.469070 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:ovsdb-server-init,Image:quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified,Command:[/usr/local/bin/container-scripts/init-ovsdb-server.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68bh79h648h59dh547h544h75h5bbh5dbh5c5h5dchdfhc8h5b8h545h68ch684hb6h5d8h54fh98hb9h5bdhb5h7h86h688h5f8h594h76hcbh685q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-ovs,ReadOnly:false,MountPath:/etc/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log,ReadOnly:false,MountPath:/var/log/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib,ReadOnly:false,MountPath:/var/lib/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gx5gh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-ovs-x972h_openstack(7b79f2f6-2414-4403-8c2e-b58f114d941a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 15:42:00 crc kubenswrapper[4755]: E1210 15:42:00.471202 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdb-server-init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-ovs-x972h" podUID="7b79f2f6-2414-4403-8c2e-b58f114d941a" Dec 10 15:42:00 crc kubenswrapper[4755]: E1210 15:42:00.730451 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified" Dec 10 15:42:00 crc kubenswrapper[4755]: E1210 15:42:00.730728 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:ovsdbserver-sb,Image:quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified,Command:[/usr/bin/dumb-init],Args:[/usr/local/bin/container-scripts/setup.sh],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n54h9dhd8h5c7h654h54ch7dh584h7fh55hb5h5d7hfdh5ddh575h654hcbh565hd4h544h7fh5cbh68dh5b6h59h5f4h9hcdh596hc8h587hd6q,ValueFrom:nil,},EnvVar{Name:OVN_LOGDIR,Value:/tmp,ValueFrom:nil,},EnvVar{Name:OVN_RUNDIR,Value:/tmp,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovndbcluster-sb-etc-ovn,ReadOnly:false,MountPath:/etc/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2tccl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/cleanup.sh],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:20,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
ovsdbserver-sb-0_openstack(d586c26d-c444-4202-b286-522cfc372f16): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 15:42:00 crc kubenswrapper[4755]: E1210 15:42:00.906367 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified" Dec 10 15:42:00 crc kubenswrapper[4755]: E1210 15:42:00.906876 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovn-controller,Image:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,Command:[ovn-controller --pidfile unix:/run/openvswitch/db.sock --certificate=/etc/pki/tls/certs/ovndb.crt --private-key=/etc/pki/tls/private/ovndb.key --ca-cert=/etc/pki/tls/certs/ovndbca.crt],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68bh79h648h59dh547h544h75h5bbh5dbh5c5h5dchdfhc8h5b8h545h68ch684hb6h5d8h54fh98hb9h5bdhb5h7h86h688h5f8h594h76hcbh685q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-ovn,ReadOnly:false,MountPath:/var/run/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log-ovn,ReadOnly:false,MountPath:/var/log/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7kgf8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_liveness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_readiness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/share/ovn/scripts/ovn-ctl 
stop_controller],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-q6n4p_openstack(46b6df85-96b1-4583-a80f-97a5d980cc72): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 15:42:00 crc kubenswrapper[4755]: E1210 15:42:00.908542 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-q6n4p" podUID="46b6df85-96b1-4583-a80f-97a5d980cc72" Dec 10 15:42:01 crc kubenswrapper[4755]: E1210 15:42:01.400753 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdb-server-init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified\\\"\"" pod="openstack/ovn-controller-ovs-x972h" podUID="7b79f2f6-2414-4403-8c2e-b58f114d941a" Dec 10 15:42:01 crc kubenswrapper[4755]: E1210 15:42:01.401021 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified\\\"\"" pod="openstack/ovn-controller-q6n4p" podUID="46b6df85-96b1-4583-a80f-97a5d980cc72" Dec 10 15:42:02 crc kubenswrapper[4755]: I1210 15:42:02.408914 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-prnhr" event={"ID":"5bf05873-62f3-4a0f-b58e-ec6346b5f057","Type":"ContainerStarted","Data":"993dec3ddbb9b3b290a9f0c0de760c18ef25843f76c6b98ae78c790d10369069"} Dec 10 15:42:02 crc kubenswrapper[4755]: I1210 15:42:02.409200 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-prnhr" Dec 10 15:42:02 crc kubenswrapper[4755]: I1210 15:42:02.410698 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-index-gateway-0" event={"ID":"4e702de9-8dda-4370-b806-41083a70ac41","Type":"ContainerStarted","Data":"7be77a14baab6fbe2e6434c6e42e63b8dd0c8bc56ea6074c47b153b2a21ed53e"} Dec 10 15:42:02 crc kubenswrapper[4755]: I1210 15:42:02.410836 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:42:02 crc kubenswrapper[4755]: I1210 15:42:02.432602 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-prnhr" podStartSLOduration=39.808315188 podStartE2EDuration="42.432585872s" podCreationTimestamp="2025-12-10 15:41:20 +0000 UTC" firstStartedPulling="2025-12-10 15:41:42.518307624 +0000 UTC m=+1099.119191256" lastFinishedPulling="2025-12-10 15:41:45.142578308 +0000 UTC m=+1101.743461940" observedRunningTime="2025-12-10 15:42:02.423057822 +0000 UTC m=+1119.023941464" 
watchObservedRunningTime="2025-12-10 15:42:02.432585872 +0000 UTC m=+1119.033469504" Dec 10 15:42:02 crc kubenswrapper[4755]: I1210 15:42:02.454110 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-index-gateway-0" podStartSLOduration=7.79247442 podStartE2EDuration="24.454085847s" podCreationTimestamp="2025-12-10 15:41:38 +0000 UTC" firstStartedPulling="2025-12-10 15:41:44.055086312 +0000 UTC m=+1100.655969944" lastFinishedPulling="2025-12-10 15:42:00.716697739 +0000 UTC m=+1117.317581371" observedRunningTime="2025-12-10 15:42:02.444692921 +0000 UTC m=+1119.045576583" watchObservedRunningTime="2025-12-10 15:42:02.454085847 +0000 UTC m=+1119.054969489" Dec 10 15:42:03 crc kubenswrapper[4755]: E1210 15:42:03.064833 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Dec 10 15:42:03 crc kubenswrapper[4755]: E1210 15:42:03.064897 4755 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Dec 10 15:42:03 crc kubenswrapper[4755]: E1210 15:42:03.065056 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ltt7r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(60343e12-2433-4e98-9759-09d5e2b9d82b): ErrImagePull: rpc error: code = Canceled desc = copying system 
image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 10 15:42:03 crc kubenswrapper[4755]: E1210 15:42:03.066518 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="60343e12-2433-4e98-9759-09d5e2b9d82b" Dec 10 15:42:03 crc kubenswrapper[4755]: I1210 15:42:03.424078 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-distributor-664b687b54-qvsww" event={"ID":"ad77f530-dc0b-44ec-b4e2-c580cfe568fe","Type":"ContainerStarted","Data":"e118e9654dcdb3ccbeee519bdf5ed84674da1321f88271483d3036cac60db4fe"} Dec 10 15:42:03 crc kubenswrapper[4755]: I1210 15:42:03.425202 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-distributor-664b687b54-qvsww" Dec 10 15:42:03 crc kubenswrapper[4755]: I1210 15:42:03.428244 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c0cfe6de-3c35-486c-a767-35484b3a0f3d","Type":"ContainerStarted","Data":"d7848130230035e9ec4b5820d112e255ca28b1faab962e2b6c3a43e27740157c"} Dec 10 15:42:03 crc kubenswrapper[4755]: I1210 15:42:03.430821 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-compactor-0" event={"ID":"31bbbf2c-5266-4ea7-8428-ed2607013a35","Type":"ContainerStarted","Data":"37de70e4c1ac2932d37f73fa7dba2bcd1de89ae938c16517de1bd1feac16cf52"} Dec 10 15:42:03 crc kubenswrapper[4755]: I1210 15:42:03.431267 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:42:03 crc kubenswrapper[4755]: I1210 15:42:03.434014 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-ingester-0" event={"ID":"e5a3871d-6b81-4b3d-9044-fcbcf437effb","Type":"ContainerStarted","Data":"3c409743e4ab358dd29fc43502060ffc2ead257951f6999bad0e97dcba14f061"} Dec 10 15:42:03 crc kubenswrapper[4755]: I1210 15:42:03.435282 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:42:03 crc kubenswrapper[4755]: I1210 15:42:03.440135 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-dr466" event={"ID":"8fbdd63a-fd88-4a37-85fb-08e7d21af574","Type":"ContainerStarted","Data":"4e315c5d02bd4b65abbb32f80d628db448bc67df9957d09b0d80d70ea9b98178"} Dec 10 15:42:03 crc kubenswrapper[4755]: I1210 15:42:03.441606 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-dr466" Dec 10 15:42:03 crc kubenswrapper[4755]: I1210 15:42:03.448527 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-distributor-664b687b54-qvsww" podStartSLOduration=8.458124378 podStartE2EDuration="26.448504759s" podCreationTimestamp="2025-12-10 15:41:37 +0000 UTC" firstStartedPulling="2025-12-10 15:41:43.52582934 +0000 UTC m=+1100.126712972" lastFinishedPulling="2025-12-10 15:42:01.516209721 +0000 UTC m=+1118.117093353" observedRunningTime="2025-12-10 15:42:03.444400221 +0000 UTC m=+1120.045283853" watchObservedRunningTime="2025-12-10 15:42:03.448504759 +0000 UTC m=+1120.049388391" Dec 10 15:42:03 crc kubenswrapper[4755]: 
I1210 15:42:03.455649 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"e6228d01-72c0-4088-a51d-e90dc686a41a","Type":"ContainerStarted","Data":"1901208028a1da4014e908ce23c92119e5d4d74bfeb581657eb989b8a67362c2"} Dec 10 15:42:03 crc kubenswrapper[4755]: E1210 15:42:03.456722 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="60343e12-2433-4e98-9759-09d5e2b9d82b" Dec 10 15:42:03 crc kubenswrapper[4755]: I1210 15:42:03.471209 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-dr466" podStartSLOduration=-9223372011.383587 podStartE2EDuration="25.471188985s" podCreationTimestamp="2025-12-10 15:41:38 +0000 UTC" firstStartedPulling="2025-12-10 15:41:44.075075417 +0000 UTC m=+1100.675959049" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:42:03.465797753 +0000 UTC m=+1120.066681395" watchObservedRunningTime="2025-12-10 15:42:03.471188985 +0000 UTC m=+1120.072072617" Dec 10 15:42:03 crc kubenswrapper[4755]: I1210 15:42:03.498090 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-compactor-0" podStartSLOduration=8.052905852 podStartE2EDuration="25.497975309s" podCreationTimestamp="2025-12-10 15:41:38 +0000 UTC" firstStartedPulling="2025-12-10 15:41:44.071188375 +0000 UTC m=+1100.672072007" lastFinishedPulling="2025-12-10 15:42:01.516257832 +0000 UTC m=+1118.117141464" observedRunningTime="2025-12-10 15:42:03.494577079 +0000 UTC m=+1120.095460721" watchObservedRunningTime="2025-12-10 15:42:03.497975309 +0000 UTC m=+1120.098858931" Dec 10 15:42:03 crc kubenswrapper[4755]: I1210 15:42:03.545830 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-ingester-0" podStartSLOduration=-9223372010.308966 podStartE2EDuration="26.545809976s" podCreationTimestamp="2025-12-10 15:41:37 +0000 UTC" firstStartedPulling="2025-12-10 15:41:44.080100339 +0000 UTC m=+1100.680983971" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:42:03.534834547 +0000 UTC m=+1120.135718179" watchObservedRunningTime="2025-12-10 15:42:03.545809976 +0000 UTC m=+1120.146693618" Dec 10 15:42:04 crc kubenswrapper[4755]: I1210 15:42:04.470589 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"fb480bc7-6936-4208-964b-44cffd08f907","Type":"ContainerStarted","Data":"05050464a8e3abb66cfa4fb28127a52df0fde7cacd0e16b4c4b9c9d38958867c"} Dec 10 15:42:04 crc kubenswrapper[4755]: I1210 15:42:04.473097 4755 generic.go:334] "Generic (PLEG): container finished" podID="64c097c3-2f2a-4d0d-b7cb-8d1c88ffd3eb" containerID="eaf1407b902788e875592cc2bf3fbc8c5ec07157c1bf147ffc0f7d9859716ee6" exitCode=0 Dec 10 15:42:04 crc kubenswrapper[4755]: I1210 15:42:04.473170 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-x9c7j" event={"ID":"64c097c3-2f2a-4d0d-b7cb-8d1c88ffd3eb","Type":"ContainerDied","Data":"eaf1407b902788e875592cc2bf3fbc8c5ec07157c1bf147ffc0f7d9859716ee6"} Dec 10 15:42:05 crc kubenswrapper[4755]: I1210 15:42:05.481836 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/alertmanager-metric-storage-0" event={"ID":"376461c9-8e89-4c8c-bcef-6a873320a293","Type":"ContainerStarted","Data":"24ca303001748c15f28e9354e34cbb0de1a5b506c259876b8178ca415dce3162"} Dec 10 15:42:05 crc kubenswrapper[4755]: I1210 15:42:05.485813 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"89e8722f-e9fc-4850-bb96-e51f9859805e","Type":"ContainerStarted","Data":"2737c09a7a60eb8a396709c9839d92a46f9c8e8d9ca2c58a8da58c76ff81fbda"} Dec 10 15:42:06 crc kubenswrapper[4755]: I1210 15:42:06.148645 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-666b6646f7-prnhr" Dec 10 15:42:08 crc kubenswrapper[4755]: E1210 15:42:08.316250 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-sb\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovsdbserver-sb-0" podUID="d586c26d-c444-4202-b286-522cfc372f16" Dec 10 15:42:08 crc kubenswrapper[4755]: I1210 15:42:08.515338 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"e6228d01-72c0-4088-a51d-e90dc686a41a","Type":"ContainerStarted","Data":"766f191e23b056eba27fb67685c7f906b1d93fc750371fef31dc2a90c1489f64"} Dec 10 15:42:08 crc kubenswrapper[4755]: I1210 15:42:08.517537 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-x9c7j" event={"ID":"64c097c3-2f2a-4d0d-b7cb-8d1c88ffd3eb","Type":"ContainerStarted","Data":"6c9dbd85753e1e8337f64c26697544072559ef133d2501464562af1aefbb80bd"} Dec 10 15:42:08 crc kubenswrapper[4755]: I1210 15:42:08.517866 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-x9c7j" Dec 10 15:42:08 crc kubenswrapper[4755]: I1210 15:42:08.519718 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"d586c26d-c444-4202-b286-522cfc372f16","Type":"ContainerStarted","Data":"e9e6f4e1383d65de06a290b0736f2242cc7a923694b1c29a9abcb704d0ef2a7d"} Dec 10 15:42:08 crc kubenswrapper[4755]: E1210 15:42:08.521535 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-sb\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="d586c26d-c444-4202-b286-522cfc372f16" Dec 10 15:42:08 crc kubenswrapper[4755]: I1210 15:42:08.539395 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=11.682757789 podStartE2EDuration="35.539380948s" podCreationTimestamp="2025-12-10 15:41:33 +0000 UTC" firstStartedPulling="2025-12-10 15:41:44.303399765 +0000 UTC m=+1100.904283397" lastFinishedPulling="2025-12-10 15:42:08.160022924 +0000 UTC m=+1124.760906556" observedRunningTime="2025-12-10 15:42:08.537189361 +0000 UTC m=+1125.138073003" watchObservedRunningTime="2025-12-10 15:42:08.539380948 +0000 UTC m=+1125.140264580" Dec 10 15:42:08 crc kubenswrapper[4755]: I1210 15:42:08.566145 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-x9c7j" podStartSLOduration=29.423678195 podStartE2EDuration="47.566123541s" podCreationTimestamp="2025-12-10 15:41:21 +0000 UTC" firstStartedPulling="2025-12-10 15:41:43.543202166 +0000 UTC m=+1100.144085798" lastFinishedPulling="2025-12-10 
15:42:01.685647512 +0000 UTC m=+1118.286531144" observedRunningTime="2025-12-10 15:42:08.556326334 +0000 UTC m=+1125.157209966" watchObservedRunningTime="2025-12-10 15:42:08.566123541 +0000 UTC m=+1125.167007173" Dec 10 15:42:09 crc kubenswrapper[4755]: I1210 15:42:09.531869 4755 generic.go:334] "Generic (PLEG): container finished" podID="c0cfe6de-3c35-486c-a767-35484b3a0f3d" containerID="d7848130230035e9ec4b5820d112e255ca28b1faab962e2b6c3a43e27740157c" exitCode=0 Dec 10 15:42:09 crc kubenswrapper[4755]: I1210 15:42:09.531954 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c0cfe6de-3c35-486c-a767-35484b3a0f3d","Type":"ContainerDied","Data":"d7848130230035e9ec4b5820d112e255ca28b1faab962e2b6c3a43e27740157c"} Dec 10 15:42:09 crc kubenswrapper[4755]: E1210 15:42:09.535487 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-sb\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="d586c26d-c444-4202-b286-522cfc372f16" Dec 10 15:42:10 crc kubenswrapper[4755]: I1210 15:42:10.300699 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Dec 10 15:42:10 crc kubenswrapper[4755]: I1210 15:42:10.359302 4755 patch_prober.go:28] interesting pod/machine-config-daemon-ggt8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:42:10 crc kubenswrapper[4755]: I1210 15:42:10.359380 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:42:10 crc kubenswrapper[4755]: I1210 15:42:10.546119 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c0cfe6de-3c35-486c-a767-35484b3a0f3d","Type":"ContainerStarted","Data":"6479e5ec0334e05b41fc4cd3f1884de15c86c34b1625f1a5cdd6b4253b28df02"} Dec 10 15:42:10 crc kubenswrapper[4755]: I1210 15:42:10.792282 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=29.884508447 podStartE2EDuration="47.792255918s" podCreationTimestamp="2025-12-10 15:41:23 +0000 UTC" firstStartedPulling="2025-12-10 15:41:42.979498348 +0000 UTC m=+1099.580381980" lastFinishedPulling="2025-12-10 15:42:00.887245819 +0000 UTC m=+1117.488129451" observedRunningTime="2025-12-10 15:42:10.571926801 +0000 UTC m=+1127.172810463" watchObservedRunningTime="2025-12-10 15:42:10.792255918 +0000 UTC m=+1127.393139580" Dec 10 15:42:11 crc kubenswrapper[4755]: I1210 15:42:11.301407 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Dec 10 15:42:11 crc kubenswrapper[4755]: I1210 15:42:11.345646 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Dec 10 15:42:11 crc kubenswrapper[4755]: I1210 15:42:11.592501 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Dec 10 15:42:11 crc 
kubenswrapper[4755]: I1210 15:42:11.910794 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-x9c7j"] Dec 10 15:42:11 crc kubenswrapper[4755]: I1210 15:42:11.911328 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-x9c7j" podUID="64c097c3-2f2a-4d0d-b7cb-8d1c88ffd3eb" containerName="dnsmasq-dns" containerID="cri-o://6c9dbd85753e1e8337f64c26697544072559ef133d2501464562af1aefbb80bd" gracePeriod=10 Dec 10 15:42:11 crc kubenswrapper[4755]: I1210 15:42:11.961878 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-m8hr9"] Dec 10 15:42:11 crc kubenswrapper[4755]: I1210 15:42:11.964035 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-m8hr9" Dec 10 15:42:11 crc kubenswrapper[4755]: I1210 15:42:11.969139 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Dec 10 15:42:12 crc kubenswrapper[4755]: I1210 15:42:12.029209 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-m8hr9"] Dec 10 15:42:12 crc kubenswrapper[4755]: I1210 15:42:12.037220 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/129874e9-8e1c-4660-a365-5496f3148968-config\") pod \"dnsmasq-dns-5bf47b49b7-m8hr9\" (UID: \"129874e9-8e1c-4660-a365-5496f3148968\") " pod="openstack/dnsmasq-dns-5bf47b49b7-m8hr9" Dec 10 15:42:12 crc kubenswrapper[4755]: I1210 15:42:12.037350 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/129874e9-8e1c-4660-a365-5496f3148968-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-m8hr9\" (UID: \"129874e9-8e1c-4660-a365-5496f3148968\") " pod="openstack/dnsmasq-dns-5bf47b49b7-m8hr9" Dec 10 15:42:12 crc kubenswrapper[4755]: I1210 15:42:12.037391 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/129874e9-8e1c-4660-a365-5496f3148968-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-m8hr9\" (UID: \"129874e9-8e1c-4660-a365-5496f3148968\") " pod="openstack/dnsmasq-dns-5bf47b49b7-m8hr9" Dec 10 15:42:12 crc kubenswrapper[4755]: I1210 15:42:12.037416 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ljn5\" (UniqueName: \"kubernetes.io/projected/129874e9-8e1c-4660-a365-5496f3148968-kube-api-access-8ljn5\") pod \"dnsmasq-dns-5bf47b49b7-m8hr9\" (UID: \"129874e9-8e1c-4660-a365-5496f3148968\") " pod="openstack/dnsmasq-dns-5bf47b49b7-m8hr9" Dec 10 15:42:12 crc kubenswrapper[4755]: I1210 15:42:12.051636 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-lxfpz"] Dec 10 15:42:12 crc kubenswrapper[4755]: I1210 15:42:12.052870 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-lxfpz" Dec 10 15:42:12 crc kubenswrapper[4755]: I1210 15:42:12.055812 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Dec 10 15:42:12 crc kubenswrapper[4755]: I1210 15:42:12.089237 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-lxfpz"] Dec 10 15:42:12 crc kubenswrapper[4755]: I1210 15:42:12.146764 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/129874e9-8e1c-4660-a365-5496f3148968-config\") pod \"dnsmasq-dns-5bf47b49b7-m8hr9\" (UID: \"129874e9-8e1c-4660-a365-5496f3148968\") " pod="openstack/dnsmasq-dns-5bf47b49b7-m8hr9" Dec 10 15:42:12 crc kubenswrapper[4755]: I1210 15:42:12.146942 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bg2x6\" (UniqueName: \"kubernetes.io/projected/a2d233ea-7ff9-4ce1-ada7-40d66f801cea-kube-api-access-bg2x6\") pod \"ovn-controller-metrics-lxfpz\" (UID: \"a2d233ea-7ff9-4ce1-ada7-40d66f801cea\") " pod="openstack/ovn-controller-metrics-lxfpz" Dec 10 15:42:12 crc kubenswrapper[4755]: I1210 15:42:12.147104 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2d233ea-7ff9-4ce1-ada7-40d66f801cea-config\") pod \"ovn-controller-metrics-lxfpz\" (UID: \"a2d233ea-7ff9-4ce1-ada7-40d66f801cea\") " pod="openstack/ovn-controller-metrics-lxfpz" Dec 10 15:42:12 crc kubenswrapper[4755]: I1210 15:42:12.147204 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/a2d233ea-7ff9-4ce1-ada7-40d66f801cea-ovn-rundir\") pod \"ovn-controller-metrics-lxfpz\" (UID: \"a2d233ea-7ff9-4ce1-ada7-40d66f801cea\") " pod="openstack/ovn-controller-metrics-lxfpz" Dec 10 15:42:12 crc kubenswrapper[4755]: I1210 15:42:12.147320 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2d233ea-7ff9-4ce1-ada7-40d66f801cea-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-lxfpz\" (UID: \"a2d233ea-7ff9-4ce1-ada7-40d66f801cea\") " pod="openstack/ovn-controller-metrics-lxfpz" Dec 10 15:42:12 crc kubenswrapper[4755]: I1210 15:42:12.147487 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/129874e9-8e1c-4660-a365-5496f3148968-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-m8hr9\" (UID: \"129874e9-8e1c-4660-a365-5496f3148968\") " pod="openstack/dnsmasq-dns-5bf47b49b7-m8hr9" Dec 10 15:42:12 crc kubenswrapper[4755]: I1210 15:42:12.147598 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/a2d233ea-7ff9-4ce1-ada7-40d66f801cea-ovs-rundir\") pod \"ovn-controller-metrics-lxfpz\" (UID: \"a2d233ea-7ff9-4ce1-ada7-40d66f801cea\") " pod="openstack/ovn-controller-metrics-lxfpz" Dec 10 15:42:12 crc kubenswrapper[4755]: I1210 15:42:12.147701 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2d233ea-7ff9-4ce1-ada7-40d66f801cea-combined-ca-bundle\") pod \"ovn-controller-metrics-lxfpz\" (UID: 
\"a2d233ea-7ff9-4ce1-ada7-40d66f801cea\") " pod="openstack/ovn-controller-metrics-lxfpz" Dec 10 15:42:12 crc kubenswrapper[4755]: I1210 15:42:12.147822 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/129874e9-8e1c-4660-a365-5496f3148968-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-m8hr9\" (UID: \"129874e9-8e1c-4660-a365-5496f3148968\") " pod="openstack/dnsmasq-dns-5bf47b49b7-m8hr9" Dec 10 15:42:12 crc kubenswrapper[4755]: I1210 15:42:12.147975 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/129874e9-8e1c-4660-a365-5496f3148968-config\") pod \"dnsmasq-dns-5bf47b49b7-m8hr9\" (UID: \"129874e9-8e1c-4660-a365-5496f3148968\") " pod="openstack/dnsmasq-dns-5bf47b49b7-m8hr9" Dec 10 15:42:12 crc kubenswrapper[4755]: I1210 15:42:12.147987 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ljn5\" (UniqueName: \"kubernetes.io/projected/129874e9-8e1c-4660-a365-5496f3148968-kube-api-access-8ljn5\") pod \"dnsmasq-dns-5bf47b49b7-m8hr9\" (UID: \"129874e9-8e1c-4660-a365-5496f3148968\") " pod="openstack/dnsmasq-dns-5bf47b49b7-m8hr9" Dec 10 15:42:12 crc kubenswrapper[4755]: I1210 15:42:12.148696 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/129874e9-8e1c-4660-a365-5496f3148968-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-m8hr9\" (UID: \"129874e9-8e1c-4660-a365-5496f3148968\") " pod="openstack/dnsmasq-dns-5bf47b49b7-m8hr9" Dec 10 15:42:12 crc kubenswrapper[4755]: I1210 15:42:12.148989 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/129874e9-8e1c-4660-a365-5496f3148968-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-m8hr9\" (UID: \"129874e9-8e1c-4660-a365-5496f3148968\") " pod="openstack/dnsmasq-dns-5bf47b49b7-m8hr9" Dec 10 15:42:12 crc kubenswrapper[4755]: I1210 15:42:12.167906 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ljn5\" (UniqueName: \"kubernetes.io/projected/129874e9-8e1c-4660-a365-5496f3148968-kube-api-access-8ljn5\") pod \"dnsmasq-dns-5bf47b49b7-m8hr9\" (UID: \"129874e9-8e1c-4660-a365-5496f3148968\") " pod="openstack/dnsmasq-dns-5bf47b49b7-m8hr9" Dec 10 15:42:12 crc kubenswrapper[4755]: I1210 15:42:12.243124 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-m8hr9"] Dec 10 15:42:12 crc kubenswrapper[4755]: I1210 15:42:12.244857 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-m8hr9" Dec 10 15:42:12 crc kubenswrapper[4755]: I1210 15:42:12.250719 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2d233ea-7ff9-4ce1-ada7-40d66f801cea-config\") pod \"ovn-controller-metrics-lxfpz\" (UID: \"a2d233ea-7ff9-4ce1-ada7-40d66f801cea\") " pod="openstack/ovn-controller-metrics-lxfpz" Dec 10 15:42:12 crc kubenswrapper[4755]: I1210 15:42:12.250756 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/a2d233ea-7ff9-4ce1-ada7-40d66f801cea-ovn-rundir\") pod \"ovn-controller-metrics-lxfpz\" (UID: \"a2d233ea-7ff9-4ce1-ada7-40d66f801cea\") " pod="openstack/ovn-controller-metrics-lxfpz" Dec 10 15:42:12 crc kubenswrapper[4755]: I1210 15:42:12.250795 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2d233ea-7ff9-4ce1-ada7-40d66f801cea-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-lxfpz\" (UID: \"a2d233ea-7ff9-4ce1-ada7-40d66f801cea\") " pod="openstack/ovn-controller-metrics-lxfpz" Dec 10 15:42:12 crc kubenswrapper[4755]: I1210 15:42:12.250854 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2d233ea-7ff9-4ce1-ada7-40d66f801cea-combined-ca-bundle\") pod \"ovn-controller-metrics-lxfpz\" (UID: \"a2d233ea-7ff9-4ce1-ada7-40d66f801cea\") " pod="openstack/ovn-controller-metrics-lxfpz" Dec 10 15:42:12 crc kubenswrapper[4755]: I1210 15:42:12.250873 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/a2d233ea-7ff9-4ce1-ada7-40d66f801cea-ovs-rundir\") pod \"ovn-controller-metrics-lxfpz\" (UID: \"a2d233ea-7ff9-4ce1-ada7-40d66f801cea\") " pod="openstack/ovn-controller-metrics-lxfpz" Dec 10 15:42:12 crc kubenswrapper[4755]: I1210 15:42:12.250929 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bg2x6\" (UniqueName: \"kubernetes.io/projected/a2d233ea-7ff9-4ce1-ada7-40d66f801cea-kube-api-access-bg2x6\") pod \"ovn-controller-metrics-lxfpz\" (UID: \"a2d233ea-7ff9-4ce1-ada7-40d66f801cea\") " pod="openstack/ovn-controller-metrics-lxfpz" Dec 10 15:42:12 crc kubenswrapper[4755]: I1210 15:42:12.251908 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2d233ea-7ff9-4ce1-ada7-40d66f801cea-config\") pod \"ovn-controller-metrics-lxfpz\" (UID: \"a2d233ea-7ff9-4ce1-ada7-40d66f801cea\") " pod="openstack/ovn-controller-metrics-lxfpz" Dec 10 15:42:12 crc kubenswrapper[4755]: I1210 15:42:12.252225 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/a2d233ea-7ff9-4ce1-ada7-40d66f801cea-ovs-rundir\") pod \"ovn-controller-metrics-lxfpz\" (UID: \"a2d233ea-7ff9-4ce1-ada7-40d66f801cea\") " pod="openstack/ovn-controller-metrics-lxfpz" Dec 10 15:42:12 crc kubenswrapper[4755]: I1210 15:42:12.252591 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/a2d233ea-7ff9-4ce1-ada7-40d66f801cea-ovn-rundir\") pod \"ovn-controller-metrics-lxfpz\" (UID: \"a2d233ea-7ff9-4ce1-ada7-40d66f801cea\") " pod="openstack/ovn-controller-metrics-lxfpz" Dec 10 15:42:12 crc 
kubenswrapper[4755]: I1210 15:42:12.255540 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2d233ea-7ff9-4ce1-ada7-40d66f801cea-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-lxfpz\" (UID: \"a2d233ea-7ff9-4ce1-ada7-40d66f801cea\") " pod="openstack/ovn-controller-metrics-lxfpz" Dec 10 15:42:12 crc kubenswrapper[4755]: I1210 15:42:12.256121 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2d233ea-7ff9-4ce1-ada7-40d66f801cea-combined-ca-bundle\") pod \"ovn-controller-metrics-lxfpz\" (UID: \"a2d233ea-7ff9-4ce1-ada7-40d66f801cea\") " pod="openstack/ovn-controller-metrics-lxfpz" Dec 10 15:42:12 crc kubenswrapper[4755]: I1210 15:42:12.283086 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bg2x6\" (UniqueName: \"kubernetes.io/projected/a2d233ea-7ff9-4ce1-ada7-40d66f801cea-kube-api-access-bg2x6\") pod \"ovn-controller-metrics-lxfpz\" (UID: \"a2d233ea-7ff9-4ce1-ada7-40d66f801cea\") " pod="openstack/ovn-controller-metrics-lxfpz" Dec 10 15:42:12 crc kubenswrapper[4755]: I1210 15:42:12.286557 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-mztj5"] Dec 10 15:42:12 crc kubenswrapper[4755]: I1210 15:42:12.288297 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-mztj5" Dec 10 15:42:12 crc kubenswrapper[4755]: I1210 15:42:12.290105 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Dec 10 15:42:12 crc kubenswrapper[4755]: I1210 15:42:12.299605 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-mztj5"] Dec 10 15:42:12 crc kubenswrapper[4755]: I1210 15:42:12.402704 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/32d65463-88f8-4c3f-910a-0c8c13a39013-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-mztj5\" (UID: \"32d65463-88f8-4c3f-910a-0c8c13a39013\") " pod="openstack/dnsmasq-dns-8554648995-mztj5" Dec 10 15:42:12 crc kubenswrapper[4755]: I1210 15:42:12.402955 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32d65463-88f8-4c3f-910a-0c8c13a39013-dns-svc\") pod \"dnsmasq-dns-8554648995-mztj5\" (UID: \"32d65463-88f8-4c3f-910a-0c8c13a39013\") " pod="openstack/dnsmasq-dns-8554648995-mztj5" Dec 10 15:42:12 crc kubenswrapper[4755]: I1210 15:42:12.402982 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32d65463-88f8-4c3f-910a-0c8c13a39013-config\") pod \"dnsmasq-dns-8554648995-mztj5\" (UID: \"32d65463-88f8-4c3f-910a-0c8c13a39013\") " pod="openstack/dnsmasq-dns-8554648995-mztj5" Dec 10 15:42:12 crc kubenswrapper[4755]: I1210 15:42:12.402997 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/32d65463-88f8-4c3f-910a-0c8c13a39013-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-mztj5\" (UID: \"32d65463-88f8-4c3f-910a-0c8c13a39013\") " pod="openstack/dnsmasq-dns-8554648995-mztj5" Dec 10 15:42:12 crc kubenswrapper[4755]: I1210 15:42:12.403112 4755 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2tcr\" (UniqueName: \"kubernetes.io/projected/32d65463-88f8-4c3f-910a-0c8c13a39013-kube-api-access-t2tcr\") pod \"dnsmasq-dns-8554648995-mztj5\" (UID: \"32d65463-88f8-4c3f-910a-0c8c13a39013\") " pod="openstack/dnsmasq-dns-8554648995-mztj5" Dec 10 15:42:12 crc kubenswrapper[4755]: I1210 15:42:12.504428 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2tcr\" (UniqueName: \"kubernetes.io/projected/32d65463-88f8-4c3f-910a-0c8c13a39013-kube-api-access-t2tcr\") pod \"dnsmasq-dns-8554648995-mztj5\" (UID: \"32d65463-88f8-4c3f-910a-0c8c13a39013\") " pod="openstack/dnsmasq-dns-8554648995-mztj5" Dec 10 15:42:12 crc kubenswrapper[4755]: I1210 15:42:12.504573 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/32d65463-88f8-4c3f-910a-0c8c13a39013-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-mztj5\" (UID: \"32d65463-88f8-4c3f-910a-0c8c13a39013\") " pod="openstack/dnsmasq-dns-8554648995-mztj5" Dec 10 15:42:12 crc kubenswrapper[4755]: I1210 15:42:12.504594 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32d65463-88f8-4c3f-910a-0c8c13a39013-dns-svc\") pod \"dnsmasq-dns-8554648995-mztj5\" (UID: \"32d65463-88f8-4c3f-910a-0c8c13a39013\") " pod="openstack/dnsmasq-dns-8554648995-mztj5" Dec 10 15:42:12 crc kubenswrapper[4755]: I1210 15:42:12.504616 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32d65463-88f8-4c3f-910a-0c8c13a39013-config\") pod \"dnsmasq-dns-8554648995-mztj5\" (UID: \"32d65463-88f8-4c3f-910a-0c8c13a39013\") " pod="openstack/dnsmasq-dns-8554648995-mztj5" Dec 10 15:42:12 crc kubenswrapper[4755]: I1210 15:42:12.504634 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/32d65463-88f8-4c3f-910a-0c8c13a39013-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-mztj5\" (UID: \"32d65463-88f8-4c3f-910a-0c8c13a39013\") " pod="openstack/dnsmasq-dns-8554648995-mztj5" Dec 10 15:42:12 crc kubenswrapper[4755]: I1210 15:42:12.506169 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32d65463-88f8-4c3f-910a-0c8c13a39013-dns-svc\") pod \"dnsmasq-dns-8554648995-mztj5\" (UID: \"32d65463-88f8-4c3f-910a-0c8c13a39013\") " pod="openstack/dnsmasq-dns-8554648995-mztj5" Dec 10 15:42:12 crc kubenswrapper[4755]: I1210 15:42:12.506322 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/32d65463-88f8-4c3f-910a-0c8c13a39013-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-mztj5\" (UID: \"32d65463-88f8-4c3f-910a-0c8c13a39013\") " pod="openstack/dnsmasq-dns-8554648995-mztj5" Dec 10 15:42:12 crc kubenswrapper[4755]: I1210 15:42:12.506495 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32d65463-88f8-4c3f-910a-0c8c13a39013-config\") pod \"dnsmasq-dns-8554648995-mztj5\" (UID: \"32d65463-88f8-4c3f-910a-0c8c13a39013\") " pod="openstack/dnsmasq-dns-8554648995-mztj5" Dec 10 15:42:12 crc kubenswrapper[4755]: I1210 15:42:12.506970 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/32d65463-88f8-4c3f-910a-0c8c13a39013-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-mztj5\" (UID: \"32d65463-88f8-4c3f-910a-0c8c13a39013\") " pod="openstack/dnsmasq-dns-8554648995-mztj5" Dec 10 15:42:12 crc kubenswrapper[4755]: I1210 15:42:12.523658 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2tcr\" (UniqueName: \"kubernetes.io/projected/32d65463-88f8-4c3f-910a-0c8c13a39013-kube-api-access-t2tcr\") pod \"dnsmasq-dns-8554648995-mztj5\" (UID: \"32d65463-88f8-4c3f-910a-0c8c13a39013\") " pod="openstack/dnsmasq-dns-8554648995-mztj5" Dec 10 15:42:12 crc kubenswrapper[4755]: I1210 15:42:12.531270 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-lxfpz" Dec 10 15:42:12 crc kubenswrapper[4755]: I1210 15:42:12.565449 4755 generic.go:334] "Generic (PLEG): container finished" podID="376461c9-8e89-4c8c-bcef-6a873320a293" containerID="24ca303001748c15f28e9354e34cbb0de1a5b506c259876b8178ca415dce3162" exitCode=0 Dec 10 15:42:12 crc kubenswrapper[4755]: I1210 15:42:12.565551 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"376461c9-8e89-4c8c-bcef-6a873320a293","Type":"ContainerDied","Data":"24ca303001748c15f28e9354e34cbb0de1a5b506c259876b8178ca415dce3162"} Dec 10 15:42:12 crc kubenswrapper[4755]: I1210 15:42:12.571842 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-x9c7j" event={"ID":"64c097c3-2f2a-4d0d-b7cb-8d1c88ffd3eb","Type":"ContainerDied","Data":"6c9dbd85753e1e8337f64c26697544072559ef133d2501464562af1aefbb80bd"} Dec 10 15:42:12 crc kubenswrapper[4755]: I1210 15:42:12.571742 4755 generic.go:334] "Generic (PLEG): container finished" podID="64c097c3-2f2a-4d0d-b7cb-8d1c88ffd3eb" containerID="6c9dbd85753e1e8337f64c26697544072559ef133d2501464562af1aefbb80bd" exitCode=0 Dec 10 15:42:12 crc kubenswrapper[4755]: I1210 15:42:12.577060 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"48b9cc99-2595-445c-aca6-b13972e95324","Type":"ContainerStarted","Data":"3f0cc9b840e97ce9262208d09cf9f22729f5bbe00e1f45cb47b31e4ded2d02b0"} Dec 10 15:42:12 crc kubenswrapper[4755]: I1210 15:42:12.632450 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-mztj5" Dec 10 15:42:12 crc kubenswrapper[4755]: I1210 15:42:12.840450 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-m8hr9"] Dec 10 15:42:13 crc kubenswrapper[4755]: I1210 15:42:13.298484 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-x9c7j" Dec 10 15:42:13 crc kubenswrapper[4755]: I1210 15:42:13.316292 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-lxfpz"] Dec 10 15:42:13 crc kubenswrapper[4755]: I1210 15:42:13.331333 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64c097c3-2f2a-4d0d-b7cb-8d1c88ffd3eb-config\") pod \"64c097c3-2f2a-4d0d-b7cb-8d1c88ffd3eb\" (UID: \"64c097c3-2f2a-4d0d-b7cb-8d1c88ffd3eb\") " Dec 10 15:42:13 crc kubenswrapper[4755]: I1210 15:42:13.331457 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64c097c3-2f2a-4d0d-b7cb-8d1c88ffd3eb-dns-svc\") pod \"64c097c3-2f2a-4d0d-b7cb-8d1c88ffd3eb\" (UID: \"64c097c3-2f2a-4d0d-b7cb-8d1c88ffd3eb\") " Dec 10 15:42:13 crc kubenswrapper[4755]: I1210 15:42:13.331574 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jgrjp\" (UniqueName: \"kubernetes.io/projected/64c097c3-2f2a-4d0d-b7cb-8d1c88ffd3eb-kube-api-access-jgrjp\") pod \"64c097c3-2f2a-4d0d-b7cb-8d1c88ffd3eb\" (UID: \"64c097c3-2f2a-4d0d-b7cb-8d1c88ffd3eb\") " Dec 10 15:42:13 crc kubenswrapper[4755]: W1210 15:42:13.333720 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda2d233ea_7ff9_4ce1_ada7_40d66f801cea.slice/crio-77ba000858a0806481351cd1e2af0c5078a01e58b685cd71f019a59907c6479e WatchSource:0}: Error finding container 77ba000858a0806481351cd1e2af0c5078a01e58b685cd71f019a59907c6479e: Status 404 returned error can't find the container with id 77ba000858a0806481351cd1e2af0c5078a01e58b685cd71f019a59907c6479e Dec 10 15:42:13 crc kubenswrapper[4755]: I1210 15:42:13.339748 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64c097c3-2f2a-4d0d-b7cb-8d1c88ffd3eb-kube-api-access-jgrjp" (OuterVolumeSpecName: "kube-api-access-jgrjp") pod "64c097c3-2f2a-4d0d-b7cb-8d1c88ffd3eb" (UID: "64c097c3-2f2a-4d0d-b7cb-8d1c88ffd3eb"). InnerVolumeSpecName "kube-api-access-jgrjp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:42:13 crc kubenswrapper[4755]: I1210 15:42:13.433760 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jgrjp\" (UniqueName: \"kubernetes.io/projected/64c097c3-2f2a-4d0d-b7cb-8d1c88ffd3eb-kube-api-access-jgrjp\") on node \"crc\" DevicePath \"\"" Dec 10 15:42:13 crc kubenswrapper[4755]: I1210 15:42:13.438035 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64c097c3-2f2a-4d0d-b7cb-8d1c88ffd3eb-config" (OuterVolumeSpecName: "config") pod "64c097c3-2f2a-4d0d-b7cb-8d1c88ffd3eb" (UID: "64c097c3-2f2a-4d0d-b7cb-8d1c88ffd3eb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:42:13 crc kubenswrapper[4755]: I1210 15:42:13.442304 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64c097c3-2f2a-4d0d-b7cb-8d1c88ffd3eb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "64c097c3-2f2a-4d0d-b7cb-8d1c88ffd3eb" (UID: "64c097c3-2f2a-4d0d-b7cb-8d1c88ffd3eb"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:42:13 crc kubenswrapper[4755]: I1210 15:42:13.536128 4755 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64c097c3-2f2a-4d0d-b7cb-8d1c88ffd3eb-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:42:13 crc kubenswrapper[4755]: I1210 15:42:13.536164 4755 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64c097c3-2f2a-4d0d-b7cb-8d1c88ffd3eb-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 10 15:42:13 crc kubenswrapper[4755]: I1210 15:42:13.741536 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-mztj5"] Dec 10 15:42:13 crc kubenswrapper[4755]: I1210 15:42:13.742098 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-sxgmq" event={"ID":"614e240f-4195-4915-9e2e-d142c9df25bc","Type":"ContainerStarted","Data":"fac4c67f3712b705b2d5b858ede9fd435f66dc6c2b919dfb7e17194b94910a9a"} Dec 10 15:42:13 crc kubenswrapper[4755]: I1210 15:42:13.742728 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-sxgmq" Dec 10 15:42:13 crc kubenswrapper[4755]: I1210 15:42:13.743639 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"251dc547-e1a7-418e-95fd-6b7e8e5c5d35","Type":"ContainerStarted","Data":"e4e9be106e57560a42f428ea3a4d1a14f2562c6147b323f800eb0df17b6b3319"} Dec 10 15:42:13 crc kubenswrapper[4755]: I1210 15:42:13.746128 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"b6fc3b5b-a2c7-404f-8435-6c2a72d2c4a8","Type":"ContainerStarted","Data":"f156a5371b9bc57cb097d13efc152a43d525f061a557b3571f38b8c6ad1be2f3"} Dec 10 15:42:13 crc kubenswrapper[4755]: I1210 15:42:13.746741 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Dec 10 15:42:13 crc kubenswrapper[4755]: I1210 15:42:13.749750 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-x9c7j" event={"ID":"64c097c3-2f2a-4d0d-b7cb-8d1c88ffd3eb","Type":"ContainerDied","Data":"acfa5dacb4ede8923df9406faa907211fcd8f9813e4b5616d33ddfda0ba736b3"} Dec 10 15:42:13 crc kubenswrapper[4755]: I1210 15:42:13.749787 4755 scope.go:117] "RemoveContainer" containerID="6c9dbd85753e1e8337f64c26697544072559ef133d2501464562af1aefbb80bd" Dec 10 15:42:13 crc kubenswrapper[4755]: I1210 15:42:13.749880 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-x9c7j" Dec 10 15:42:13 crc kubenswrapper[4755]: I1210 15:42:13.753425 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-lxfpz" event={"ID":"a2d233ea-7ff9-4ce1-ada7-40d66f801cea","Type":"ContainerStarted","Data":"77ba000858a0806481351cd1e2af0c5078a01e58b685cd71f019a59907c6479e"} Dec 10 15:42:13 crc kubenswrapper[4755]: W1210 15:42:13.754268 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32d65463_88f8_4c3f_910a_0c8c13a39013.slice/crio-788f0d37bdd31bccef41d5da3b75f720bf0a5efd178f9fe726e6fd7149a22ddc WatchSource:0}: Error finding container 788f0d37bdd31bccef41d5da3b75f720bf0a5efd178f9fe726e6fd7149a22ddc: Status 404 returned error can't find the container with id 788f0d37bdd31bccef41d5da3b75f720bf0a5efd178f9fe726e6fd7149a22ddc Dec 10 15:42:13 crc kubenswrapper[4755]: I1210 15:42:13.772343 4755 generic.go:334] "Generic (PLEG): container finished" podID="129874e9-8e1c-4660-a365-5496f3148968" containerID="1d104395ac3258c8eedaf11b9ed257b1b33644e83481d1b21ded13e5816bd02c" exitCode=0 Dec 10 15:42:13 crc kubenswrapper[4755]: I1210 15:42:13.793969 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-sxgmq" podStartSLOduration=6.994101399 podStartE2EDuration="35.793946828s" podCreationTimestamp="2025-12-10 15:41:38 +0000 UTC" firstStartedPulling="2025-12-10 15:41:44.037731237 +0000 UTC m=+1100.638614869" lastFinishedPulling="2025-12-10 15:42:12.837576666 +0000 UTC m=+1129.438460298" observedRunningTime="2025-12-10 15:42:13.765668965 +0000 UTC m=+1130.366552597" watchObservedRunningTime="2025-12-10 15:42:13.793946828 +0000 UTC m=+1130.394830460" Dec 10 15:42:13 crc kubenswrapper[4755]: I1210 15:42:13.795411 4755 scope.go:117] "RemoveContainer" containerID="eaf1407b902788e875592cc2bf3fbc8c5ec07157c1bf147ffc0f7d9859716ee6" Dec 10 15:42:13 crc kubenswrapper[4755]: I1210 15:42:13.798942 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-hm8ng" event={"ID":"145f8d2b-2e95-4227-8c76-1f3ee8eab754","Type":"ContainerStarted","Data":"6dbc5523bfe7760f1e4d754bba1ea83694ed3f21bd02f816ade743f508382e9a"} Dec 10 15:42:13 crc kubenswrapper[4755]: I1210 15:42:13.798985 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-m8hr9" event={"ID":"129874e9-8e1c-4660-a365-5496f3148968","Type":"ContainerDied","Data":"1d104395ac3258c8eedaf11b9ed257b1b33644e83481d1b21ded13e5816bd02c"} Dec 10 15:42:13 crc kubenswrapper[4755]: I1210 15:42:13.799000 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-m8hr9" event={"ID":"129874e9-8e1c-4660-a365-5496f3148968","Type":"ContainerStarted","Data":"96e511371c42c6b71cff2eb467410a74c998a923599bdb86137711e01f25a62a"} Dec 10 15:42:13 crc kubenswrapper[4755]: I1210 15:42:13.799032 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-sxgmq" Dec 10 15:42:13 crc kubenswrapper[4755]: I1210 15:42:13.800404 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-hm8ng" Dec 10 15:42:13 crc kubenswrapper[4755]: I1210 15:42:13.820322 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=18.455398259 
podStartE2EDuration="48.820280509s" podCreationTimestamp="2025-12-10 15:41:25 +0000 UTC" firstStartedPulling="2025-12-10 15:41:42.132372036 +0000 UTC m=+1098.733255668" lastFinishedPulling="2025-12-10 15:42:12.497254286 +0000 UTC m=+1129.098137918" observedRunningTime="2025-12-10 15:42:13.810458142 +0000 UTC m=+1130.411341774" watchObservedRunningTime="2025-12-10 15:42:13.820280509 +0000 UTC m=+1130.421164141" Dec 10 15:42:13 crc kubenswrapper[4755]: I1210 15:42:13.826046 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-hm8ng" Dec 10 15:42:13 crc kubenswrapper[4755]: I1210 15:42:13.951704 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-x9c7j"] Dec 10 15:42:13 crc kubenswrapper[4755]: I1210 15:42:13.973008 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-x9c7j"] Dec 10 15:42:13 crc kubenswrapper[4755]: I1210 15:42:13.996228 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-hm8ng" podStartSLOduration=-9223372000.85857 podStartE2EDuration="35.996206331s" podCreationTimestamp="2025-12-10 15:41:38 +0000 UTC" firstStartedPulling="2025-12-10 15:41:44.047656677 +0000 UTC m=+1100.648540309" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:42:13.951925928 +0000 UTC m=+1130.552809560" watchObservedRunningTime="2025-12-10 15:42:13.996206331 +0000 UTC m=+1130.597089953" Dec 10 15:42:14 crc kubenswrapper[4755]: I1210 15:42:14.430814 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-m8hr9" Dec 10 15:42:14 crc kubenswrapper[4755]: I1210 15:42:14.538134 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ljn5\" (UniqueName: \"kubernetes.io/projected/129874e9-8e1c-4660-a365-5496f3148968-kube-api-access-8ljn5\") pod \"129874e9-8e1c-4660-a365-5496f3148968\" (UID: \"129874e9-8e1c-4660-a365-5496f3148968\") " Dec 10 15:42:14 crc kubenswrapper[4755]: I1210 15:42:14.538213 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/129874e9-8e1c-4660-a365-5496f3148968-config\") pod \"129874e9-8e1c-4660-a365-5496f3148968\" (UID: \"129874e9-8e1c-4660-a365-5496f3148968\") " Dec 10 15:42:14 crc kubenswrapper[4755]: I1210 15:42:14.538283 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/129874e9-8e1c-4660-a365-5496f3148968-ovsdbserver-nb\") pod \"129874e9-8e1c-4660-a365-5496f3148968\" (UID: \"129874e9-8e1c-4660-a365-5496f3148968\") " Dec 10 15:42:14 crc kubenswrapper[4755]: I1210 15:42:14.538384 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/129874e9-8e1c-4660-a365-5496f3148968-dns-svc\") pod \"129874e9-8e1c-4660-a365-5496f3148968\" (UID: \"129874e9-8e1c-4660-a365-5496f3148968\") " Dec 10 15:42:14 crc kubenswrapper[4755]: I1210 15:42:14.544001 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/129874e9-8e1c-4660-a365-5496f3148968-kube-api-access-8ljn5" (OuterVolumeSpecName: "kube-api-access-8ljn5") pod "129874e9-8e1c-4660-a365-5496f3148968" (UID: "129874e9-8e1c-4660-a365-5496f3148968"). 
InnerVolumeSpecName "kube-api-access-8ljn5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:42:14 crc kubenswrapper[4755]: I1210 15:42:14.557626 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/129874e9-8e1c-4660-a365-5496f3148968-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "129874e9-8e1c-4660-a365-5496f3148968" (UID: "129874e9-8e1c-4660-a365-5496f3148968"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:42:14 crc kubenswrapper[4755]: I1210 15:42:14.560936 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/129874e9-8e1c-4660-a365-5496f3148968-config" (OuterVolumeSpecName: "config") pod "129874e9-8e1c-4660-a365-5496f3148968" (UID: "129874e9-8e1c-4660-a365-5496f3148968"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:42:14 crc kubenswrapper[4755]: I1210 15:42:14.563952 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/129874e9-8e1c-4660-a365-5496f3148968-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "129874e9-8e1c-4660-a365-5496f3148968" (UID: "129874e9-8e1c-4660-a365-5496f3148968"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:42:14 crc kubenswrapper[4755]: I1210 15:42:14.641622 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ljn5\" (UniqueName: \"kubernetes.io/projected/129874e9-8e1c-4660-a365-5496f3148968-kube-api-access-8ljn5\") on node \"crc\" DevicePath \"\"" Dec 10 15:42:14 crc kubenswrapper[4755]: I1210 15:42:14.641674 4755 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/129874e9-8e1c-4660-a365-5496f3148968-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:42:14 crc kubenswrapper[4755]: I1210 15:42:14.641684 4755 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/129874e9-8e1c-4660-a365-5496f3148968-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 10 15:42:14 crc kubenswrapper[4755]: I1210 15:42:14.641692 4755 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/129874e9-8e1c-4660-a365-5496f3148968-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 10 15:42:14 crc kubenswrapper[4755]: I1210 15:42:14.793174 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-lxfpz" event={"ID":"a2d233ea-7ff9-4ce1-ada7-40d66f801cea","Type":"ContainerStarted","Data":"c06c2c0f914cfe3b447976e0ba762732842225d087f8ead6ccca80c0d79588e5"} Dec 10 15:42:14 crc kubenswrapper[4755]: I1210 15:42:14.804696 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-q6n4p" event={"ID":"46b6df85-96b1-4583-a80f-97a5d980cc72","Type":"ContainerStarted","Data":"01f35b66a4d7dfb7e5fa03fef96b1dda621cf58f96f343c2e34cfd32c2095c88"} Dec 10 15:42:14 crc kubenswrapper[4755]: I1210 15:42:14.805494 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-q6n4p" Dec 10 15:42:14 crc kubenswrapper[4755]: I1210 15:42:14.837780 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-m8hr9" event={"ID":"129874e9-8e1c-4660-a365-5496f3148968","Type":"ContainerDied","Data":"96e511371c42c6b71cff2eb467410a74c998a923599bdb86137711e01f25a62a"} Dec 10 
15:42:14 crc kubenswrapper[4755]: I1210 15:42:14.838123 4755 scope.go:117] "RemoveContainer" containerID="1d104395ac3258c8eedaf11b9ed257b1b33644e83481d1b21ded13e5816bd02c" Dec 10 15:42:14 crc kubenswrapper[4755]: I1210 15:42:14.838333 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-m8hr9" Dec 10 15:42:14 crc kubenswrapper[4755]: I1210 15:42:14.840097 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-lxfpz" podStartSLOduration=3.8400783880000002 podStartE2EDuration="3.840078388s" podCreationTimestamp="2025-12-10 15:42:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:42:14.833874745 +0000 UTC m=+1131.434758387" watchObservedRunningTime="2025-12-10 15:42:14.840078388 +0000 UTC m=+1131.440962020" Dec 10 15:42:14 crc kubenswrapper[4755]: I1210 15:42:14.862169 4755 generic.go:334] "Generic (PLEG): container finished" podID="32d65463-88f8-4c3f-910a-0c8c13a39013" containerID="0b07a868e55e37c9fc9e7065ab222ca3505acf032003beff0d2b8b01573b2cb9" exitCode=0 Dec 10 15:42:14 crc kubenswrapper[4755]: I1210 15:42:14.862528 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-mztj5" event={"ID":"32d65463-88f8-4c3f-910a-0c8c13a39013","Type":"ContainerDied","Data":"0b07a868e55e37c9fc9e7065ab222ca3505acf032003beff0d2b8b01573b2cb9"} Dec 10 15:42:14 crc kubenswrapper[4755]: I1210 15:42:14.862553 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-mztj5" event={"ID":"32d65463-88f8-4c3f-910a-0c8c13a39013","Type":"ContainerStarted","Data":"788f0d37bdd31bccef41d5da3b75f720bf0a5efd178f9fe726e6fd7149a22ddc"} Dec 10 15:42:14 crc kubenswrapper[4755]: I1210 15:42:14.893805 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-querier-5467947bf7-qpg72" event={"ID":"f9c583d4-e5d0-4c13-9989-dea15920e9e6","Type":"ContainerStarted","Data":"4164eb28ac9f29baeb1e602db84f2fe27c26cb04d570c22fd72e60c0a69e8dc2"} Dec 10 15:42:14 crc kubenswrapper[4755]: I1210 15:42:14.895683 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-q6n4p" podStartSLOduration=12.687887665 podStartE2EDuration="42.895666968s" podCreationTimestamp="2025-12-10 15:41:32 +0000 UTC" firstStartedPulling="2025-12-10 15:41:44.050909533 +0000 UTC m=+1100.651793165" lastFinishedPulling="2025-12-10 15:42:14.258688836 +0000 UTC m=+1130.859572468" observedRunningTime="2025-12-10 15:42:14.885973664 +0000 UTC m=+1131.486857296" watchObservedRunningTime="2025-12-10 15:42:14.895666968 +0000 UTC m=+1131.496550600" Dec 10 15:42:14 crc kubenswrapper[4755]: I1210 15:42:14.897509 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-querier-5467947bf7-qpg72" Dec 10 15:42:15 crc kubenswrapper[4755]: I1210 15:42:15.004563 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-querier-5467947bf7-qpg72" podStartSLOduration=-9223371999.85026 podStartE2EDuration="37.004515228s" podCreationTimestamp="2025-12-10 15:41:38 +0000 UTC" firstStartedPulling="2025-12-10 15:41:43.538206315 +0000 UTC m=+1100.139089947" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:42:14.993212091 +0000 UTC m=+1131.594095733" watchObservedRunningTime="2025-12-10 
15:42:15.004515228 +0000 UTC m=+1131.605398860" Dec 10 15:42:15 crc kubenswrapper[4755]: I1210 15:42:15.128097 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-m8hr9"] Dec 10 15:42:15 crc kubenswrapper[4755]: I1210 15:42:15.143290 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-m8hr9"] Dec 10 15:42:15 crc kubenswrapper[4755]: I1210 15:42:15.220904 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Dec 10 15:42:15 crc kubenswrapper[4755]: I1210 15:42:15.220968 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Dec 10 15:42:15 crc kubenswrapper[4755]: I1210 15:42:15.770925 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="129874e9-8e1c-4660-a365-5496f3148968" path="/var/lib/kubelet/pods/129874e9-8e1c-4660-a365-5496f3148968/volumes" Dec 10 15:42:15 crc kubenswrapper[4755]: I1210 15:42:15.771827 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64c097c3-2f2a-4d0d-b7cb-8d1c88ffd3eb" path="/var/lib/kubelet/pods/64c097c3-2f2a-4d0d-b7cb-8d1c88ffd3eb/volumes" Dec 10 15:42:15 crc kubenswrapper[4755]: I1210 15:42:15.929182 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-mztj5" event={"ID":"32d65463-88f8-4c3f-910a-0c8c13a39013","Type":"ContainerStarted","Data":"948b47c684c1989af9d3da1cd3e56931fc5a54264da22b4c16fe7da963b631f8"} Dec 10 15:42:15 crc kubenswrapper[4755]: I1210 15:42:15.929611 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-mztj5" Dec 10 15:42:15 crc kubenswrapper[4755]: I1210 15:42:15.950656 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-mztj5" podStartSLOduration=3.950634401 podStartE2EDuration="3.950634401s" podCreationTimestamp="2025-12-10 15:42:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:42:15.947203971 +0000 UTC m=+1132.548087613" watchObservedRunningTime="2025-12-10 15:42:15.950634401 +0000 UTC m=+1132.551518033" Dec 10 15:42:16 crc kubenswrapper[4755]: I1210 15:42:16.940713 4755 generic.go:334] "Generic (PLEG): container finished" podID="48b9cc99-2595-445c-aca6-b13972e95324" containerID="3f0cc9b840e97ce9262208d09cf9f22729f5bbe00e1f45cb47b31e4ded2d02b0" exitCode=0 Dec 10 15:42:16 crc kubenswrapper[4755]: I1210 15:42:16.940789 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"48b9cc99-2595-445c-aca6-b13972e95324","Type":"ContainerDied","Data":"3f0cc9b840e97ce9262208d09cf9f22729f5bbe00e1f45cb47b31e4ded2d02b0"} Dec 10 15:42:16 crc kubenswrapper[4755]: I1210 15:42:16.943459 4755 generic.go:334] "Generic (PLEG): container finished" podID="7b79f2f6-2414-4403-8c2e-b58f114d941a" containerID="d193384000344d3c4b912f0d707815410f5b783f5b5b1aad09d94626c20dc470" exitCode=0 Dec 10 15:42:16 crc kubenswrapper[4755]: I1210 15:42:16.943568 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-x972h" event={"ID":"7b79f2f6-2414-4403-8c2e-b58f114d941a","Type":"ContainerDied","Data":"d193384000344d3c4b912f0d707815410f5b783f5b5b1aad09d94626c20dc470"} Dec 10 15:42:16 crc kubenswrapper[4755]: I1210 15:42:16.948871 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/alertmanager-metric-storage-0" event={"ID":"376461c9-8e89-4c8c-bcef-6a873320a293","Type":"ContainerStarted","Data":"a5a2f8335c38d9fbb5af0cc8945f06b2b66114fa6846cae3c5d619d7a54a3370"} Dec 10 15:42:17 crc kubenswrapper[4755]: I1210 15:42:17.412026 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Dec 10 15:42:17 crc kubenswrapper[4755]: I1210 15:42:17.488218 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Dec 10 15:42:17 crc kubenswrapper[4755]: I1210 15:42:17.959298 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-x972h" event={"ID":"7b79f2f6-2414-4403-8c2e-b58f114d941a","Type":"ContainerStarted","Data":"50ab1aa0f6289ec3725613b5a3fe8dc840999f8ee98f328970e7171285383f49"} Dec 10 15:42:17 crc kubenswrapper[4755]: I1210 15:42:17.959359 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-x972h" event={"ID":"7b79f2f6-2414-4403-8c2e-b58f114d941a","Type":"ContainerStarted","Data":"f06eb43c0bb23fef1c18bd574ad7a88a88656a0c6d8259c1d77f3e7eda77a163"} Dec 10 15:42:17 crc kubenswrapper[4755]: I1210 15:42:17.959753 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-x972h" Dec 10 15:42:17 crc kubenswrapper[4755]: I1210 15:42:17.959934 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-x972h" Dec 10 15:42:17 crc kubenswrapper[4755]: I1210 15:42:17.961341 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"48b9cc99-2595-445c-aca6-b13972e95324","Type":"ContainerStarted","Data":"da4705476d3bcfcf89c32759b9a96c78cace74d87d983663ebec2013f1805c07"} Dec 10 15:42:17 crc kubenswrapper[4755]: I1210 15:42:17.989365 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-x972h" podStartSLOduration=13.454268778 podStartE2EDuration="45.989342806s" podCreationTimestamp="2025-12-10 15:41:32 +0000 UTC" firstStartedPulling="2025-12-10 15:41:43.667959213 +0000 UTC m=+1100.268842855" lastFinishedPulling="2025-12-10 15:42:16.203033251 +0000 UTC m=+1132.803916883" observedRunningTime="2025-12-10 15:42:17.985920371 +0000 UTC m=+1134.586804024" watchObservedRunningTime="2025-12-10 15:42:17.989342806 +0000 UTC m=+1134.590226438" Dec 10 15:42:18 crc kubenswrapper[4755]: I1210 15:42:18.005748 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=-9223371980.849045 podStartE2EDuration="56.005730114s" podCreationTimestamp="2025-12-10 15:41:22 +0000 UTC" firstStartedPulling="2025-12-10 15:41:42.599570528 +0000 UTC m=+1099.200454160" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:42:18.002868946 +0000 UTC m=+1134.603752578" watchObservedRunningTime="2025-12-10 15:42:18.005730114 +0000 UTC m=+1134.606613746" Dec 10 15:42:18 crc kubenswrapper[4755]: I1210 15:42:18.151349 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-distributor-664b687b54-qvsww" Dec 10 15:42:18 crc kubenswrapper[4755]: I1210 15:42:18.505553 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-dr466" Dec 10 15:42:19 crc kubenswrapper[4755]: I1210 15:42:19.353932 4755 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="e5a3871d-6b81-4b3d-9044-fcbcf437effb" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 10 15:42:19 crc kubenswrapper[4755]: I1210 15:42:19.565597 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:42:19 crc kubenswrapper[4755]: I1210 15:42:19.573564 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:42:19 crc kubenswrapper[4755]: I1210 15:42:19.977033 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"60343e12-2433-4e98-9759-09d5e2b9d82b","Type":"ContainerStarted","Data":"f459e970f94181348851f1bcbebbaaf108eded56e4be8fbd9873dc691cfcad1e"} Dec 10 15:42:19 crc kubenswrapper[4755]: I1210 15:42:19.977310 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Dec 10 15:42:19 crc kubenswrapper[4755]: I1210 15:42:19.978360 4755 generic.go:334] "Generic (PLEG): container finished" podID="251dc547-e1a7-418e-95fd-6b7e8e5c5d35" containerID="e4e9be106e57560a42f428ea3a4d1a14f2562c6147b323f800eb0df17b6b3319" exitCode=0 Dec 10 15:42:19 crc kubenswrapper[4755]: I1210 15:42:19.978394 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"251dc547-e1a7-418e-95fd-6b7e8e5c5d35","Type":"ContainerDied","Data":"e4e9be106e57560a42f428ea3a4d1a14f2562c6147b323f800eb0df17b6b3319"} Dec 10 15:42:19 crc kubenswrapper[4755]: I1210 15:42:19.997386 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=18.320156949 podStartE2EDuration="53.997369432s" podCreationTimestamp="2025-12-10 15:41:26 +0000 UTC" firstStartedPulling="2025-12-10 15:41:43.510183309 +0000 UTC m=+1100.111066941" lastFinishedPulling="2025-12-10 15:42:19.187395792 +0000 UTC m=+1135.788279424" observedRunningTime="2025-12-10 15:42:19.988850208 +0000 UTC m=+1136.589733830" watchObservedRunningTime="2025-12-10 15:42:19.997369432 +0000 UTC m=+1136.598253064" Dec 10 15:42:20 crc kubenswrapper[4755]: I1210 15:42:20.506590 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Dec 10 15:42:22 crc kubenswrapper[4755]: I1210 15:42:22.634686 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-mztj5" Dec 10 15:42:22 crc kubenswrapper[4755]: I1210 15:42:22.686292 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-prnhr"] Dec 10 15:42:22 crc kubenswrapper[4755]: I1210 15:42:22.686584 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-prnhr" podUID="5bf05873-62f3-4a0f-b58e-ec6346b5f057" containerName="dnsmasq-dns" containerID="cri-o://993dec3ddbb9b3b290a9f0c0de760c18ef25843f76c6b98ae78c790d10369069" gracePeriod=10 Dec 10 15:42:23 crc kubenswrapper[4755]: I1210 15:42:23.003390 4755 generic.go:334] "Generic (PLEG): container finished" podID="5bf05873-62f3-4a0f-b58e-ec6346b5f057" containerID="993dec3ddbb9b3b290a9f0c0de760c18ef25843f76c6b98ae78c790d10369069" exitCode=0 Dec 10 15:42:23 crc kubenswrapper[4755]: I1210 15:42:23.003442 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-prnhr" 
event={"ID":"5bf05873-62f3-4a0f-b58e-ec6346b5f057","Type":"ContainerDied","Data":"993dec3ddbb9b3b290a9f0c0de760c18ef25843f76c6b98ae78c790d10369069"} Dec 10 15:42:23 crc kubenswrapper[4755]: I1210 15:42:23.007094 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"376461c9-8e89-4c8c-bcef-6a873320a293","Type":"ContainerStarted","Data":"5ce5463013e10bef7f5f44a6a15ca53636b41f7d686d4ef09fc0aecee8f0dae1"} Dec 10 15:42:23 crc kubenswrapper[4755]: I1210 15:42:23.969152 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-prnhr" Dec 10 15:42:24 crc kubenswrapper[4755]: I1210 15:42:24.038202 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-prnhr" Dec 10 15:42:24 crc kubenswrapper[4755]: I1210 15:42:24.038420 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-prnhr" event={"ID":"5bf05873-62f3-4a0f-b58e-ec6346b5f057","Type":"ContainerDied","Data":"ee523c1a4d94bf209f45bd57aeaccdd1efe224f3f4353ff77cc3ff30068f80a8"} Dec 10 15:42:24 crc kubenswrapper[4755]: I1210 15:42:24.047906 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/alertmanager-metric-storage-0" Dec 10 15:42:24 crc kubenswrapper[4755]: I1210 15:42:24.048583 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/alertmanager-metric-storage-0" Dec 10 15:42:24 crc kubenswrapper[4755]: I1210 15:42:24.048656 4755 scope.go:117] "RemoveContainer" containerID="993dec3ddbb9b3b290a9f0c0de760c18ef25843f76c6b98ae78c790d10369069" Dec 10 15:42:24 crc kubenswrapper[4755]: I1210 15:42:24.074951 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/alertmanager-metric-storage-0" podStartSLOduration=23.826783442 podStartE2EDuration="57.074928561s" podCreationTimestamp="2025-12-10 15:41:27 +0000 UTC" firstStartedPulling="2025-12-10 15:41:42.978495482 +0000 UTC m=+1099.579379124" lastFinishedPulling="2025-12-10 15:42:16.226640621 +0000 UTC m=+1132.827524243" observedRunningTime="2025-12-10 15:42:24.068219057 +0000 UTC m=+1140.669102689" watchObservedRunningTime="2025-12-10 15:42:24.074928561 +0000 UTC m=+1140.675812203" Dec 10 15:42:24 crc kubenswrapper[4755]: I1210 15:42:24.099585 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bf05873-62f3-4a0f-b58e-ec6346b5f057-config\") pod \"5bf05873-62f3-4a0f-b58e-ec6346b5f057\" (UID: \"5bf05873-62f3-4a0f-b58e-ec6346b5f057\") " Dec 10 15:42:24 crc kubenswrapper[4755]: I1210 15:42:24.099724 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r94x9\" (UniqueName: \"kubernetes.io/projected/5bf05873-62f3-4a0f-b58e-ec6346b5f057-kube-api-access-r94x9\") pod \"5bf05873-62f3-4a0f-b58e-ec6346b5f057\" (UID: \"5bf05873-62f3-4a0f-b58e-ec6346b5f057\") " Dec 10 15:42:24 crc kubenswrapper[4755]: I1210 15:42:24.099752 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5bf05873-62f3-4a0f-b58e-ec6346b5f057-dns-svc\") pod \"5bf05873-62f3-4a0f-b58e-ec6346b5f057\" (UID: \"5bf05873-62f3-4a0f-b58e-ec6346b5f057\") " Dec 10 15:42:24 crc kubenswrapper[4755]: I1210 15:42:24.110397 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/5bf05873-62f3-4a0f-b58e-ec6346b5f057-kube-api-access-r94x9" (OuterVolumeSpecName: "kube-api-access-r94x9") pod "5bf05873-62f3-4a0f-b58e-ec6346b5f057" (UID: "5bf05873-62f3-4a0f-b58e-ec6346b5f057"). InnerVolumeSpecName "kube-api-access-r94x9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:42:24 crc kubenswrapper[4755]: I1210 15:42:24.114292 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Dec 10 15:42:24 crc kubenswrapper[4755]: I1210 15:42:24.114343 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Dec 10 15:42:24 crc kubenswrapper[4755]: I1210 15:42:24.115917 4755 scope.go:117] "RemoveContainer" containerID="4c5db85f5ea66d6d2a0147da188f893b9281638c0e2cd6e59060f23e27505081" Dec 10 15:42:24 crc kubenswrapper[4755]: I1210 15:42:24.162154 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bf05873-62f3-4a0f-b58e-ec6346b5f057-config" (OuterVolumeSpecName: "config") pod "5bf05873-62f3-4a0f-b58e-ec6346b5f057" (UID: "5bf05873-62f3-4a0f-b58e-ec6346b5f057"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:42:24 crc kubenswrapper[4755]: I1210 15:42:24.171138 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bf05873-62f3-4a0f-b58e-ec6346b5f057-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5bf05873-62f3-4a0f-b58e-ec6346b5f057" (UID: "5bf05873-62f3-4a0f-b58e-ec6346b5f057"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:42:24 crc kubenswrapper[4755]: I1210 15:42:24.201976 4755 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bf05873-62f3-4a0f-b58e-ec6346b5f057-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:42:24 crc kubenswrapper[4755]: I1210 15:42:24.202018 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r94x9\" (UniqueName: \"kubernetes.io/projected/5bf05873-62f3-4a0f-b58e-ec6346b5f057-kube-api-access-r94x9\") on node \"crc\" DevicePath \"\"" Dec 10 15:42:24 crc kubenswrapper[4755]: I1210 15:42:24.202034 4755 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5bf05873-62f3-4a0f-b58e-ec6346b5f057-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 10 15:42:24 crc kubenswrapper[4755]: I1210 15:42:24.211065 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Dec 10 15:42:24 crc kubenswrapper[4755]: I1210 15:42:24.377648 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-prnhr"] Dec 10 15:42:24 crc kubenswrapper[4755]: I1210 15:42:24.385429 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-prnhr"] Dec 10 15:42:25 crc kubenswrapper[4755]: I1210 15:42:25.144125 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Dec 10 15:42:25 crc kubenswrapper[4755]: I1210 15:42:25.475519 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-1696-account-create-update-v9pxt"] Dec 10 15:42:25 crc kubenswrapper[4755]: E1210 15:42:25.475879 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64c097c3-2f2a-4d0d-b7cb-8d1c88ffd3eb" containerName="dnsmasq-dns" Dec 10 15:42:25 crc 
kubenswrapper[4755]: I1210 15:42:25.475894 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="64c097c3-2f2a-4d0d-b7cb-8d1c88ffd3eb" containerName="dnsmasq-dns" Dec 10 15:42:25 crc kubenswrapper[4755]: E1210 15:42:25.475908 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="129874e9-8e1c-4660-a365-5496f3148968" containerName="init" Dec 10 15:42:25 crc kubenswrapper[4755]: I1210 15:42:25.475918 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="129874e9-8e1c-4660-a365-5496f3148968" containerName="init" Dec 10 15:42:25 crc kubenswrapper[4755]: E1210 15:42:25.475932 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bf05873-62f3-4a0f-b58e-ec6346b5f057" containerName="dnsmasq-dns" Dec 10 15:42:25 crc kubenswrapper[4755]: I1210 15:42:25.475940 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bf05873-62f3-4a0f-b58e-ec6346b5f057" containerName="dnsmasq-dns" Dec 10 15:42:25 crc kubenswrapper[4755]: E1210 15:42:25.475950 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bf05873-62f3-4a0f-b58e-ec6346b5f057" containerName="init" Dec 10 15:42:25 crc kubenswrapper[4755]: I1210 15:42:25.475955 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bf05873-62f3-4a0f-b58e-ec6346b5f057" containerName="init" Dec 10 15:42:25 crc kubenswrapper[4755]: E1210 15:42:25.475964 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64c097c3-2f2a-4d0d-b7cb-8d1c88ffd3eb" containerName="init" Dec 10 15:42:25 crc kubenswrapper[4755]: I1210 15:42:25.475970 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="64c097c3-2f2a-4d0d-b7cb-8d1c88ffd3eb" containerName="init" Dec 10 15:42:25 crc kubenswrapper[4755]: I1210 15:42:25.476385 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bf05873-62f3-4a0f-b58e-ec6346b5f057" containerName="dnsmasq-dns" Dec 10 15:42:25 crc kubenswrapper[4755]: I1210 15:42:25.476395 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="129874e9-8e1c-4660-a365-5496f3148968" containerName="init" Dec 10 15:42:25 crc kubenswrapper[4755]: I1210 15:42:25.476413 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="64c097c3-2f2a-4d0d-b7cb-8d1c88ffd3eb" containerName="dnsmasq-dns" Dec 10 15:42:25 crc kubenswrapper[4755]: I1210 15:42:25.477054 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-1696-account-create-update-v9pxt" Dec 10 15:42:25 crc kubenswrapper[4755]: I1210 15:42:25.479922 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Dec 10 15:42:25 crc kubenswrapper[4755]: I1210 15:42:25.487436 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-1696-account-create-update-v9pxt"] Dec 10 15:42:25 crc kubenswrapper[4755]: I1210 15:42:25.538359 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-v5jhd"] Dec 10 15:42:25 crc kubenswrapper[4755]: I1210 15:42:25.539949 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-v5jhd" Dec 10 15:42:25 crc kubenswrapper[4755]: I1210 15:42:25.555310 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-v5jhd"] Dec 10 15:42:25 crc kubenswrapper[4755]: I1210 15:42:25.633149 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpv6n\" (UniqueName: \"kubernetes.io/projected/857e88aa-b989-4d6e-acbf-0309f0297a25-kube-api-access-dpv6n\") pod \"placement-1696-account-create-update-v9pxt\" (UID: \"857e88aa-b989-4d6e-acbf-0309f0297a25\") " pod="openstack/placement-1696-account-create-update-v9pxt" Dec 10 15:42:25 crc kubenswrapper[4755]: I1210 15:42:25.633274 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f3fe5d6-3bfb-4f18-8910-f09974b19e37-operator-scripts\") pod \"placement-db-create-v5jhd\" (UID: \"6f3fe5d6-3bfb-4f18-8910-f09974b19e37\") " pod="openstack/placement-db-create-v5jhd" Dec 10 15:42:25 crc kubenswrapper[4755]: I1210 15:42:25.633357 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/857e88aa-b989-4d6e-acbf-0309f0297a25-operator-scripts\") pod \"placement-1696-account-create-update-v9pxt\" (UID: \"857e88aa-b989-4d6e-acbf-0309f0297a25\") " pod="openstack/placement-1696-account-create-update-v9pxt" Dec 10 15:42:25 crc kubenswrapper[4755]: I1210 15:42:25.633410 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms58k\" (UniqueName: \"kubernetes.io/projected/6f3fe5d6-3bfb-4f18-8910-f09974b19e37-kube-api-access-ms58k\") pod \"placement-db-create-v5jhd\" (UID: \"6f3fe5d6-3bfb-4f18-8910-f09974b19e37\") " pod="openstack/placement-db-create-v5jhd" Dec 10 15:42:25 crc kubenswrapper[4755]: I1210 15:42:25.735173 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ms58k\" (UniqueName: \"kubernetes.io/projected/6f3fe5d6-3bfb-4f18-8910-f09974b19e37-kube-api-access-ms58k\") pod \"placement-db-create-v5jhd\" (UID: \"6f3fe5d6-3bfb-4f18-8910-f09974b19e37\") " pod="openstack/placement-db-create-v5jhd" Dec 10 15:42:25 crc kubenswrapper[4755]: I1210 15:42:25.735274 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpv6n\" (UniqueName: \"kubernetes.io/projected/857e88aa-b989-4d6e-acbf-0309f0297a25-kube-api-access-dpv6n\") pod \"placement-1696-account-create-update-v9pxt\" (UID: \"857e88aa-b989-4d6e-acbf-0309f0297a25\") " pod="openstack/placement-1696-account-create-update-v9pxt" Dec 10 15:42:25 crc kubenswrapper[4755]: I1210 15:42:25.735313 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f3fe5d6-3bfb-4f18-8910-f09974b19e37-operator-scripts\") pod \"placement-db-create-v5jhd\" (UID: \"6f3fe5d6-3bfb-4f18-8910-f09974b19e37\") " pod="openstack/placement-db-create-v5jhd" Dec 10 15:42:25 crc kubenswrapper[4755]: I1210 15:42:25.735379 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/857e88aa-b989-4d6e-acbf-0309f0297a25-operator-scripts\") pod \"placement-1696-account-create-update-v9pxt\" (UID: \"857e88aa-b989-4d6e-acbf-0309f0297a25\") " 
pod="openstack/placement-1696-account-create-update-v9pxt" Dec 10 15:42:25 crc kubenswrapper[4755]: I1210 15:42:25.736071 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/857e88aa-b989-4d6e-acbf-0309f0297a25-operator-scripts\") pod \"placement-1696-account-create-update-v9pxt\" (UID: \"857e88aa-b989-4d6e-acbf-0309f0297a25\") " pod="openstack/placement-1696-account-create-update-v9pxt" Dec 10 15:42:25 crc kubenswrapper[4755]: I1210 15:42:25.736972 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f3fe5d6-3bfb-4f18-8910-f09974b19e37-operator-scripts\") pod \"placement-db-create-v5jhd\" (UID: \"6f3fe5d6-3bfb-4f18-8910-f09974b19e37\") " pod="openstack/placement-db-create-v5jhd" Dec 10 15:42:25 crc kubenswrapper[4755]: I1210 15:42:25.770280 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms58k\" (UniqueName: \"kubernetes.io/projected/6f3fe5d6-3bfb-4f18-8910-f09974b19e37-kube-api-access-ms58k\") pod \"placement-db-create-v5jhd\" (UID: \"6f3fe5d6-3bfb-4f18-8910-f09974b19e37\") " pod="openstack/placement-db-create-v5jhd" Dec 10 15:42:25 crc kubenswrapper[4755]: I1210 15:42:25.777992 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpv6n\" (UniqueName: \"kubernetes.io/projected/857e88aa-b989-4d6e-acbf-0309f0297a25-kube-api-access-dpv6n\") pod \"placement-1696-account-create-update-v9pxt\" (UID: \"857e88aa-b989-4d6e-acbf-0309f0297a25\") " pod="openstack/placement-1696-account-create-update-v9pxt" Dec 10 15:42:25 crc kubenswrapper[4755]: I1210 15:42:25.797654 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bf05873-62f3-4a0f-b58e-ec6346b5f057" path="/var/lib/kubelet/pods/5bf05873-62f3-4a0f-b58e-ec6346b5f057/volumes" Dec 10 15:42:25 crc kubenswrapper[4755]: I1210 15:42:25.798161 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-1696-account-create-update-v9pxt" Dec 10 15:42:25 crc kubenswrapper[4755]: I1210 15:42:25.876811 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-v5jhd" Dec 10 15:42:27 crc kubenswrapper[4755]: I1210 15:42:27.289708 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Dec 10 15:42:27 crc kubenswrapper[4755]: I1210 15:42:27.465735 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-hpccw"] Dec 10 15:42:27 crc kubenswrapper[4755]: I1210 15:42:27.467189 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-hpccw" Dec 10 15:42:27 crc kubenswrapper[4755]: I1210 15:42:27.502665 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-hpccw"] Dec 10 15:42:27 crc kubenswrapper[4755]: I1210 15:42:27.581331 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t49zj\" (UniqueName: \"kubernetes.io/projected/ef27f966-f4d9-4959-bb7e-4d8422fbb1dd-kube-api-access-t49zj\") pod \"dnsmasq-dns-b8fbc5445-hpccw\" (UID: \"ef27f966-f4d9-4959-bb7e-4d8422fbb1dd\") " pod="openstack/dnsmasq-dns-b8fbc5445-hpccw" Dec 10 15:42:27 crc kubenswrapper[4755]: I1210 15:42:27.581756 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef27f966-f4d9-4959-bb7e-4d8422fbb1dd-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-hpccw\" (UID: \"ef27f966-f4d9-4959-bb7e-4d8422fbb1dd\") " pod="openstack/dnsmasq-dns-b8fbc5445-hpccw" Dec 10 15:42:27 crc kubenswrapper[4755]: I1210 15:42:27.581918 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef27f966-f4d9-4959-bb7e-4d8422fbb1dd-config\") pod \"dnsmasq-dns-b8fbc5445-hpccw\" (UID: \"ef27f966-f4d9-4959-bb7e-4d8422fbb1dd\") " pod="openstack/dnsmasq-dns-b8fbc5445-hpccw" Dec 10 15:42:27 crc kubenswrapper[4755]: I1210 15:42:27.582007 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef27f966-f4d9-4959-bb7e-4d8422fbb1dd-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-hpccw\" (UID: \"ef27f966-f4d9-4959-bb7e-4d8422fbb1dd\") " pod="openstack/dnsmasq-dns-b8fbc5445-hpccw" Dec 10 15:42:27 crc kubenswrapper[4755]: I1210 15:42:27.582090 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef27f966-f4d9-4959-bb7e-4d8422fbb1dd-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-hpccw\" (UID: \"ef27f966-f4d9-4959-bb7e-4d8422fbb1dd\") " pod="openstack/dnsmasq-dns-b8fbc5445-hpccw" Dec 10 15:42:27 crc kubenswrapper[4755]: I1210 15:42:27.683446 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef27f966-f4d9-4959-bb7e-4d8422fbb1dd-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-hpccw\" (UID: \"ef27f966-f4d9-4959-bb7e-4d8422fbb1dd\") " pod="openstack/dnsmasq-dns-b8fbc5445-hpccw" Dec 10 15:42:27 crc kubenswrapper[4755]: I1210 15:42:27.683533 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef27f966-f4d9-4959-bb7e-4d8422fbb1dd-config\") pod \"dnsmasq-dns-b8fbc5445-hpccw\" (UID: \"ef27f966-f4d9-4959-bb7e-4d8422fbb1dd\") " pod="openstack/dnsmasq-dns-b8fbc5445-hpccw" Dec 10 15:42:27 crc kubenswrapper[4755]: I1210 15:42:27.683585 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef27f966-f4d9-4959-bb7e-4d8422fbb1dd-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-hpccw\" (UID: \"ef27f966-f4d9-4959-bb7e-4d8422fbb1dd\") " pod="openstack/dnsmasq-dns-b8fbc5445-hpccw" Dec 10 15:42:27 crc kubenswrapper[4755]: I1210 15:42:27.683643 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/ef27f966-f4d9-4959-bb7e-4d8422fbb1dd-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-hpccw\" (UID: \"ef27f966-f4d9-4959-bb7e-4d8422fbb1dd\") " pod="openstack/dnsmasq-dns-b8fbc5445-hpccw" Dec 10 15:42:27 crc kubenswrapper[4755]: I1210 15:42:27.683671 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t49zj\" (UniqueName: \"kubernetes.io/projected/ef27f966-f4d9-4959-bb7e-4d8422fbb1dd-kube-api-access-t49zj\") pod \"dnsmasq-dns-b8fbc5445-hpccw\" (UID: \"ef27f966-f4d9-4959-bb7e-4d8422fbb1dd\") " pod="openstack/dnsmasq-dns-b8fbc5445-hpccw" Dec 10 15:42:27 crc kubenswrapper[4755]: I1210 15:42:27.684317 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef27f966-f4d9-4959-bb7e-4d8422fbb1dd-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-hpccw\" (UID: \"ef27f966-f4d9-4959-bb7e-4d8422fbb1dd\") " pod="openstack/dnsmasq-dns-b8fbc5445-hpccw" Dec 10 15:42:27 crc kubenswrapper[4755]: I1210 15:42:27.684588 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef27f966-f4d9-4959-bb7e-4d8422fbb1dd-config\") pod \"dnsmasq-dns-b8fbc5445-hpccw\" (UID: \"ef27f966-f4d9-4959-bb7e-4d8422fbb1dd\") " pod="openstack/dnsmasq-dns-b8fbc5445-hpccw" Dec 10 15:42:27 crc kubenswrapper[4755]: I1210 15:42:27.684616 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef27f966-f4d9-4959-bb7e-4d8422fbb1dd-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-hpccw\" (UID: \"ef27f966-f4d9-4959-bb7e-4d8422fbb1dd\") " pod="openstack/dnsmasq-dns-b8fbc5445-hpccw" Dec 10 15:42:27 crc kubenswrapper[4755]: I1210 15:42:27.685088 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef27f966-f4d9-4959-bb7e-4d8422fbb1dd-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-hpccw\" (UID: \"ef27f966-f4d9-4959-bb7e-4d8422fbb1dd\") " pod="openstack/dnsmasq-dns-b8fbc5445-hpccw" Dec 10 15:42:27 crc kubenswrapper[4755]: I1210 15:42:27.700400 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t49zj\" (UniqueName: \"kubernetes.io/projected/ef27f966-f4d9-4959-bb7e-4d8422fbb1dd-kube-api-access-t49zj\") pod \"dnsmasq-dns-b8fbc5445-hpccw\" (UID: \"ef27f966-f4d9-4959-bb7e-4d8422fbb1dd\") " pod="openstack/dnsmasq-dns-b8fbc5445-hpccw" Dec 10 15:42:27 crc kubenswrapper[4755]: I1210 15:42:27.791636 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-hpccw" Dec 10 15:42:28 crc kubenswrapper[4755]: I1210 15:42:28.634255 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Dec 10 15:42:28 crc kubenswrapper[4755]: I1210 15:42:28.648038 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Dec 10 15:42:28 crc kubenswrapper[4755]: I1210 15:42:28.654981 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-5kbrw" Dec 10 15:42:28 crc kubenswrapper[4755]: I1210 15:42:28.655354 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Dec 10 15:42:28 crc kubenswrapper[4755]: I1210 15:42:28.655695 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Dec 10 15:42:28 crc kubenswrapper[4755]: I1210 15:42:28.656802 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Dec 10 15:42:28 crc kubenswrapper[4755]: I1210 15:42:28.662434 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Dec 10 15:42:28 crc kubenswrapper[4755]: W1210 15:42:28.736707 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6f3fe5d6_3bfb_4f18_8910_f09974b19e37.slice/crio-40e8c6c0c396cb37ae98e1db4dbfb4b127bc4ce4b6c0b50421e057e83cdbac2d WatchSource:0}: Error finding container 40e8c6c0c396cb37ae98e1db4dbfb4b127bc4ce4b6c0b50421e057e83cdbac2d: Status 404 returned error can't find the container with id 40e8c6c0c396cb37ae98e1db4dbfb4b127bc4ce4b6c0b50421e057e83cdbac2d Dec 10 15:42:28 crc kubenswrapper[4755]: I1210 15:42:28.740564 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-v5jhd"] Dec 10 15:42:28 crc kubenswrapper[4755]: I1210 15:42:28.815057 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9cc3d5cb-12c4-4388-acd8-bc4d642c3bbd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9cc3d5cb-12c4-4388-acd8-bc4d642c3bbd\") pod \"swift-storage-0\" (UID: \"72a1cce7-93cb-4fe1-9d12-3d4e19692457\") " pod="openstack/swift-storage-0" Dec 10 15:42:28 crc kubenswrapper[4755]: I1210 15:42:28.815327 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/72a1cce7-93cb-4fe1-9d12-3d4e19692457-cache\") pod \"swift-storage-0\" (UID: \"72a1cce7-93cb-4fe1-9d12-3d4e19692457\") " pod="openstack/swift-storage-0" Dec 10 15:42:28 crc kubenswrapper[4755]: I1210 15:42:28.815397 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5mlh\" (UniqueName: \"kubernetes.io/projected/72a1cce7-93cb-4fe1-9d12-3d4e19692457-kube-api-access-b5mlh\") pod \"swift-storage-0\" (UID: \"72a1cce7-93cb-4fe1-9d12-3d4e19692457\") " pod="openstack/swift-storage-0" Dec 10 15:42:28 crc kubenswrapper[4755]: I1210 15:42:28.815576 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/72a1cce7-93cb-4fe1-9d12-3d4e19692457-lock\") pod \"swift-storage-0\" (UID: \"72a1cce7-93cb-4fe1-9d12-3d4e19692457\") " pod="openstack/swift-storage-0" Dec 10 15:42:28 crc kubenswrapper[4755]: I1210 15:42:28.815746 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/72a1cce7-93cb-4fe1-9d12-3d4e19692457-etc-swift\") pod \"swift-storage-0\" (UID: \"72a1cce7-93cb-4fe1-9d12-3d4e19692457\") " pod="openstack/swift-storage-0" Dec 10 15:42:28 crc kubenswrapper[4755]: I1210 15:42:28.836624 
4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-hpccw"] Dec 10 15:42:28 crc kubenswrapper[4755]: W1210 15:42:28.842444 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef27f966_f4d9_4959_bb7e_4d8422fbb1dd.slice/crio-2035681ab4645f31ea731dd886a0fcaa5236cb6bfa6944dcac646dea4dfdd351 WatchSource:0}: Error finding container 2035681ab4645f31ea731dd886a0fcaa5236cb6bfa6944dcac646dea4dfdd351: Status 404 returned error can't find the container with id 2035681ab4645f31ea731dd886a0fcaa5236cb6bfa6944dcac646dea4dfdd351 Dec 10 15:42:28 crc kubenswrapper[4755]: I1210 15:42:28.916945 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5mlh\" (UniqueName: \"kubernetes.io/projected/72a1cce7-93cb-4fe1-9d12-3d4e19692457-kube-api-access-b5mlh\") pod \"swift-storage-0\" (UID: \"72a1cce7-93cb-4fe1-9d12-3d4e19692457\") " pod="openstack/swift-storage-0" Dec 10 15:42:28 crc kubenswrapper[4755]: I1210 15:42:28.917498 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/72a1cce7-93cb-4fe1-9d12-3d4e19692457-lock\") pod \"swift-storage-0\" (UID: \"72a1cce7-93cb-4fe1-9d12-3d4e19692457\") " pod="openstack/swift-storage-0" Dec 10 15:42:28 crc kubenswrapper[4755]: I1210 15:42:28.917528 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/72a1cce7-93cb-4fe1-9d12-3d4e19692457-etc-swift\") pod \"swift-storage-0\" (UID: \"72a1cce7-93cb-4fe1-9d12-3d4e19692457\") " pod="openstack/swift-storage-0" Dec 10 15:42:28 crc kubenswrapper[4755]: I1210 15:42:28.917585 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-9cc3d5cb-12c4-4388-acd8-bc4d642c3bbd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9cc3d5cb-12c4-4388-acd8-bc4d642c3bbd\") pod \"swift-storage-0\" (UID: \"72a1cce7-93cb-4fe1-9d12-3d4e19692457\") " pod="openstack/swift-storage-0" Dec 10 15:42:28 crc kubenswrapper[4755]: I1210 15:42:28.917639 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/72a1cce7-93cb-4fe1-9d12-3d4e19692457-cache\") pod \"swift-storage-0\" (UID: \"72a1cce7-93cb-4fe1-9d12-3d4e19692457\") " pod="openstack/swift-storage-0" Dec 10 15:42:28 crc kubenswrapper[4755]: E1210 15:42:28.917737 4755 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 10 15:42:28 crc kubenswrapper[4755]: E1210 15:42:28.917773 4755 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 10 15:42:28 crc kubenswrapper[4755]: E1210 15:42:28.917832 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/72a1cce7-93cb-4fe1-9d12-3d4e19692457-etc-swift podName:72a1cce7-93cb-4fe1-9d12-3d4e19692457 nodeName:}" failed. No retries permitted until 2025-12-10 15:42:29.417812607 +0000 UTC m=+1146.018696289 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/72a1cce7-93cb-4fe1-9d12-3d4e19692457-etc-swift") pod "swift-storage-0" (UID: "72a1cce7-93cb-4fe1-9d12-3d4e19692457") : configmap "swift-ring-files" not found Dec 10 15:42:28 crc kubenswrapper[4755]: I1210 15:42:28.918578 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/72a1cce7-93cb-4fe1-9d12-3d4e19692457-lock\") pod \"swift-storage-0\" (UID: \"72a1cce7-93cb-4fe1-9d12-3d4e19692457\") " pod="openstack/swift-storage-0" Dec 10 15:42:28 crc kubenswrapper[4755]: I1210 15:42:28.918592 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/72a1cce7-93cb-4fe1-9d12-3d4e19692457-cache\") pod \"swift-storage-0\" (UID: \"72a1cce7-93cb-4fe1-9d12-3d4e19692457\") " pod="openstack/swift-storage-0" Dec 10 15:42:28 crc kubenswrapper[4755]: I1210 15:42:28.924638 4755 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 10 15:42:28 crc kubenswrapper[4755]: I1210 15:42:28.924687 4755 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-9cc3d5cb-12c4-4388-acd8-bc4d642c3bbd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9cc3d5cb-12c4-4388-acd8-bc4d642c3bbd\") pod \"swift-storage-0\" (UID: \"72a1cce7-93cb-4fe1-9d12-3d4e19692457\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/9a8c0f28941fd9872ade021a32783bfa61ac2840ecd53d7b0ec2e925db32f809/globalmount\"" pod="openstack/swift-storage-0" Dec 10 15:42:28 crc kubenswrapper[4755]: I1210 15:42:28.942895 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5mlh\" (UniqueName: \"kubernetes.io/projected/72a1cce7-93cb-4fe1-9d12-3d4e19692457-kube-api-access-b5mlh\") pod \"swift-storage-0\" (UID: \"72a1cce7-93cb-4fe1-9d12-3d4e19692457\") " pod="openstack/swift-storage-0" Dec 10 15:42:28 crc kubenswrapper[4755]: W1210 15:42:28.961062 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod857e88aa_b989_4d6e_acbf_0309f0297a25.slice/crio-1629fa5ca064a1eff89491a4f94b00f3ec922240699798aab045effef470aee2 WatchSource:0}: Error finding container 1629fa5ca064a1eff89491a4f94b00f3ec922240699798aab045effef470aee2: Status 404 returned error can't find the container with id 1629fa5ca064a1eff89491a4f94b00f3ec922240699798aab045effef470aee2 Dec 10 15:42:28 crc kubenswrapper[4755]: I1210 15:42:28.963614 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-1696-account-create-update-v9pxt"] Dec 10 15:42:28 crc kubenswrapper[4755]: I1210 15:42:28.975988 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-9cc3d5cb-12c4-4388-acd8-bc4d642c3bbd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9cc3d5cb-12c4-4388-acd8-bc4d642c3bbd\") pod \"swift-storage-0\" (UID: \"72a1cce7-93cb-4fe1-9d12-3d4e19692457\") " pod="openstack/swift-storage-0" Dec 10 15:42:29 crc kubenswrapper[4755]: I1210 15:42:29.083317 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-v5jhd" event={"ID":"6f3fe5d6-3bfb-4f18-8910-f09974b19e37","Type":"ContainerStarted","Data":"c571038108eba45f39db641147369e9739c4d5ef301e1162b3ec403e5a73b7b0"} Dec 10 15:42:29 crc kubenswrapper[4755]: I1210 15:42:29.083385 
4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-v5jhd" event={"ID":"6f3fe5d6-3bfb-4f18-8910-f09974b19e37","Type":"ContainerStarted","Data":"40e8c6c0c396cb37ae98e1db4dbfb4b127bc4ce4b6c0b50421e057e83cdbac2d"} Dec 10 15:42:29 crc kubenswrapper[4755]: I1210 15:42:29.090041 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"251dc547-e1a7-418e-95fd-6b7e8e5c5d35","Type":"ContainerStarted","Data":"452d0ac5fe8231a7081c7ffc0e4c94bb5b29f399f5b3db2cca63f1368ea1f265"} Dec 10 15:42:29 crc kubenswrapper[4755]: I1210 15:42:29.091559 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-1696-account-create-update-v9pxt" event={"ID":"857e88aa-b989-4d6e-acbf-0309f0297a25","Type":"ContainerStarted","Data":"1629fa5ca064a1eff89491a4f94b00f3ec922240699798aab045effef470aee2"} Dec 10 15:42:29 crc kubenswrapper[4755]: I1210 15:42:29.097067 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"d586c26d-c444-4202-b286-522cfc372f16","Type":"ContainerStarted","Data":"500a3385e895fb759e18a465d54349199a44e474ab440a2ed01d7701e035d67c"} Dec 10 15:42:29 crc kubenswrapper[4755]: I1210 15:42:29.099685 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-hpccw" event={"ID":"ef27f966-f4d9-4959-bb7e-4d8422fbb1dd","Type":"ContainerStarted","Data":"1a341cef385c275218b0a1b44e553ef0b7278e1a25aa6038cd9423f35211e121"} Dec 10 15:42:29 crc kubenswrapper[4755]: I1210 15:42:29.099721 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-hpccw" event={"ID":"ef27f966-f4d9-4959-bb7e-4d8422fbb1dd","Type":"ContainerStarted","Data":"2035681ab4645f31ea731dd886a0fcaa5236cb6bfa6944dcac646dea4dfdd351"} Dec 10 15:42:29 crc kubenswrapper[4755]: I1210 15:42:29.124170 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-v5jhd" podStartSLOduration=4.124148432 podStartE2EDuration="4.124148432s" podCreationTimestamp="2025-12-10 15:42:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:42:29.104089462 +0000 UTC m=+1145.704973104" watchObservedRunningTime="2025-12-10 15:42:29.124148432 +0000 UTC m=+1145.725032064" Dec 10 15:42:29 crc kubenswrapper[4755]: I1210 15:42:29.141241 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=14.067353719 podStartE2EDuration="58.141217909s" podCreationTimestamp="2025-12-10 15:41:31 +0000 UTC" firstStartedPulling="2025-12-10 15:41:44.131406417 +0000 UTC m=+1100.732290049" lastFinishedPulling="2025-12-10 15:42:28.205270597 +0000 UTC m=+1144.806154239" observedRunningTime="2025-12-10 15:42:29.131109953 +0000 UTC m=+1145.731993605" watchObservedRunningTime="2025-12-10 15:42:29.141217909 +0000 UTC m=+1145.742101541" Dec 10 15:42:29 crc kubenswrapper[4755]: I1210 15:42:29.355008 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="e5a3871d-6b81-4b3d-9044-fcbcf437effb" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 10 15:42:29 crc kubenswrapper[4755]: I1210 15:42:29.426608 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/72a1cce7-93cb-4fe1-9d12-3d4e19692457-etc-swift\") pod \"swift-storage-0\" (UID: \"72a1cce7-93cb-4fe1-9d12-3d4e19692457\") " pod="openstack/swift-storage-0" Dec 10 15:42:29 crc kubenswrapper[4755]: E1210 15:42:29.426780 4755 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 10 15:42:29 crc kubenswrapper[4755]: E1210 15:42:29.426795 4755 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 10 15:42:29 crc kubenswrapper[4755]: E1210 15:42:29.426844 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/72a1cce7-93cb-4fe1-9d12-3d4e19692457-etc-swift podName:72a1cce7-93cb-4fe1-9d12-3d4e19692457 nodeName:}" failed. No retries permitted until 2025-12-10 15:42:30.426830818 +0000 UTC m=+1147.027714450 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/72a1cce7-93cb-4fe1-9d12-3d4e19692457-etc-swift") pod "swift-storage-0" (UID: "72a1cce7-93cb-4fe1-9d12-3d4e19692457") : configmap "swift-ring-files" not found Dec 10 15:42:30 crc kubenswrapper[4755]: I1210 15:42:30.116278 4755 generic.go:334] "Generic (PLEG): container finished" podID="ef27f966-f4d9-4959-bb7e-4d8422fbb1dd" containerID="1a341cef385c275218b0a1b44e553ef0b7278e1a25aa6038cd9423f35211e121" exitCode=0 Dec 10 15:42:30 crc kubenswrapper[4755]: I1210 15:42:30.116323 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-hpccw" event={"ID":"ef27f966-f4d9-4959-bb7e-4d8422fbb1dd","Type":"ContainerDied","Data":"1a341cef385c275218b0a1b44e553ef0b7278e1a25aa6038cd9423f35211e121"} Dec 10 15:42:30 crc kubenswrapper[4755]: I1210 15:42:30.116625 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-hpccw" event={"ID":"ef27f966-f4d9-4959-bb7e-4d8422fbb1dd","Type":"ContainerStarted","Data":"aa0a2cb370ce1a30e6c19ae5c02d01baa9f926faa9f215f37831d4152d9fc2e0"} Dec 10 15:42:30 crc kubenswrapper[4755]: I1210 15:42:30.116646 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-hpccw" Dec 10 15:42:30 crc kubenswrapper[4755]: I1210 15:42:30.118870 4755 generic.go:334] "Generic (PLEG): container finished" podID="6f3fe5d6-3bfb-4f18-8910-f09974b19e37" containerID="c571038108eba45f39db641147369e9739c4d5ef301e1162b3ec403e5a73b7b0" exitCode=0 Dec 10 15:42:30 crc kubenswrapper[4755]: I1210 15:42:30.118933 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-v5jhd" event={"ID":"6f3fe5d6-3bfb-4f18-8910-f09974b19e37","Type":"ContainerDied","Data":"c571038108eba45f39db641147369e9739c4d5ef301e1162b3ec403e5a73b7b0"} Dec 10 15:42:30 crc kubenswrapper[4755]: I1210 15:42:30.120413 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-1696-account-create-update-v9pxt" event={"ID":"857e88aa-b989-4d6e-acbf-0309f0297a25","Type":"ContainerStarted","Data":"74ba3f7506a0df21d04362279379ae7b657a6c2a8d7cea939589543a6ce287d0"} Dec 10 15:42:30 crc kubenswrapper[4755]: I1210 15:42:30.138674 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-hpccw" podStartSLOduration=3.138648477 podStartE2EDuration="3.138648477s" podCreationTimestamp="2025-12-10 15:42:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-12-10 15:42:30.135892822 +0000 UTC m=+1146.736776454" watchObservedRunningTime="2025-12-10 15:42:30.138648477 +0000 UTC m=+1146.739532109" Dec 10 15:42:30 crc kubenswrapper[4755]: I1210 15:42:30.173545 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-1696-account-create-update-v9pxt" podStartSLOduration=5.173459952 podStartE2EDuration="5.173459952s" podCreationTimestamp="2025-12-10 15:42:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:42:30.168153056 +0000 UTC m=+1146.769036688" watchObservedRunningTime="2025-12-10 15:42:30.173459952 +0000 UTC m=+1146.774343584" Dec 10 15:42:30 crc kubenswrapper[4755]: I1210 15:42:30.272731 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Dec 10 15:42:30 crc kubenswrapper[4755]: I1210 15:42:30.448606 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/72a1cce7-93cb-4fe1-9d12-3d4e19692457-etc-swift\") pod \"swift-storage-0\" (UID: \"72a1cce7-93cb-4fe1-9d12-3d4e19692457\") " pod="openstack/swift-storage-0" Dec 10 15:42:30 crc kubenswrapper[4755]: E1210 15:42:30.449545 4755 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 10 15:42:30 crc kubenswrapper[4755]: E1210 15:42:30.449579 4755 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 10 15:42:30 crc kubenswrapper[4755]: E1210 15:42:30.449617 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/72a1cce7-93cb-4fe1-9d12-3d4e19692457-etc-swift podName:72a1cce7-93cb-4fe1-9d12-3d4e19692457 nodeName:}" failed. No retries permitted until 2025-12-10 15:42:32.44960166 +0000 UTC m=+1149.050485292 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/72a1cce7-93cb-4fe1-9d12-3d4e19692457-etc-swift") pod "swift-storage-0" (UID: "72a1cce7-93cb-4fe1-9d12-3d4e19692457") : configmap "swift-ring-files" not found Dec 10 15:42:30 crc kubenswrapper[4755]: I1210 15:42:30.635336 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-jjdr5"] Dec 10 15:42:30 crc kubenswrapper[4755]: I1210 15:42:30.636917 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-jjdr5" Dec 10 15:42:30 crc kubenswrapper[4755]: I1210 15:42:30.645102 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-jjdr5"] Dec 10 15:42:30 crc kubenswrapper[4755]: I1210 15:42:30.729384 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-1b56-account-create-update-l28rl"] Dec 10 15:42:30 crc kubenswrapper[4755]: I1210 15:42:30.730860 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-1b56-account-create-update-l28rl" Dec 10 15:42:30 crc kubenswrapper[4755]: I1210 15:42:30.732631 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Dec 10 15:42:30 crc kubenswrapper[4755]: I1210 15:42:30.739318 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-1b56-account-create-update-l28rl"] Dec 10 15:42:30 crc kubenswrapper[4755]: I1210 15:42:30.754519 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0d25e1b-5c7f-468c-9723-4030952bc88c-operator-scripts\") pod \"glance-db-create-jjdr5\" (UID: \"c0d25e1b-5c7f-468c-9723-4030952bc88c\") " pod="openstack/glance-db-create-jjdr5" Dec 10 15:42:30 crc kubenswrapper[4755]: I1210 15:42:30.754595 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkrsq\" (UniqueName: \"kubernetes.io/projected/c0d25e1b-5c7f-468c-9723-4030952bc88c-kube-api-access-pkrsq\") pod \"glance-db-create-jjdr5\" (UID: \"c0d25e1b-5c7f-468c-9723-4030952bc88c\") " pod="openstack/glance-db-create-jjdr5" Dec 10 15:42:30 crc kubenswrapper[4755]: I1210 15:42:30.856565 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4dce77c7-9c60-4919-b20d-a954178e0b0c-operator-scripts\") pod \"glance-1b56-account-create-update-l28rl\" (UID: \"4dce77c7-9c60-4919-b20d-a954178e0b0c\") " pod="openstack/glance-1b56-account-create-update-l28rl" Dec 10 15:42:30 crc kubenswrapper[4755]: I1210 15:42:30.856640 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0d25e1b-5c7f-468c-9723-4030952bc88c-operator-scripts\") pod \"glance-db-create-jjdr5\" (UID: \"c0d25e1b-5c7f-468c-9723-4030952bc88c\") " pod="openstack/glance-db-create-jjdr5" Dec 10 15:42:30 crc kubenswrapper[4755]: I1210 15:42:30.856801 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkrsq\" (UniqueName: \"kubernetes.io/projected/c0d25e1b-5c7f-468c-9723-4030952bc88c-kube-api-access-pkrsq\") pod \"glance-db-create-jjdr5\" (UID: \"c0d25e1b-5c7f-468c-9723-4030952bc88c\") " pod="openstack/glance-db-create-jjdr5" Dec 10 15:42:30 crc kubenswrapper[4755]: I1210 15:42:30.856974 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmnps\" (UniqueName: \"kubernetes.io/projected/4dce77c7-9c60-4919-b20d-a954178e0b0c-kube-api-access-pmnps\") pod \"glance-1b56-account-create-update-l28rl\" (UID: \"4dce77c7-9c60-4919-b20d-a954178e0b0c\") " pod="openstack/glance-1b56-account-create-update-l28rl" Dec 10 15:42:30 crc kubenswrapper[4755]: I1210 15:42:30.857539 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0d25e1b-5c7f-468c-9723-4030952bc88c-operator-scripts\") pod \"glance-db-create-jjdr5\" (UID: \"c0d25e1b-5c7f-468c-9723-4030952bc88c\") " pod="openstack/glance-db-create-jjdr5" Dec 10 15:42:30 crc kubenswrapper[4755]: I1210 15:42:30.885791 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkrsq\" (UniqueName: \"kubernetes.io/projected/c0d25e1b-5c7f-468c-9723-4030952bc88c-kube-api-access-pkrsq\") pod \"glance-db-create-jjdr5\" (UID: 
\"c0d25e1b-5c7f-468c-9723-4030952bc88c\") " pod="openstack/glance-db-create-jjdr5" Dec 10 15:42:30 crc kubenswrapper[4755]: I1210 15:42:30.958961 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4dce77c7-9c60-4919-b20d-a954178e0b0c-operator-scripts\") pod \"glance-1b56-account-create-update-l28rl\" (UID: \"4dce77c7-9c60-4919-b20d-a954178e0b0c\") " pod="openstack/glance-1b56-account-create-update-l28rl" Dec 10 15:42:30 crc kubenswrapper[4755]: I1210 15:42:30.959071 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmnps\" (UniqueName: \"kubernetes.io/projected/4dce77c7-9c60-4919-b20d-a954178e0b0c-kube-api-access-pmnps\") pod \"glance-1b56-account-create-update-l28rl\" (UID: \"4dce77c7-9c60-4919-b20d-a954178e0b0c\") " pod="openstack/glance-1b56-account-create-update-l28rl" Dec 10 15:42:30 crc kubenswrapper[4755]: I1210 15:42:30.960196 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4dce77c7-9c60-4919-b20d-a954178e0b0c-operator-scripts\") pod \"glance-1b56-account-create-update-l28rl\" (UID: \"4dce77c7-9c60-4919-b20d-a954178e0b0c\") " pod="openstack/glance-1b56-account-create-update-l28rl" Dec 10 15:42:30 crc kubenswrapper[4755]: I1210 15:42:30.965685 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-jjdr5" Dec 10 15:42:30 crc kubenswrapper[4755]: I1210 15:42:30.976556 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmnps\" (UniqueName: \"kubernetes.io/projected/4dce77c7-9c60-4919-b20d-a954178e0b0c-kube-api-access-pmnps\") pod \"glance-1b56-account-create-update-l28rl\" (UID: \"4dce77c7-9c60-4919-b20d-a954178e0b0c\") " pod="openstack/glance-1b56-account-create-update-l28rl" Dec 10 15:42:31 crc kubenswrapper[4755]: I1210 15:42:31.051490 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-1b56-account-create-update-l28rl" Dec 10 15:42:31 crc kubenswrapper[4755]: I1210 15:42:31.133012 4755 generic.go:334] "Generic (PLEG): container finished" podID="857e88aa-b989-4d6e-acbf-0309f0297a25" containerID="74ba3f7506a0df21d04362279379ae7b657a6c2a8d7cea939589543a6ce287d0" exitCode=0 Dec 10 15:42:31 crc kubenswrapper[4755]: I1210 15:42:31.133270 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-1696-account-create-update-v9pxt" event={"ID":"857e88aa-b989-4d6e-acbf-0309f0297a25","Type":"ContainerDied","Data":"74ba3f7506a0df21d04362279379ae7b657a6c2a8d7cea939589543a6ce287d0"} Dec 10 15:42:31 crc kubenswrapper[4755]: W1210 15:42:31.505765 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc0d25e1b_5c7f_468c_9723_4030952bc88c.slice/crio-2fd32550304a64765654c689ebea4ab1420aecc18b0c13a7013bf465a96df75d WatchSource:0}: Error finding container 2fd32550304a64765654c689ebea4ab1420aecc18b0c13a7013bf465a96df75d: Status 404 returned error can't find the container with id 2fd32550304a64765654c689ebea4ab1420aecc18b0c13a7013bf465a96df75d Dec 10 15:42:31 crc kubenswrapper[4755]: I1210 15:42:31.533224 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-jjdr5"] Dec 10 15:42:31 crc kubenswrapper[4755]: I1210 15:42:31.614744 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-1b56-account-create-update-l28rl"] Dec 10 15:42:31 crc kubenswrapper[4755]: I1210 15:42:31.620355 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-v5jhd" Dec 10 15:42:31 crc kubenswrapper[4755]: W1210 15:42:31.622812 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4dce77c7_9c60_4919_b20d_a954178e0b0c.slice/crio-5eb84d1c99e7679d47cfdbb94b5d1db4d8ace0d42a6d838bb1f312432da7c596 WatchSource:0}: Error finding container 5eb84d1c99e7679d47cfdbb94b5d1db4d8ace0d42a6d838bb1f312432da7c596: Status 404 returned error can't find the container with id 5eb84d1c99e7679d47cfdbb94b5d1db4d8ace0d42a6d838bb1f312432da7c596 Dec 10 15:42:31 crc kubenswrapper[4755]: I1210 15:42:31.772239 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f3fe5d6-3bfb-4f18-8910-f09974b19e37-operator-scripts\") pod \"6f3fe5d6-3bfb-4f18-8910-f09974b19e37\" (UID: \"6f3fe5d6-3bfb-4f18-8910-f09974b19e37\") " Dec 10 15:42:31 crc kubenswrapper[4755]: I1210 15:42:31.772688 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ms58k\" (UniqueName: \"kubernetes.io/projected/6f3fe5d6-3bfb-4f18-8910-f09974b19e37-kube-api-access-ms58k\") pod \"6f3fe5d6-3bfb-4f18-8910-f09974b19e37\" (UID: \"6f3fe5d6-3bfb-4f18-8910-f09974b19e37\") " Dec 10 15:42:31 crc kubenswrapper[4755]: I1210 15:42:31.773015 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f3fe5d6-3bfb-4f18-8910-f09974b19e37-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6f3fe5d6-3bfb-4f18-8910-f09974b19e37" (UID: "6f3fe5d6-3bfb-4f18-8910-f09974b19e37"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:42:31 crc kubenswrapper[4755]: I1210 15:42:31.773296 4755 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f3fe5d6-3bfb-4f18-8910-f09974b19e37-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 15:42:31 crc kubenswrapper[4755]: I1210 15:42:31.777300 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f3fe5d6-3bfb-4f18-8910-f09974b19e37-kube-api-access-ms58k" (OuterVolumeSpecName: "kube-api-access-ms58k") pod "6f3fe5d6-3bfb-4f18-8910-f09974b19e37" (UID: "6f3fe5d6-3bfb-4f18-8910-f09974b19e37"). InnerVolumeSpecName "kube-api-access-ms58k". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:42:31 crc kubenswrapper[4755]: I1210 15:42:31.877035 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ms58k\" (UniqueName: \"kubernetes.io/projected/6f3fe5d6-3bfb-4f18-8910-f09974b19e37-kube-api-access-ms58k\") on node \"crc\" DevicePath \"\"" Dec 10 15:42:32 crc kubenswrapper[4755]: I1210 15:42:32.140932 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-jjdr5" event={"ID":"c0d25e1b-5c7f-468c-9723-4030952bc88c","Type":"ContainerStarted","Data":"2fd32550304a64765654c689ebea4ab1420aecc18b0c13a7013bf465a96df75d"} Dec 10 15:42:32 crc kubenswrapper[4755]: I1210 15:42:32.142326 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1b56-account-create-update-l28rl" event={"ID":"4dce77c7-9c60-4919-b20d-a954178e0b0c","Type":"ContainerStarted","Data":"5eb84d1c99e7679d47cfdbb94b5d1db4d8ace0d42a6d838bb1f312432da7c596"} Dec 10 15:42:32 crc kubenswrapper[4755]: I1210 15:42:32.143638 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-v5jhd" event={"ID":"6f3fe5d6-3bfb-4f18-8910-f09974b19e37","Type":"ContainerDied","Data":"40e8c6c0c396cb37ae98e1db4dbfb4b127bc4ce4b6c0b50421e057e83cdbac2d"} Dec 10 15:42:32 crc kubenswrapper[4755]: I1210 15:42:32.143663 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-v5jhd" Dec 10 15:42:32 crc kubenswrapper[4755]: I1210 15:42:32.143669 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40e8c6c0c396cb37ae98e1db4dbfb4b127bc4ce4b6c0b50421e057e83cdbac2d" Dec 10 15:42:32 crc kubenswrapper[4755]: I1210 15:42:32.489046 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/72a1cce7-93cb-4fe1-9d12-3d4e19692457-etc-swift\") pod \"swift-storage-0\" (UID: \"72a1cce7-93cb-4fe1-9d12-3d4e19692457\") " pod="openstack/swift-storage-0" Dec 10 15:42:32 crc kubenswrapper[4755]: E1210 15:42:32.489285 4755 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 10 15:42:32 crc kubenswrapper[4755]: E1210 15:42:32.489627 4755 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 10 15:42:32 crc kubenswrapper[4755]: E1210 15:42:32.489687 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/72a1cce7-93cb-4fe1-9d12-3d4e19692457-etc-swift podName:72a1cce7-93cb-4fe1-9d12-3d4e19692457 nodeName:}" failed. No retries permitted until 2025-12-10 15:42:36.489670676 +0000 UTC m=+1153.090554308 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/72a1cce7-93cb-4fe1-9d12-3d4e19692457-etc-swift") pod "swift-storage-0" (UID: "72a1cce7-93cb-4fe1-9d12-3d4e19692457") : configmap "swift-ring-files" not found Dec 10 15:42:32 crc kubenswrapper[4755]: I1210 15:42:32.511291 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-qg2cr"] Dec 10 15:42:32 crc kubenswrapper[4755]: E1210 15:42:32.511721 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f3fe5d6-3bfb-4f18-8910-f09974b19e37" containerName="mariadb-database-create" Dec 10 15:42:32 crc kubenswrapper[4755]: I1210 15:42:32.511744 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f3fe5d6-3bfb-4f18-8910-f09974b19e37" containerName="mariadb-database-create" Dec 10 15:42:32 crc kubenswrapper[4755]: I1210 15:42:32.511927 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f3fe5d6-3bfb-4f18-8910-f09974b19e37" containerName="mariadb-database-create" Dec 10 15:42:32 crc kubenswrapper[4755]: I1210 15:42:32.512602 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-qg2cr" Dec 10 15:42:32 crc kubenswrapper[4755]: I1210 15:42:32.514395 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Dec 10 15:42:32 crc kubenswrapper[4755]: I1210 15:42:32.515506 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Dec 10 15:42:32 crc kubenswrapper[4755]: I1210 15:42:32.516303 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Dec 10 15:42:32 crc kubenswrapper[4755]: I1210 15:42:32.527693 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-qg2cr"] Dec 10 15:42:32 crc kubenswrapper[4755]: I1210 15:42:32.591879 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aff2e950-1295-4b9e-996a-f9a6c4a1dedd-scripts\") pod \"swift-ring-rebalance-qg2cr\" (UID: \"aff2e950-1295-4b9e-996a-f9a6c4a1dedd\") " pod="openstack/swift-ring-rebalance-qg2cr" Dec 10 15:42:32 crc kubenswrapper[4755]: I1210 15:42:32.592140 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/aff2e950-1295-4b9e-996a-f9a6c4a1dedd-ring-data-devices\") pod \"swift-ring-rebalance-qg2cr\" (UID: \"aff2e950-1295-4b9e-996a-f9a6c4a1dedd\") " pod="openstack/swift-ring-rebalance-qg2cr" Dec 10 15:42:32 crc kubenswrapper[4755]: I1210 15:42:32.592222 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxdtc\" (UniqueName: \"kubernetes.io/projected/aff2e950-1295-4b9e-996a-f9a6c4a1dedd-kube-api-access-rxdtc\") pod \"swift-ring-rebalance-qg2cr\" (UID: \"aff2e950-1295-4b9e-996a-f9a6c4a1dedd\") " pod="openstack/swift-ring-rebalance-qg2cr" Dec 10 15:42:32 crc kubenswrapper[4755]: I1210 15:42:32.592315 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/aff2e950-1295-4b9e-996a-f9a6c4a1dedd-dispersionconf\") pod \"swift-ring-rebalance-qg2cr\" (UID: \"aff2e950-1295-4b9e-996a-f9a6c4a1dedd\") " pod="openstack/swift-ring-rebalance-qg2cr" Dec 10 15:42:32 crc kubenswrapper[4755]: I1210 
15:42:32.592433 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/aff2e950-1295-4b9e-996a-f9a6c4a1dedd-etc-swift\") pod \"swift-ring-rebalance-qg2cr\" (UID: \"aff2e950-1295-4b9e-996a-f9a6c4a1dedd\") " pod="openstack/swift-ring-rebalance-qg2cr" Dec 10 15:42:32 crc kubenswrapper[4755]: I1210 15:42:32.592587 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aff2e950-1295-4b9e-996a-f9a6c4a1dedd-combined-ca-bundle\") pod \"swift-ring-rebalance-qg2cr\" (UID: \"aff2e950-1295-4b9e-996a-f9a6c4a1dedd\") " pod="openstack/swift-ring-rebalance-qg2cr" Dec 10 15:42:32 crc kubenswrapper[4755]: I1210 15:42:32.592760 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/aff2e950-1295-4b9e-996a-f9a6c4a1dedd-swiftconf\") pod \"swift-ring-rebalance-qg2cr\" (UID: \"aff2e950-1295-4b9e-996a-f9a6c4a1dedd\") " pod="openstack/swift-ring-rebalance-qg2cr" Dec 10 15:42:32 crc kubenswrapper[4755]: I1210 15:42:32.692736 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-1696-account-create-update-v9pxt" Dec 10 15:42:32 crc kubenswrapper[4755]: I1210 15:42:32.694220 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aff2e950-1295-4b9e-996a-f9a6c4a1dedd-combined-ca-bundle\") pod \"swift-ring-rebalance-qg2cr\" (UID: \"aff2e950-1295-4b9e-996a-f9a6c4a1dedd\") " pod="openstack/swift-ring-rebalance-qg2cr" Dec 10 15:42:32 crc kubenswrapper[4755]: I1210 15:42:32.694503 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/aff2e950-1295-4b9e-996a-f9a6c4a1dedd-swiftconf\") pod \"swift-ring-rebalance-qg2cr\" (UID: \"aff2e950-1295-4b9e-996a-f9a6c4a1dedd\") " pod="openstack/swift-ring-rebalance-qg2cr" Dec 10 15:42:32 crc kubenswrapper[4755]: I1210 15:42:32.694628 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aff2e950-1295-4b9e-996a-f9a6c4a1dedd-scripts\") pod \"swift-ring-rebalance-qg2cr\" (UID: \"aff2e950-1295-4b9e-996a-f9a6c4a1dedd\") " pod="openstack/swift-ring-rebalance-qg2cr" Dec 10 15:42:32 crc kubenswrapper[4755]: I1210 15:42:32.694726 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/aff2e950-1295-4b9e-996a-f9a6c4a1dedd-ring-data-devices\") pod \"swift-ring-rebalance-qg2cr\" (UID: \"aff2e950-1295-4b9e-996a-f9a6c4a1dedd\") " pod="openstack/swift-ring-rebalance-qg2cr" Dec 10 15:42:32 crc kubenswrapper[4755]: I1210 15:42:32.694876 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxdtc\" (UniqueName: \"kubernetes.io/projected/aff2e950-1295-4b9e-996a-f9a6c4a1dedd-kube-api-access-rxdtc\") pod \"swift-ring-rebalance-qg2cr\" (UID: \"aff2e950-1295-4b9e-996a-f9a6c4a1dedd\") " pod="openstack/swift-ring-rebalance-qg2cr" Dec 10 15:42:32 crc kubenswrapper[4755]: I1210 15:42:32.695335 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/aff2e950-1295-4b9e-996a-f9a6c4a1dedd-dispersionconf\") pod 
\"swift-ring-rebalance-qg2cr\" (UID: \"aff2e950-1295-4b9e-996a-f9a6c4a1dedd\") " pod="openstack/swift-ring-rebalance-qg2cr" Dec 10 15:42:32 crc kubenswrapper[4755]: I1210 15:42:32.695899 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/aff2e950-1295-4b9e-996a-f9a6c4a1dedd-etc-swift\") pod \"swift-ring-rebalance-qg2cr\" (UID: \"aff2e950-1295-4b9e-996a-f9a6c4a1dedd\") " pod="openstack/swift-ring-rebalance-qg2cr" Dec 10 15:42:32 crc kubenswrapper[4755]: I1210 15:42:32.695761 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/aff2e950-1295-4b9e-996a-f9a6c4a1dedd-ring-data-devices\") pod \"swift-ring-rebalance-qg2cr\" (UID: \"aff2e950-1295-4b9e-996a-f9a6c4a1dedd\") " pod="openstack/swift-ring-rebalance-qg2cr" Dec 10 15:42:32 crc kubenswrapper[4755]: I1210 15:42:32.696274 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/aff2e950-1295-4b9e-996a-f9a6c4a1dedd-etc-swift\") pod \"swift-ring-rebalance-qg2cr\" (UID: \"aff2e950-1295-4b9e-996a-f9a6c4a1dedd\") " pod="openstack/swift-ring-rebalance-qg2cr" Dec 10 15:42:32 crc kubenswrapper[4755]: I1210 15:42:32.696302 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aff2e950-1295-4b9e-996a-f9a6c4a1dedd-scripts\") pod \"swift-ring-rebalance-qg2cr\" (UID: \"aff2e950-1295-4b9e-996a-f9a6c4a1dedd\") " pod="openstack/swift-ring-rebalance-qg2cr" Dec 10 15:42:32 crc kubenswrapper[4755]: I1210 15:42:32.702240 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/aff2e950-1295-4b9e-996a-f9a6c4a1dedd-swiftconf\") pod \"swift-ring-rebalance-qg2cr\" (UID: \"aff2e950-1295-4b9e-996a-f9a6c4a1dedd\") " pod="openstack/swift-ring-rebalance-qg2cr" Dec 10 15:42:32 crc kubenswrapper[4755]: I1210 15:42:32.702888 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aff2e950-1295-4b9e-996a-f9a6c4a1dedd-combined-ca-bundle\") pod \"swift-ring-rebalance-qg2cr\" (UID: \"aff2e950-1295-4b9e-996a-f9a6c4a1dedd\") " pod="openstack/swift-ring-rebalance-qg2cr" Dec 10 15:42:32 crc kubenswrapper[4755]: I1210 15:42:32.703206 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/aff2e950-1295-4b9e-996a-f9a6c4a1dedd-dispersionconf\") pod \"swift-ring-rebalance-qg2cr\" (UID: \"aff2e950-1295-4b9e-996a-f9a6c4a1dedd\") " pod="openstack/swift-ring-rebalance-qg2cr" Dec 10 15:42:32 crc kubenswrapper[4755]: I1210 15:42:32.722153 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxdtc\" (UniqueName: \"kubernetes.io/projected/aff2e950-1295-4b9e-996a-f9a6c4a1dedd-kube-api-access-rxdtc\") pod \"swift-ring-rebalance-qg2cr\" (UID: \"aff2e950-1295-4b9e-996a-f9a6c4a1dedd\") " pod="openstack/swift-ring-rebalance-qg2cr" Dec 10 15:42:32 crc kubenswrapper[4755]: I1210 15:42:32.797553 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dpv6n\" (UniqueName: \"kubernetes.io/projected/857e88aa-b989-4d6e-acbf-0309f0297a25-kube-api-access-dpv6n\") pod \"857e88aa-b989-4d6e-acbf-0309f0297a25\" (UID: \"857e88aa-b989-4d6e-acbf-0309f0297a25\") " Dec 10 15:42:32 crc kubenswrapper[4755]: I1210 15:42:32.797602 
4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/857e88aa-b989-4d6e-acbf-0309f0297a25-operator-scripts\") pod \"857e88aa-b989-4d6e-acbf-0309f0297a25\" (UID: \"857e88aa-b989-4d6e-acbf-0309f0297a25\") " Dec 10 15:42:32 crc kubenswrapper[4755]: I1210 15:42:32.798592 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/857e88aa-b989-4d6e-acbf-0309f0297a25-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "857e88aa-b989-4d6e-acbf-0309f0297a25" (UID: "857e88aa-b989-4d6e-acbf-0309f0297a25"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:42:32 crc kubenswrapper[4755]: I1210 15:42:32.802072 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/857e88aa-b989-4d6e-acbf-0309f0297a25-kube-api-access-dpv6n" (OuterVolumeSpecName: "kube-api-access-dpv6n") pod "857e88aa-b989-4d6e-acbf-0309f0297a25" (UID: "857e88aa-b989-4d6e-acbf-0309f0297a25"). InnerVolumeSpecName "kube-api-access-dpv6n". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:42:32 crc kubenswrapper[4755]: I1210 15:42:32.812332 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-qg2cr" Dec 10 15:42:32 crc kubenswrapper[4755]: I1210 15:42:32.900109 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dpv6n\" (UniqueName: \"kubernetes.io/projected/857e88aa-b989-4d6e-acbf-0309f0297a25-kube-api-access-dpv6n\") on node \"crc\" DevicePath \"\"" Dec 10 15:42:32 crc kubenswrapper[4755]: I1210 15:42:32.901916 4755 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/857e88aa-b989-4d6e-acbf-0309f0297a25-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 15:42:33 crc kubenswrapper[4755]: I1210 15:42:33.157104 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-1696-account-create-update-v9pxt" event={"ID":"857e88aa-b989-4d6e-acbf-0309f0297a25","Type":"ContainerDied","Data":"1629fa5ca064a1eff89491a4f94b00f3ec922240699798aab045effef470aee2"} Dec 10 15:42:33 crc kubenswrapper[4755]: I1210 15:42:33.157421 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1629fa5ca064a1eff89491a4f94b00f3ec922240699798aab045effef470aee2" Dec 10 15:42:33 crc kubenswrapper[4755]: I1210 15:42:33.157511 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-1696-account-create-update-v9pxt" Dec 10 15:42:33 crc kubenswrapper[4755]: I1210 15:42:33.164266 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"251dc547-e1a7-418e-95fd-6b7e8e5c5d35","Type":"ContainerStarted","Data":"425c4aaa12065585d9b150bb9bcf966d2169d96d7b0ba5754b7474e16b675184"} Dec 10 15:42:33 crc kubenswrapper[4755]: I1210 15:42:33.170758 4755 generic.go:334] "Generic (PLEG): container finished" podID="c0d25e1b-5c7f-468c-9723-4030952bc88c" containerID="6a90258492774ed628d3c0326176405b1e73b5605a84995a033402c90ac3f3f4" exitCode=0 Dec 10 15:42:33 crc kubenswrapper[4755]: I1210 15:42:33.170816 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-jjdr5" event={"ID":"c0d25e1b-5c7f-468c-9723-4030952bc88c","Type":"ContainerDied","Data":"6a90258492774ed628d3c0326176405b1e73b5605a84995a033402c90ac3f3f4"} Dec 10 15:42:33 crc kubenswrapper[4755]: I1210 15:42:33.176164 4755 generic.go:334] "Generic (PLEG): container finished" podID="4dce77c7-9c60-4919-b20d-a954178e0b0c" containerID="2f61206613b11c920d9e7fafb07f8c923553a7345715b895d4afc705f9949642" exitCode=0 Dec 10 15:42:33 crc kubenswrapper[4755]: I1210 15:42:33.176205 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1b56-account-create-update-l28rl" event={"ID":"4dce77c7-9c60-4919-b20d-a954178e0b0c","Type":"ContainerDied","Data":"2f61206613b11c920d9e7fafb07f8c923553a7345715b895d4afc705f9949642"} Dec 10 15:42:33 crc kubenswrapper[4755]: I1210 15:42:33.272744 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Dec 10 15:42:33 crc kubenswrapper[4755]: I1210 15:42:33.316280 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Dec 10 15:42:33 crc kubenswrapper[4755]: I1210 15:42:33.474310 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-qg2cr"] Dec 10 15:42:34 crc kubenswrapper[4755]: I1210 15:42:34.186249 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-qg2cr" event={"ID":"aff2e950-1295-4b9e-996a-f9a6c4a1dedd","Type":"ContainerStarted","Data":"59b021ce6148d021c5c4c738623bed47b88bbd5936870bd65be1e6a983abb9b5"} Dec 10 15:42:34 crc kubenswrapper[4755]: I1210 15:42:34.246987 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Dec 10 15:42:34 crc kubenswrapper[4755]: I1210 15:42:34.532552 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Dec 10 15:42:34 crc kubenswrapper[4755]: E1210 15:42:34.532913 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="857e88aa-b989-4d6e-acbf-0309f0297a25" containerName="mariadb-account-create-update" Dec 10 15:42:34 crc kubenswrapper[4755]: I1210 15:42:34.532925 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="857e88aa-b989-4d6e-acbf-0309f0297a25" containerName="mariadb-account-create-update" Dec 10 15:42:34 crc kubenswrapper[4755]: I1210 15:42:34.533108 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="857e88aa-b989-4d6e-acbf-0309f0297a25" containerName="mariadb-account-create-update" Dec 10 15:42:34 crc kubenswrapper[4755]: I1210 15:42:34.534065 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Dec 10 15:42:34 crc kubenswrapper[4755]: I1210 15:42:34.545399 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Dec 10 15:42:34 crc kubenswrapper[4755]: I1210 15:42:34.545444 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Dec 10 15:42:34 crc kubenswrapper[4755]: I1210 15:42:34.545544 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Dec 10 15:42:34 crc kubenswrapper[4755]: I1210 15:42:34.545682 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-cr4p6" Dec 10 15:42:34 crc kubenswrapper[4755]: I1210 15:42:34.582288 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Dec 10 15:42:34 crc kubenswrapper[4755]: I1210 15:42:34.663874 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3423b67e-8bda-4237-a94e-82cf18faf1c2-scripts\") pod \"ovn-northd-0\" (UID: \"3423b67e-8bda-4237-a94e-82cf18faf1c2\") " pod="openstack/ovn-northd-0" Dec 10 15:42:34 crc kubenswrapper[4755]: I1210 15:42:34.664151 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3423b67e-8bda-4237-a94e-82cf18faf1c2-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"3423b67e-8bda-4237-a94e-82cf18faf1c2\") " pod="openstack/ovn-northd-0" Dec 10 15:42:34 crc kubenswrapper[4755]: I1210 15:42:34.664194 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njqs4\" (UniqueName: \"kubernetes.io/projected/3423b67e-8bda-4237-a94e-82cf18faf1c2-kube-api-access-njqs4\") pod \"ovn-northd-0\" (UID: \"3423b67e-8bda-4237-a94e-82cf18faf1c2\") " pod="openstack/ovn-northd-0" Dec 10 15:42:34 crc kubenswrapper[4755]: I1210 15:42:34.664240 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3423b67e-8bda-4237-a94e-82cf18faf1c2-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"3423b67e-8bda-4237-a94e-82cf18faf1c2\") " pod="openstack/ovn-northd-0" Dec 10 15:42:34 crc kubenswrapper[4755]: I1210 15:42:34.664271 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3423b67e-8bda-4237-a94e-82cf18faf1c2-config\") pod \"ovn-northd-0\" (UID: \"3423b67e-8bda-4237-a94e-82cf18faf1c2\") " pod="openstack/ovn-northd-0" Dec 10 15:42:34 crc kubenswrapper[4755]: I1210 15:42:34.664293 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3423b67e-8bda-4237-a94e-82cf18faf1c2-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"3423b67e-8bda-4237-a94e-82cf18faf1c2\") " pod="openstack/ovn-northd-0" Dec 10 15:42:34 crc kubenswrapper[4755]: I1210 15:42:34.664339 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/3423b67e-8bda-4237-a94e-82cf18faf1c2-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"3423b67e-8bda-4237-a94e-82cf18faf1c2\") " pod="openstack/ovn-northd-0" Dec 10 15:42:34 crc kubenswrapper[4755]: 
I1210 15:42:34.763103 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-jjdr5" Dec 10 15:42:34 crc kubenswrapper[4755]: I1210 15:42:34.765648 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3423b67e-8bda-4237-a94e-82cf18faf1c2-config\") pod \"ovn-northd-0\" (UID: \"3423b67e-8bda-4237-a94e-82cf18faf1c2\") " pod="openstack/ovn-northd-0" Dec 10 15:42:34 crc kubenswrapper[4755]: I1210 15:42:34.765716 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3423b67e-8bda-4237-a94e-82cf18faf1c2-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"3423b67e-8bda-4237-a94e-82cf18faf1c2\") " pod="openstack/ovn-northd-0" Dec 10 15:42:34 crc kubenswrapper[4755]: I1210 15:42:34.765791 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/3423b67e-8bda-4237-a94e-82cf18faf1c2-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"3423b67e-8bda-4237-a94e-82cf18faf1c2\") " pod="openstack/ovn-northd-0" Dec 10 15:42:34 crc kubenswrapper[4755]: I1210 15:42:34.765864 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3423b67e-8bda-4237-a94e-82cf18faf1c2-scripts\") pod \"ovn-northd-0\" (UID: \"3423b67e-8bda-4237-a94e-82cf18faf1c2\") " pod="openstack/ovn-northd-0" Dec 10 15:42:34 crc kubenswrapper[4755]: I1210 15:42:34.765885 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3423b67e-8bda-4237-a94e-82cf18faf1c2-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"3423b67e-8bda-4237-a94e-82cf18faf1c2\") " pod="openstack/ovn-northd-0" Dec 10 15:42:34 crc kubenswrapper[4755]: I1210 15:42:34.765923 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njqs4\" (UniqueName: \"kubernetes.io/projected/3423b67e-8bda-4237-a94e-82cf18faf1c2-kube-api-access-njqs4\") pod \"ovn-northd-0\" (UID: \"3423b67e-8bda-4237-a94e-82cf18faf1c2\") " pod="openstack/ovn-northd-0" Dec 10 15:42:34 crc kubenswrapper[4755]: I1210 15:42:34.765967 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3423b67e-8bda-4237-a94e-82cf18faf1c2-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"3423b67e-8bda-4237-a94e-82cf18faf1c2\") " pod="openstack/ovn-northd-0" Dec 10 15:42:34 crc kubenswrapper[4755]: I1210 15:42:34.766696 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3423b67e-8bda-4237-a94e-82cf18faf1c2-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"3423b67e-8bda-4237-a94e-82cf18faf1c2\") " pod="openstack/ovn-northd-0" Dec 10 15:42:34 crc kubenswrapper[4755]: I1210 15:42:34.767807 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3423b67e-8bda-4237-a94e-82cf18faf1c2-config\") pod \"ovn-northd-0\" (UID: \"3423b67e-8bda-4237-a94e-82cf18faf1c2\") " pod="openstack/ovn-northd-0" Dec 10 15:42:34 crc kubenswrapper[4755]: I1210 15:42:34.768143 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/3423b67e-8bda-4237-a94e-82cf18faf1c2-scripts\") pod \"ovn-northd-0\" (UID: \"3423b67e-8bda-4237-a94e-82cf18faf1c2\") " pod="openstack/ovn-northd-0" Dec 10 15:42:34 crc kubenswrapper[4755]: I1210 15:42:34.788568 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/3423b67e-8bda-4237-a94e-82cf18faf1c2-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"3423b67e-8bda-4237-a94e-82cf18faf1c2\") " pod="openstack/ovn-northd-0" Dec 10 15:42:34 crc kubenswrapper[4755]: I1210 15:42:34.798019 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njqs4\" (UniqueName: \"kubernetes.io/projected/3423b67e-8bda-4237-a94e-82cf18faf1c2-kube-api-access-njqs4\") pod \"ovn-northd-0\" (UID: \"3423b67e-8bda-4237-a94e-82cf18faf1c2\") " pod="openstack/ovn-northd-0" Dec 10 15:42:34 crc kubenswrapper[4755]: I1210 15:42:34.803410 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3423b67e-8bda-4237-a94e-82cf18faf1c2-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"3423b67e-8bda-4237-a94e-82cf18faf1c2\") " pod="openstack/ovn-northd-0" Dec 10 15:42:34 crc kubenswrapper[4755]: I1210 15:42:34.807037 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3423b67e-8bda-4237-a94e-82cf18faf1c2-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"3423b67e-8bda-4237-a94e-82cf18faf1c2\") " pod="openstack/ovn-northd-0" Dec 10 15:42:34 crc kubenswrapper[4755]: I1210 15:42:34.866814 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0d25e1b-5c7f-468c-9723-4030952bc88c-operator-scripts\") pod \"c0d25e1b-5c7f-468c-9723-4030952bc88c\" (UID: \"c0d25e1b-5c7f-468c-9723-4030952bc88c\") " Dec 10 15:42:34 crc kubenswrapper[4755]: I1210 15:42:34.866999 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkrsq\" (UniqueName: \"kubernetes.io/projected/c0d25e1b-5c7f-468c-9723-4030952bc88c-kube-api-access-pkrsq\") pod \"c0d25e1b-5c7f-468c-9723-4030952bc88c\" (UID: \"c0d25e1b-5c7f-468c-9723-4030952bc88c\") " Dec 10 15:42:34 crc kubenswrapper[4755]: I1210 15:42:34.867295 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0d25e1b-5c7f-468c-9723-4030952bc88c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c0d25e1b-5c7f-468c-9723-4030952bc88c" (UID: "c0d25e1b-5c7f-468c-9723-4030952bc88c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:42:34 crc kubenswrapper[4755]: I1210 15:42:34.868792 4755 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0d25e1b-5c7f-468c-9723-4030952bc88c-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 15:42:34 crc kubenswrapper[4755]: I1210 15:42:34.869085 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-1b56-account-create-update-l28rl" Dec 10 15:42:34 crc kubenswrapper[4755]: I1210 15:42:34.871016 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0d25e1b-5c7f-468c-9723-4030952bc88c-kube-api-access-pkrsq" (OuterVolumeSpecName: "kube-api-access-pkrsq") pod "c0d25e1b-5c7f-468c-9723-4030952bc88c" (UID: "c0d25e1b-5c7f-468c-9723-4030952bc88c"). InnerVolumeSpecName "kube-api-access-pkrsq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:42:34 crc kubenswrapper[4755]: I1210 15:42:34.900162 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Dec 10 15:42:34 crc kubenswrapper[4755]: I1210 15:42:34.970218 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4dce77c7-9c60-4919-b20d-a954178e0b0c-operator-scripts\") pod \"4dce77c7-9c60-4919-b20d-a954178e0b0c\" (UID: \"4dce77c7-9c60-4919-b20d-a954178e0b0c\") " Dec 10 15:42:34 crc kubenswrapper[4755]: I1210 15:42:34.970333 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmnps\" (UniqueName: \"kubernetes.io/projected/4dce77c7-9c60-4919-b20d-a954178e0b0c-kube-api-access-pmnps\") pod \"4dce77c7-9c60-4919-b20d-a954178e0b0c\" (UID: \"4dce77c7-9c60-4919-b20d-a954178e0b0c\") " Dec 10 15:42:34 crc kubenswrapper[4755]: I1210 15:42:34.970834 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4dce77c7-9c60-4919-b20d-a954178e0b0c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4dce77c7-9c60-4919-b20d-a954178e0b0c" (UID: "4dce77c7-9c60-4919-b20d-a954178e0b0c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:42:34 crc kubenswrapper[4755]: I1210 15:42:34.971326 4755 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4dce77c7-9c60-4919-b20d-a954178e0b0c-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 15:42:34 crc kubenswrapper[4755]: I1210 15:42:34.971355 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkrsq\" (UniqueName: \"kubernetes.io/projected/c0d25e1b-5c7f-468c-9723-4030952bc88c-kube-api-access-pkrsq\") on node \"crc\" DevicePath \"\"" Dec 10 15:42:34 crc kubenswrapper[4755]: I1210 15:42:34.975145 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4dce77c7-9c60-4919-b20d-a954178e0b0c-kube-api-access-pmnps" (OuterVolumeSpecName: "kube-api-access-pmnps") pod "4dce77c7-9c60-4919-b20d-a954178e0b0c" (UID: "4dce77c7-9c60-4919-b20d-a954178e0b0c"). InnerVolumeSpecName "kube-api-access-pmnps". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:42:35 crc kubenswrapper[4755]: I1210 15:42:35.072980 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmnps\" (UniqueName: \"kubernetes.io/projected/4dce77c7-9c60-4919-b20d-a954178e0b0c-kube-api-access-pmnps\") on node \"crc\" DevicePath \"\"" Dec 10 15:42:35 crc kubenswrapper[4755]: I1210 15:42:35.152157 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-g7qh8"] Dec 10 15:42:35 crc kubenswrapper[4755]: E1210 15:42:35.152665 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0d25e1b-5c7f-468c-9723-4030952bc88c" containerName="mariadb-database-create" Dec 10 15:42:35 crc kubenswrapper[4755]: I1210 15:42:35.152686 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0d25e1b-5c7f-468c-9723-4030952bc88c" containerName="mariadb-database-create" Dec 10 15:42:35 crc kubenswrapper[4755]: E1210 15:42:35.152728 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dce77c7-9c60-4919-b20d-a954178e0b0c" containerName="mariadb-account-create-update" Dec 10 15:42:35 crc kubenswrapper[4755]: I1210 15:42:35.152736 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dce77c7-9c60-4919-b20d-a954178e0b0c" containerName="mariadb-account-create-update" Dec 10 15:42:35 crc kubenswrapper[4755]: I1210 15:42:35.152953 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0d25e1b-5c7f-468c-9723-4030952bc88c" containerName="mariadb-database-create" Dec 10 15:42:35 crc kubenswrapper[4755]: I1210 15:42:35.152972 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="4dce77c7-9c60-4919-b20d-a954178e0b0c" containerName="mariadb-account-create-update" Dec 10 15:42:35 crc kubenswrapper[4755]: I1210 15:42:35.153992 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-g7qh8" Dec 10 15:42:35 crc kubenswrapper[4755]: I1210 15:42:35.166185 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-g7qh8"] Dec 10 15:42:35 crc kubenswrapper[4755]: I1210 15:42:35.217412 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-jjdr5" event={"ID":"c0d25e1b-5c7f-468c-9723-4030952bc88c","Type":"ContainerDied","Data":"2fd32550304a64765654c689ebea4ab1420aecc18b0c13a7013bf465a96df75d"} Dec 10 15:42:35 crc kubenswrapper[4755]: I1210 15:42:35.217501 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2fd32550304a64765654c689ebea4ab1420aecc18b0c13a7013bf465a96df75d" Dec 10 15:42:35 crc kubenswrapper[4755]: I1210 15:42:35.217618 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-jjdr5" Dec 10 15:42:35 crc kubenswrapper[4755]: I1210 15:42:35.251602 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-1b56-account-create-update-l28rl" Dec 10 15:42:35 crc kubenswrapper[4755]: I1210 15:42:35.251664 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1b56-account-create-update-l28rl" event={"ID":"4dce77c7-9c60-4919-b20d-a954178e0b0c","Type":"ContainerDied","Data":"5eb84d1c99e7679d47cfdbb94b5d1db4d8ace0d42a6d838bb1f312432da7c596"} Dec 10 15:42:35 crc kubenswrapper[4755]: I1210 15:42:35.251697 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5eb84d1c99e7679d47cfdbb94b5d1db4d8ace0d42a6d838bb1f312432da7c596" Dec 10 15:42:35 crc kubenswrapper[4755]: I1210 15:42:35.264428 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-d0ed-account-create-update-ck22x"] Dec 10 15:42:35 crc kubenswrapper[4755]: I1210 15:42:35.266868 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-d0ed-account-create-update-ck22x" Dec 10 15:42:35 crc kubenswrapper[4755]: I1210 15:42:35.268905 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Dec 10 15:42:35 crc kubenswrapper[4755]: I1210 15:42:35.281247 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjh99\" (UniqueName: \"kubernetes.io/projected/224b1eb4-d368-436c-90b1-fe760dc26591-kube-api-access-qjh99\") pod \"keystone-db-create-g7qh8\" (UID: \"224b1eb4-d368-436c-90b1-fe760dc26591\") " pod="openstack/keystone-db-create-g7qh8" Dec 10 15:42:35 crc kubenswrapper[4755]: I1210 15:42:35.281357 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/224b1eb4-d368-436c-90b1-fe760dc26591-operator-scripts\") pod \"keystone-db-create-g7qh8\" (UID: \"224b1eb4-d368-436c-90b1-fe760dc26591\") " pod="openstack/keystone-db-create-g7qh8" Dec 10 15:42:35 crc kubenswrapper[4755]: I1210 15:42:35.287174 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-d0ed-account-create-update-ck22x"] Dec 10 15:42:35 crc kubenswrapper[4755]: I1210 15:42:35.383684 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzl95\" (UniqueName: \"kubernetes.io/projected/c9b77d20-983f-4ca4-9fa6-67ab4fb8d3ba-kube-api-access-tzl95\") pod \"keystone-d0ed-account-create-update-ck22x\" (UID: \"c9b77d20-983f-4ca4-9fa6-67ab4fb8d3ba\") " pod="openstack/keystone-d0ed-account-create-update-ck22x" Dec 10 15:42:35 crc kubenswrapper[4755]: I1210 15:42:35.383972 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjh99\" (UniqueName: \"kubernetes.io/projected/224b1eb4-d368-436c-90b1-fe760dc26591-kube-api-access-qjh99\") pod \"keystone-db-create-g7qh8\" (UID: \"224b1eb4-d368-436c-90b1-fe760dc26591\") " pod="openstack/keystone-db-create-g7qh8" Dec 10 15:42:35 crc kubenswrapper[4755]: I1210 15:42:35.384208 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/224b1eb4-d368-436c-90b1-fe760dc26591-operator-scripts\") pod \"keystone-db-create-g7qh8\" (UID: \"224b1eb4-d368-436c-90b1-fe760dc26591\") " pod="openstack/keystone-db-create-g7qh8" Dec 10 15:42:35 crc kubenswrapper[4755]: I1210 15:42:35.384369 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9b77d20-983f-4ca4-9fa6-67ab4fb8d3ba-operator-scripts\") pod \"keystone-d0ed-account-create-update-ck22x\" (UID: \"c9b77d20-983f-4ca4-9fa6-67ab4fb8d3ba\") " pod="openstack/keystone-d0ed-account-create-update-ck22x" Dec 10 15:42:35 crc kubenswrapper[4755]: I1210 15:42:35.387774 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/224b1eb4-d368-436c-90b1-fe760dc26591-operator-scripts\") pod \"keystone-db-create-g7qh8\" (UID: \"224b1eb4-d368-436c-90b1-fe760dc26591\") " pod="openstack/keystone-db-create-g7qh8" Dec 10 15:42:35 crc kubenswrapper[4755]: I1210 15:42:35.413036 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjh99\" (UniqueName: \"kubernetes.io/projected/224b1eb4-d368-436c-90b1-fe760dc26591-kube-api-access-qjh99\") pod \"keystone-db-create-g7qh8\" (UID: \"224b1eb4-d368-436c-90b1-fe760dc26591\") " pod="openstack/keystone-db-create-g7qh8" Dec 10 15:42:35 crc kubenswrapper[4755]: I1210 15:42:35.432263 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Dec 10 15:42:35 crc kubenswrapper[4755]: I1210 15:42:35.480124 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-g7qh8" Dec 10 15:42:35 crc kubenswrapper[4755]: I1210 15:42:35.487986 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9b77d20-983f-4ca4-9fa6-67ab4fb8d3ba-operator-scripts\") pod \"keystone-d0ed-account-create-update-ck22x\" (UID: \"c9b77d20-983f-4ca4-9fa6-67ab4fb8d3ba\") " pod="openstack/keystone-d0ed-account-create-update-ck22x" Dec 10 15:42:35 crc kubenswrapper[4755]: I1210 15:42:35.488143 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzl95\" (UniqueName: \"kubernetes.io/projected/c9b77d20-983f-4ca4-9fa6-67ab4fb8d3ba-kube-api-access-tzl95\") pod \"keystone-d0ed-account-create-update-ck22x\" (UID: \"c9b77d20-983f-4ca4-9fa6-67ab4fb8d3ba\") " pod="openstack/keystone-d0ed-account-create-update-ck22x" Dec 10 15:42:35 crc kubenswrapper[4755]: I1210 15:42:35.490067 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9b77d20-983f-4ca4-9fa6-67ab4fb8d3ba-operator-scripts\") pod \"keystone-d0ed-account-create-update-ck22x\" (UID: \"c9b77d20-983f-4ca4-9fa6-67ab4fb8d3ba\") " pod="openstack/keystone-d0ed-account-create-update-ck22x" Dec 10 15:42:35 crc kubenswrapper[4755]: I1210 15:42:35.516383 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzl95\" (UniqueName: \"kubernetes.io/projected/c9b77d20-983f-4ca4-9fa6-67ab4fb8d3ba-kube-api-access-tzl95\") pod \"keystone-d0ed-account-create-update-ck22x\" (UID: \"c9b77d20-983f-4ca4-9fa6-67ab4fb8d3ba\") " pod="openstack/keystone-d0ed-account-create-update-ck22x" Dec 10 15:42:35 crc kubenswrapper[4755]: I1210 15:42:35.595242 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-d0ed-account-create-update-ck22x" Dec 10 15:42:36 crc kubenswrapper[4755]: I1210 15:42:36.508520 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/72a1cce7-93cb-4fe1-9d12-3d4e19692457-etc-swift\") pod \"swift-storage-0\" (UID: \"72a1cce7-93cb-4fe1-9d12-3d4e19692457\") " pod="openstack/swift-storage-0" Dec 10 15:42:36 crc kubenswrapper[4755]: E1210 15:42:36.509083 4755 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 10 15:42:36 crc kubenswrapper[4755]: E1210 15:42:36.509103 4755 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 10 15:42:36 crc kubenswrapper[4755]: E1210 15:42:36.509148 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/72a1cce7-93cb-4fe1-9d12-3d4e19692457-etc-swift podName:72a1cce7-93cb-4fe1-9d12-3d4e19692457 nodeName:}" failed. No retries permitted until 2025-12-10 15:42:44.509133491 +0000 UTC m=+1161.110017123 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/72a1cce7-93cb-4fe1-9d12-3d4e19692457-etc-swift") pod "swift-storage-0" (UID: "72a1cce7-93cb-4fe1-9d12-3d4e19692457") : configmap "swift-ring-files" not found Dec 10 15:42:37 crc kubenswrapper[4755]: I1210 15:42:37.270443 4755 generic.go:334] "Generic (PLEG): container finished" podID="fb480bc7-6936-4208-964b-44cffd08f907" containerID="05050464a8e3abb66cfa4fb28127a52df0fde7cacd0e16b4c4b9c9d38958867c" exitCode=0 Dec 10 15:42:37 crc kubenswrapper[4755]: I1210 15:42:37.270543 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"fb480bc7-6936-4208-964b-44cffd08f907","Type":"ContainerDied","Data":"05050464a8e3abb66cfa4fb28127a52df0fde7cacd0e16b4c4b9c9d38958867c"} Dec 10 15:42:37 crc kubenswrapper[4755]: I1210 15:42:37.793656 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-hpccw" Dec 10 15:42:37 crc kubenswrapper[4755]: I1210 15:42:37.864774 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-mztj5"] Dec 10 15:42:37 crc kubenswrapper[4755]: I1210 15:42:37.865125 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-mztj5" podUID="32d65463-88f8-4c3f-910a-0c8c13a39013" containerName="dnsmasq-dns" containerID="cri-o://948b47c684c1989af9d3da1cd3e56931fc5a54264da22b4c16fe7da963b631f8" gracePeriod=10 Dec 10 15:42:38 crc kubenswrapper[4755]: I1210 15:42:38.302336 4755 generic.go:334] "Generic (PLEG): container finished" podID="32d65463-88f8-4c3f-910a-0c8c13a39013" containerID="948b47c684c1989af9d3da1cd3e56931fc5a54264da22b4c16fe7da963b631f8" exitCode=0 Dec 10 15:42:38 crc kubenswrapper[4755]: I1210 15:42:38.302420 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-mztj5" event={"ID":"32d65463-88f8-4c3f-910a-0c8c13a39013","Type":"ContainerDied","Data":"948b47c684c1989af9d3da1cd3e56931fc5a54264da22b4c16fe7da963b631f8"} Dec 10 15:42:38 crc kubenswrapper[4755]: I1210 15:42:38.305286 4755 generic.go:334] "Generic (PLEG): container finished" podID="89e8722f-e9fc-4850-bb96-e51f9859805e" containerID="2737c09a7a60eb8a396709c9839d92a46f9c8e8d9ca2c58a8da58c76ff81fbda" exitCode=0 Dec 10 
15:42:38 crc kubenswrapper[4755]: I1210 15:42:38.305352 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"89e8722f-e9fc-4850-bb96-e51f9859805e","Type":"ContainerDied","Data":"2737c09a7a60eb8a396709c9839d92a46f9c8e8d9ca2c58a8da58c76ff81fbda"} Dec 10 15:42:38 crc kubenswrapper[4755]: I1210 15:42:38.312076 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"3423b67e-8bda-4237-a94e-82cf18faf1c2","Type":"ContainerStarted","Data":"169b174f172457472c31677894024b30263613878e666e81bdc1e4f12af077ca"} Dec 10 15:42:38 crc kubenswrapper[4755]: I1210 15:42:38.399272 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-querier-5467947bf7-qpg72" Dec 10 15:42:39 crc kubenswrapper[4755]: I1210 15:42:39.360460 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="e5a3871d-6b81-4b3d-9044-fcbcf437effb" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 10 15:42:40 crc kubenswrapper[4755]: I1210 15:42:40.358771 4755 patch_prober.go:28] interesting pod/machine-config-daemon-ggt8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:42:40 crc kubenswrapper[4755]: I1210 15:42:40.359065 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:42:40 crc kubenswrapper[4755]: I1210 15:42:40.359416 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-mztj5" event={"ID":"32d65463-88f8-4c3f-910a-0c8c13a39013","Type":"ContainerDied","Data":"788f0d37bdd31bccef41d5da3b75f720bf0a5efd178f9fe726e6fd7149a22ddc"} Dec 10 15:42:40 crc kubenswrapper[4755]: I1210 15:42:40.359443 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="788f0d37bdd31bccef41d5da3b75f720bf0a5efd178f9fe726e6fd7149a22ddc" Dec 10 15:42:40 crc kubenswrapper[4755]: I1210 15:42:40.370416 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-mztj5" Dec 10 15:42:40 crc kubenswrapper[4755]: I1210 15:42:40.521139 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32d65463-88f8-4c3f-910a-0c8c13a39013-config\") pod \"32d65463-88f8-4c3f-910a-0c8c13a39013\" (UID: \"32d65463-88f8-4c3f-910a-0c8c13a39013\") " Dec 10 15:42:40 crc kubenswrapper[4755]: I1210 15:42:40.521206 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32d65463-88f8-4c3f-910a-0c8c13a39013-dns-svc\") pod \"32d65463-88f8-4c3f-910a-0c8c13a39013\" (UID: \"32d65463-88f8-4c3f-910a-0c8c13a39013\") " Dec 10 15:42:40 crc kubenswrapper[4755]: I1210 15:42:40.521341 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2tcr\" (UniqueName: \"kubernetes.io/projected/32d65463-88f8-4c3f-910a-0c8c13a39013-kube-api-access-t2tcr\") pod \"32d65463-88f8-4c3f-910a-0c8c13a39013\" (UID: \"32d65463-88f8-4c3f-910a-0c8c13a39013\") " Dec 10 15:42:40 crc kubenswrapper[4755]: I1210 15:42:40.521391 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/32d65463-88f8-4c3f-910a-0c8c13a39013-ovsdbserver-sb\") pod \"32d65463-88f8-4c3f-910a-0c8c13a39013\" (UID: \"32d65463-88f8-4c3f-910a-0c8c13a39013\") " Dec 10 15:42:40 crc kubenswrapper[4755]: I1210 15:42:40.521438 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/32d65463-88f8-4c3f-910a-0c8c13a39013-ovsdbserver-nb\") pod \"32d65463-88f8-4c3f-910a-0c8c13a39013\" (UID: \"32d65463-88f8-4c3f-910a-0c8c13a39013\") " Dec 10 15:42:40 crc kubenswrapper[4755]: I1210 15:42:40.536684 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32d65463-88f8-4c3f-910a-0c8c13a39013-kube-api-access-t2tcr" (OuterVolumeSpecName: "kube-api-access-t2tcr") pod "32d65463-88f8-4c3f-910a-0c8c13a39013" (UID: "32d65463-88f8-4c3f-910a-0c8c13a39013"). InnerVolumeSpecName "kube-api-access-t2tcr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:42:40 crc kubenswrapper[4755]: I1210 15:42:40.580681 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32d65463-88f8-4c3f-910a-0c8c13a39013-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "32d65463-88f8-4c3f-910a-0c8c13a39013" (UID: "32d65463-88f8-4c3f-910a-0c8c13a39013"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:42:40 crc kubenswrapper[4755]: I1210 15:42:40.586547 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32d65463-88f8-4c3f-910a-0c8c13a39013-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "32d65463-88f8-4c3f-910a-0c8c13a39013" (UID: "32d65463-88f8-4c3f-910a-0c8c13a39013"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:42:40 crc kubenswrapper[4755]: I1210 15:42:40.592678 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32d65463-88f8-4c3f-910a-0c8c13a39013-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "32d65463-88f8-4c3f-910a-0c8c13a39013" (UID: "32d65463-88f8-4c3f-910a-0c8c13a39013"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:42:40 crc kubenswrapper[4755]: I1210 15:42:40.592924 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32d65463-88f8-4c3f-910a-0c8c13a39013-config" (OuterVolumeSpecName: "config") pod "32d65463-88f8-4c3f-910a-0c8c13a39013" (UID: "32d65463-88f8-4c3f-910a-0c8c13a39013"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:42:40 crc kubenswrapper[4755]: I1210 15:42:40.623058 4755 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32d65463-88f8-4c3f-910a-0c8c13a39013-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:42:40 crc kubenswrapper[4755]: I1210 15:42:40.623094 4755 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32d65463-88f8-4c3f-910a-0c8c13a39013-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 10 15:42:40 crc kubenswrapper[4755]: I1210 15:42:40.623105 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2tcr\" (UniqueName: \"kubernetes.io/projected/32d65463-88f8-4c3f-910a-0c8c13a39013-kube-api-access-t2tcr\") on node \"crc\" DevicePath \"\"" Dec 10 15:42:40 crc kubenswrapper[4755]: I1210 15:42:40.623117 4755 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/32d65463-88f8-4c3f-910a-0c8c13a39013-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 10 15:42:40 crc kubenswrapper[4755]: I1210 15:42:40.623127 4755 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/32d65463-88f8-4c3f-910a-0c8c13a39013-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 10 15:42:40 crc kubenswrapper[4755]: I1210 15:42:40.657854 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-g7qh8"] Dec 10 15:42:40 crc kubenswrapper[4755]: I1210 15:42:40.766971 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-d0ed-account-create-update-ck22x"] Dec 10 15:42:40 crc kubenswrapper[4755]: W1210 15:42:40.894059 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9b77d20_983f_4ca4_9fa6_67ab4fb8d3ba.slice/crio-d9ca9e1f70a9d4bafef6b282ee9f045bdb0ab29c3954d2419f42f38431c37600 WatchSource:0}: Error finding container d9ca9e1f70a9d4bafef6b282ee9f045bdb0ab29c3954d2419f42f38431c37600: Status 404 returned error can't find the container with id d9ca9e1f70a9d4bafef6b282ee9f045bdb0ab29c3954d2419f42f38431c37600 Dec 10 15:42:40 crc kubenswrapper[4755]: W1210 15:42:40.895857 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod224b1eb4_d368_436c_90b1_fe760dc26591.slice/crio-d329f9dad4f454df2b857b1e9c5455f286237178f056bcf81ff98ff08f762ca1 WatchSource:0}: Error finding container d329f9dad4f454df2b857b1e9c5455f286237178f056bcf81ff98ff08f762ca1: Status 404 returned error can't find the container with id d329f9dad4f454df2b857b1e9c5455f286237178f056bcf81ff98ff08f762ca1 Dec 10 15:42:41 crc kubenswrapper[4755]: I1210 15:42:41.029739 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-pth9b"] Dec 10 15:42:41 crc kubenswrapper[4755]: E1210 15:42:41.030325 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32d65463-88f8-4c3f-910a-0c8c13a39013" 
containerName="init" Dec 10 15:42:41 crc kubenswrapper[4755]: I1210 15:42:41.030353 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="32d65463-88f8-4c3f-910a-0c8c13a39013" containerName="init" Dec 10 15:42:41 crc kubenswrapper[4755]: E1210 15:42:41.030401 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32d65463-88f8-4c3f-910a-0c8c13a39013" containerName="dnsmasq-dns" Dec 10 15:42:41 crc kubenswrapper[4755]: I1210 15:42:41.030410 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="32d65463-88f8-4c3f-910a-0c8c13a39013" containerName="dnsmasq-dns" Dec 10 15:42:41 crc kubenswrapper[4755]: I1210 15:42:41.030637 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="32d65463-88f8-4c3f-910a-0c8c13a39013" containerName="dnsmasq-dns" Dec 10 15:42:41 crc kubenswrapper[4755]: I1210 15:42:41.031623 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-pth9b" Dec 10 15:42:41 crc kubenswrapper[4755]: I1210 15:42:41.034160 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Dec 10 15:42:41 crc kubenswrapper[4755]: I1210 15:42:41.034532 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-dm8gw" Dec 10 15:42:41 crc kubenswrapper[4755]: I1210 15:42:41.050395 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-pth9b"] Dec 10 15:42:41 crc kubenswrapper[4755]: I1210 15:42:41.131817 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5d949f1d-5cb7-49e0-aa0b-52a615dfe4b5-db-sync-config-data\") pod \"glance-db-sync-pth9b\" (UID: \"5d949f1d-5cb7-49e0-aa0b-52a615dfe4b5\") " pod="openstack/glance-db-sync-pth9b" Dec 10 15:42:41 crc kubenswrapper[4755]: I1210 15:42:41.131950 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8q87\" (UniqueName: \"kubernetes.io/projected/5d949f1d-5cb7-49e0-aa0b-52a615dfe4b5-kube-api-access-k8q87\") pod \"glance-db-sync-pth9b\" (UID: \"5d949f1d-5cb7-49e0-aa0b-52a615dfe4b5\") " pod="openstack/glance-db-sync-pth9b" Dec 10 15:42:41 crc kubenswrapper[4755]: I1210 15:42:41.132019 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d949f1d-5cb7-49e0-aa0b-52a615dfe4b5-combined-ca-bundle\") pod \"glance-db-sync-pth9b\" (UID: \"5d949f1d-5cb7-49e0-aa0b-52a615dfe4b5\") " pod="openstack/glance-db-sync-pth9b" Dec 10 15:42:41 crc kubenswrapper[4755]: I1210 15:42:41.132042 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d949f1d-5cb7-49e0-aa0b-52a615dfe4b5-config-data\") pod \"glance-db-sync-pth9b\" (UID: \"5d949f1d-5cb7-49e0-aa0b-52a615dfe4b5\") " pod="openstack/glance-db-sync-pth9b" Dec 10 15:42:41 crc kubenswrapper[4755]: I1210 15:42:41.233988 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5d949f1d-5cb7-49e0-aa0b-52a615dfe4b5-db-sync-config-data\") pod \"glance-db-sync-pth9b\" (UID: \"5d949f1d-5cb7-49e0-aa0b-52a615dfe4b5\") " pod="openstack/glance-db-sync-pth9b" Dec 10 15:42:41 crc kubenswrapper[4755]: I1210 15:42:41.234905 4755 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-k8q87\" (UniqueName: \"kubernetes.io/projected/5d949f1d-5cb7-49e0-aa0b-52a615dfe4b5-kube-api-access-k8q87\") pod \"glance-db-sync-pth9b\" (UID: \"5d949f1d-5cb7-49e0-aa0b-52a615dfe4b5\") " pod="openstack/glance-db-sync-pth9b" Dec 10 15:42:41 crc kubenswrapper[4755]: I1210 15:42:41.235155 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d949f1d-5cb7-49e0-aa0b-52a615dfe4b5-combined-ca-bundle\") pod \"glance-db-sync-pth9b\" (UID: \"5d949f1d-5cb7-49e0-aa0b-52a615dfe4b5\") " pod="openstack/glance-db-sync-pth9b" Dec 10 15:42:41 crc kubenswrapper[4755]: I1210 15:42:41.235260 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d949f1d-5cb7-49e0-aa0b-52a615dfe4b5-config-data\") pod \"glance-db-sync-pth9b\" (UID: \"5d949f1d-5cb7-49e0-aa0b-52a615dfe4b5\") " pod="openstack/glance-db-sync-pth9b" Dec 10 15:42:41 crc kubenswrapper[4755]: I1210 15:42:41.241239 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d949f1d-5cb7-49e0-aa0b-52a615dfe4b5-combined-ca-bundle\") pod \"glance-db-sync-pth9b\" (UID: \"5d949f1d-5cb7-49e0-aa0b-52a615dfe4b5\") " pod="openstack/glance-db-sync-pth9b" Dec 10 15:42:41 crc kubenswrapper[4755]: I1210 15:42:41.241246 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5d949f1d-5cb7-49e0-aa0b-52a615dfe4b5-db-sync-config-data\") pod \"glance-db-sync-pth9b\" (UID: \"5d949f1d-5cb7-49e0-aa0b-52a615dfe4b5\") " pod="openstack/glance-db-sync-pth9b" Dec 10 15:42:41 crc kubenswrapper[4755]: I1210 15:42:41.246095 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d949f1d-5cb7-49e0-aa0b-52a615dfe4b5-config-data\") pod \"glance-db-sync-pth9b\" (UID: \"5d949f1d-5cb7-49e0-aa0b-52a615dfe4b5\") " pod="openstack/glance-db-sync-pth9b" Dec 10 15:42:41 crc kubenswrapper[4755]: I1210 15:42:41.258969 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8q87\" (UniqueName: \"kubernetes.io/projected/5d949f1d-5cb7-49e0-aa0b-52a615dfe4b5-kube-api-access-k8q87\") pod \"glance-db-sync-pth9b\" (UID: \"5d949f1d-5cb7-49e0-aa0b-52a615dfe4b5\") " pod="openstack/glance-db-sync-pth9b" Dec 10 15:42:41 crc kubenswrapper[4755]: I1210 15:42:41.370716 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-pth9b" Dec 10 15:42:41 crc kubenswrapper[4755]: I1210 15:42:41.383651 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"3423b67e-8bda-4237-a94e-82cf18faf1c2","Type":"ContainerStarted","Data":"1fb5d96c4eb5475da68dcef2a71830555a983cdbf34a863bc3d604b5008b65f0"} Dec 10 15:42:41 crc kubenswrapper[4755]: I1210 15:42:41.389739 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-g7qh8" event={"ID":"224b1eb4-d368-436c-90b1-fe760dc26591","Type":"ContainerStarted","Data":"c8b67f6f92b34f8ca46c888205ae21fe2e9c2a7675821b4f3a8e5032b999adc4"} Dec 10 15:42:41 crc kubenswrapper[4755]: I1210 15:42:41.390547 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-g7qh8" event={"ID":"224b1eb4-d368-436c-90b1-fe760dc26591","Type":"ContainerStarted","Data":"d329f9dad4f454df2b857b1e9c5455f286237178f056bcf81ff98ff08f762ca1"} Dec 10 15:42:41 crc kubenswrapper[4755]: I1210 15:42:41.394493 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"fb480bc7-6936-4208-964b-44cffd08f907","Type":"ContainerStarted","Data":"8736ae2271c56389e0799a850f399ae7691aedda4dca66c57b45b6c795cfb756"} Dec 10 15:42:41 crc kubenswrapper[4755]: I1210 15:42:41.395089 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:42:41 crc kubenswrapper[4755]: I1210 15:42:41.401098 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"251dc547-e1a7-418e-95fd-6b7e8e5c5d35","Type":"ContainerStarted","Data":"d7783d43679c54f1e3414562b811ef69ab54e808d530fb5d096b6f5f62d927de"} Dec 10 15:42:41 crc kubenswrapper[4755]: I1210 15:42:41.414051 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-qg2cr" event={"ID":"aff2e950-1295-4b9e-996a-f9a6c4a1dedd","Type":"ContainerStarted","Data":"d6ad9ef9c98cc4d482615c1aae239da3d0259370b462bb9941096dc1d54b73ec"} Dec 10 15:42:41 crc kubenswrapper[4755]: I1210 15:42:41.418143 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-g7qh8" podStartSLOduration=6.418124039 podStartE2EDuration="6.418124039s" podCreationTimestamp="2025-12-10 15:42:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:42:41.413434621 +0000 UTC m=+1158.014318263" watchObservedRunningTime="2025-12-10 15:42:41.418124039 +0000 UTC m=+1158.019007671" Dec 10 15:42:41 crc kubenswrapper[4755]: I1210 15:42:41.434887 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"89e8722f-e9fc-4850-bb96-e51f9859805e","Type":"ContainerStarted","Data":"b4fc1550b67e9a56eb7b14dcfe33b6d44f6c5c3cdeba56241c9f298b59af77a0"} Dec 10 15:42:41 crc kubenswrapper[4755]: I1210 15:42:41.436949 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Dec 10 15:42:41 crc kubenswrapper[4755]: I1210 15:42:41.446176 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-mztj5" Dec 10 15:42:41 crc kubenswrapper[4755]: I1210 15:42:41.452341 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-d0ed-account-create-update-ck22x" event={"ID":"c9b77d20-983f-4ca4-9fa6-67ab4fb8d3ba","Type":"ContainerStarted","Data":"981625c691068530e3d274326788e12ce542fa2e25b890d1c1f0c57499eb4e28"} Dec 10 15:42:41 crc kubenswrapper[4755]: I1210 15:42:41.452400 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-d0ed-account-create-update-ck22x" event={"ID":"c9b77d20-983f-4ca4-9fa6-67ab4fb8d3ba","Type":"ContainerStarted","Data":"d9ca9e1f70a9d4bafef6b282ee9f045bdb0ab29c3954d2419f42f38431c37600"} Dec 10 15:42:41 crc kubenswrapper[4755]: I1210 15:42:41.457142 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=17.649062355 podStartE2EDuration="1m14.457126959s" podCreationTimestamp="2025-12-10 15:41:27 +0000 UTC" firstStartedPulling="2025-12-10 15:41:43.51020226 +0000 UTC m=+1100.111085892" lastFinishedPulling="2025-12-10 15:42:40.318266874 +0000 UTC m=+1156.919150496" observedRunningTime="2025-12-10 15:42:41.45280867 +0000 UTC m=+1158.053692302" watchObservedRunningTime="2025-12-10 15:42:41.457126959 +0000 UTC m=+1158.058010591" Dec 10 15:42:41 crc kubenswrapper[4755]: I1210 15:42:41.490643 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=62.668339652 podStartE2EDuration="1m20.490622397s" podCreationTimestamp="2025-12-10 15:41:21 +0000 UTC" firstStartedPulling="2025-12-10 15:41:43.067054038 +0000 UTC m=+1099.667937670" lastFinishedPulling="2025-12-10 15:42:00.889336763 +0000 UTC m=+1117.490220415" observedRunningTime="2025-12-10 15:42:41.490082961 +0000 UTC m=+1158.090966603" watchObservedRunningTime="2025-12-10 15:42:41.490622397 +0000 UTC m=+1158.091506029" Dec 10 15:42:41 crc kubenswrapper[4755]: I1210 15:42:41.536443 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=62.586355911 podStartE2EDuration="1m21.536416061s" podCreationTimestamp="2025-12-10 15:41:20 +0000 UTC" firstStartedPulling="2025-12-10 15:41:42.565978406 +0000 UTC m=+1099.166862038" lastFinishedPulling="2025-12-10 15:42:01.516038556 +0000 UTC m=+1118.116922188" observedRunningTime="2025-12-10 15:42:41.523138637 +0000 UTC m=+1158.124022269" watchObservedRunningTime="2025-12-10 15:42:41.536416061 +0000 UTC m=+1158.137299723" Dec 10 15:42:41 crc kubenswrapper[4755]: I1210 15:42:41.562351 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-qg2cr" podStartSLOduration=2.107939339 podStartE2EDuration="9.562329031s" podCreationTimestamp="2025-12-10 15:42:32 +0000 UTC" firstStartedPulling="2025-12-10 15:42:33.49465828 +0000 UTC m=+1150.095541912" lastFinishedPulling="2025-12-10 15:42:40.949047972 +0000 UTC m=+1157.549931604" observedRunningTime="2025-12-10 15:42:41.56079348 +0000 UTC m=+1158.161677112" watchObservedRunningTime="2025-12-10 15:42:41.562329031 +0000 UTC m=+1158.163212663" Dec 10 15:42:41 crc kubenswrapper[4755]: I1210 15:42:41.596153 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-d0ed-account-create-update-ck22x" podStartSLOduration=6.5961314479999995 podStartE2EDuration="6.596131448s" podCreationTimestamp="2025-12-10 15:42:35 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:42:41.578237008 +0000 UTC m=+1158.179120650" watchObservedRunningTime="2025-12-10 15:42:41.596131448 +0000 UTC m=+1158.197015080" Dec 10 15:42:41 crc kubenswrapper[4755]: I1210 15:42:41.624120 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-mztj5"] Dec 10 15:42:41 crc kubenswrapper[4755]: I1210 15:42:41.631861 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-mztj5"] Dec 10 15:42:41 crc kubenswrapper[4755]: I1210 15:42:41.772453 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32d65463-88f8-4c3f-910a-0c8c13a39013" path="/var/lib/kubelet/pods/32d65463-88f8-4c3f-910a-0c8c13a39013/volumes" Dec 10 15:42:41 crc kubenswrapper[4755]: I1210 15:42:41.900294 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-pth9b"] Dec 10 15:42:41 crc kubenswrapper[4755]: W1210 15:42:41.904925 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5d949f1d_5cb7_49e0_aa0b_52a615dfe4b5.slice/crio-e0422f0aece4d26e8cbeb6db3ee7d4ec40360a305d934f5a72a67474784dfd1d WatchSource:0}: Error finding container e0422f0aece4d26e8cbeb6db3ee7d4ec40360a305d934f5a72a67474784dfd1d: Status 404 returned error can't find the container with id e0422f0aece4d26e8cbeb6db3ee7d4ec40360a305d934f5a72a67474784dfd1d Dec 10 15:42:42 crc kubenswrapper[4755]: I1210 15:42:42.456850 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"3423b67e-8bda-4237-a94e-82cf18faf1c2","Type":"ContainerStarted","Data":"c197f48a24d86b6d58df85c9f75389296a20299f3d00af6fc5891ef47255bd48"} Dec 10 15:42:42 crc kubenswrapper[4755]: I1210 15:42:42.457209 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Dec 10 15:42:42 crc kubenswrapper[4755]: I1210 15:42:42.458946 4755 generic.go:334] "Generic (PLEG): container finished" podID="224b1eb4-d368-436c-90b1-fe760dc26591" containerID="c8b67f6f92b34f8ca46c888205ae21fe2e9c2a7675821b4f3a8e5032b999adc4" exitCode=0 Dec 10 15:42:42 crc kubenswrapper[4755]: I1210 15:42:42.458987 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-g7qh8" event={"ID":"224b1eb4-d368-436c-90b1-fe760dc26591","Type":"ContainerDied","Data":"c8b67f6f92b34f8ca46c888205ae21fe2e9c2a7675821b4f3a8e5032b999adc4"} Dec 10 15:42:42 crc kubenswrapper[4755]: I1210 15:42:42.460307 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-pth9b" event={"ID":"5d949f1d-5cb7-49e0-aa0b-52a615dfe4b5","Type":"ContainerStarted","Data":"e0422f0aece4d26e8cbeb6db3ee7d4ec40360a305d934f5a72a67474784dfd1d"} Dec 10 15:42:42 crc kubenswrapper[4755]: I1210 15:42:42.462306 4755 generic.go:334] "Generic (PLEG): container finished" podID="c9b77d20-983f-4ca4-9fa6-67ab4fb8d3ba" containerID="981625c691068530e3d274326788e12ce542fa2e25b890d1c1f0c57499eb4e28" exitCode=0 Dec 10 15:42:42 crc kubenswrapper[4755]: I1210 15:42:42.462587 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-d0ed-account-create-update-ck22x" event={"ID":"c9b77d20-983f-4ca4-9fa6-67ab4fb8d3ba","Type":"ContainerDied","Data":"981625c691068530e3d274326788e12ce542fa2e25b890d1c1f0c57499eb4e28"} Dec 10 15:42:42 crc kubenswrapper[4755]: I1210 15:42:42.480325 4755 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=5.522881374 podStartE2EDuration="8.480306902s" podCreationTimestamp="2025-12-10 15:42:34 +0000 UTC" firstStartedPulling="2025-12-10 15:42:38.013509684 +0000 UTC m=+1154.614393316" lastFinishedPulling="2025-12-10 15:42:40.970935202 +0000 UTC m=+1157.571818844" observedRunningTime="2025-12-10 15:42:42.474300247 +0000 UTC m=+1159.075183879" watchObservedRunningTime="2025-12-10 15:42:42.480306902 +0000 UTC m=+1159.081190534" Dec 10 15:42:43 crc kubenswrapper[4755]: I1210 15:42:43.698388 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Dec 10 15:42:43 crc kubenswrapper[4755]: I1210 15:42:43.698489 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Dec 10 15:42:43 crc kubenswrapper[4755]: I1210 15:42:43.704837 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Dec 10 15:42:44 crc kubenswrapper[4755]: I1210 15:42:44.016316 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-g7qh8" Dec 10 15:42:44 crc kubenswrapper[4755]: I1210 15:42:44.023274 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-d0ed-account-create-update-ck22x" Dec 10 15:42:44 crc kubenswrapper[4755]: I1210 15:42:44.125010 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjh99\" (UniqueName: \"kubernetes.io/projected/224b1eb4-d368-436c-90b1-fe760dc26591-kube-api-access-qjh99\") pod \"224b1eb4-d368-436c-90b1-fe760dc26591\" (UID: \"224b1eb4-d368-436c-90b1-fe760dc26591\") " Dec 10 15:42:44 crc kubenswrapper[4755]: I1210 15:42:44.125212 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9b77d20-983f-4ca4-9fa6-67ab4fb8d3ba-operator-scripts\") pod \"c9b77d20-983f-4ca4-9fa6-67ab4fb8d3ba\" (UID: \"c9b77d20-983f-4ca4-9fa6-67ab4fb8d3ba\") " Dec 10 15:42:44 crc kubenswrapper[4755]: I1210 15:42:44.125299 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzl95\" (UniqueName: \"kubernetes.io/projected/c9b77d20-983f-4ca4-9fa6-67ab4fb8d3ba-kube-api-access-tzl95\") pod \"c9b77d20-983f-4ca4-9fa6-67ab4fb8d3ba\" (UID: \"c9b77d20-983f-4ca4-9fa6-67ab4fb8d3ba\") " Dec 10 15:42:44 crc kubenswrapper[4755]: I1210 15:42:44.125484 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/224b1eb4-d368-436c-90b1-fe760dc26591-operator-scripts\") pod \"224b1eb4-d368-436c-90b1-fe760dc26591\" (UID: \"224b1eb4-d368-436c-90b1-fe760dc26591\") " Dec 10 15:42:44 crc kubenswrapper[4755]: I1210 15:42:44.126376 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/224b1eb4-d368-436c-90b1-fe760dc26591-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "224b1eb4-d368-436c-90b1-fe760dc26591" (UID: "224b1eb4-d368-436c-90b1-fe760dc26591"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:42:44 crc kubenswrapper[4755]: I1210 15:42:44.127195 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9b77d20-983f-4ca4-9fa6-67ab4fb8d3ba-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c9b77d20-983f-4ca4-9fa6-67ab4fb8d3ba" (UID: "c9b77d20-983f-4ca4-9fa6-67ab4fb8d3ba"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:42:44 crc kubenswrapper[4755]: I1210 15:42:44.131616 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9b77d20-983f-4ca4-9fa6-67ab4fb8d3ba-kube-api-access-tzl95" (OuterVolumeSpecName: "kube-api-access-tzl95") pod "c9b77d20-983f-4ca4-9fa6-67ab4fb8d3ba" (UID: "c9b77d20-983f-4ca4-9fa6-67ab4fb8d3ba"). InnerVolumeSpecName "kube-api-access-tzl95". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:42:44 crc kubenswrapper[4755]: I1210 15:42:44.131959 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/224b1eb4-d368-436c-90b1-fe760dc26591-kube-api-access-qjh99" (OuterVolumeSpecName: "kube-api-access-qjh99") pod "224b1eb4-d368-436c-90b1-fe760dc26591" (UID: "224b1eb4-d368-436c-90b1-fe760dc26591"). InnerVolumeSpecName "kube-api-access-qjh99". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:42:44 crc kubenswrapper[4755]: I1210 15:42:44.227449 4755 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/224b1eb4-d368-436c-90b1-fe760dc26591-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 15:42:44 crc kubenswrapper[4755]: I1210 15:42:44.227855 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjh99\" (UniqueName: \"kubernetes.io/projected/224b1eb4-d368-436c-90b1-fe760dc26591-kube-api-access-qjh99\") on node \"crc\" DevicePath \"\"" Dec 10 15:42:44 crc kubenswrapper[4755]: I1210 15:42:44.227967 4755 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9b77d20-983f-4ca4-9fa6-67ab4fb8d3ba-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 15:42:44 crc kubenswrapper[4755]: I1210 15:42:44.228053 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzl95\" (UniqueName: \"kubernetes.io/projected/c9b77d20-983f-4ca4-9fa6-67ab4fb8d3ba-kube-api-access-tzl95\") on node \"crc\" DevicePath \"\"" Dec 10 15:42:44 crc kubenswrapper[4755]: I1210 15:42:44.496672 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-d0ed-account-create-update-ck22x" Dec 10 15:42:44 crc kubenswrapper[4755]: I1210 15:42:44.496682 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-d0ed-account-create-update-ck22x" event={"ID":"c9b77d20-983f-4ca4-9fa6-67ab4fb8d3ba","Type":"ContainerDied","Data":"d9ca9e1f70a9d4bafef6b282ee9f045bdb0ab29c3954d2419f42f38431c37600"} Dec 10 15:42:44 crc kubenswrapper[4755]: I1210 15:42:44.496722 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9ca9e1f70a9d4bafef6b282ee9f045bdb0ab29c3954d2419f42f38431c37600" Dec 10 15:42:44 crc kubenswrapper[4755]: I1210 15:42:44.497914 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-g7qh8" Dec 10 15:42:44 crc kubenswrapper[4755]: I1210 15:42:44.497905 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-g7qh8" event={"ID":"224b1eb4-d368-436c-90b1-fe760dc26591","Type":"ContainerDied","Data":"d329f9dad4f454df2b857b1e9c5455f286237178f056bcf81ff98ff08f762ca1"} Dec 10 15:42:44 crc kubenswrapper[4755]: I1210 15:42:44.498021 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d329f9dad4f454df2b857b1e9c5455f286237178f056bcf81ff98ff08f762ca1" Dec 10 15:42:44 crc kubenswrapper[4755]: I1210 15:42:44.499086 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Dec 10 15:42:44 crc kubenswrapper[4755]: I1210 15:42:44.532980 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/72a1cce7-93cb-4fe1-9d12-3d4e19692457-etc-swift\") pod \"swift-storage-0\" (UID: \"72a1cce7-93cb-4fe1-9d12-3d4e19692457\") " pod="openstack/swift-storage-0" Dec 10 15:42:44 crc kubenswrapper[4755]: E1210 15:42:44.533167 4755 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 10 15:42:44 crc kubenswrapper[4755]: E1210 15:42:44.533189 4755 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 10 15:42:44 crc kubenswrapper[4755]: E1210 15:42:44.533243 4755 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/72a1cce7-93cb-4fe1-9d12-3d4e19692457-etc-swift podName:72a1cce7-93cb-4fe1-9d12-3d4e19692457 nodeName:}" failed. No retries permitted until 2025-12-10 15:43:00.533223789 +0000 UTC m=+1177.134107421 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/72a1cce7-93cb-4fe1-9d12-3d4e19692457-etc-swift") pod "swift-storage-0" (UID: "72a1cce7-93cb-4fe1-9d12-3d4e19692457") : configmap "swift-ring-files" not found Dec 10 15:42:47 crc kubenswrapper[4755]: I1210 15:42:47.257749 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 10 15:42:47 crc kubenswrapper[4755]: I1210 15:42:47.438023 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-q6n4p" podUID="46b6df85-96b1-4583-a80f-97a5d980cc72" containerName="ovn-controller" probeResult="failure" output=< Dec 10 15:42:47 crc kubenswrapper[4755]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Dec 10 15:42:47 crc kubenswrapper[4755]: > Dec 10 15:42:47 crc kubenswrapper[4755]: I1210 15:42:47.456897 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-x972h" Dec 10 15:42:47 crc kubenswrapper[4755]: I1210 15:42:47.484216 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-x972h" Dec 10 15:42:47 crc kubenswrapper[4755]: I1210 15:42:47.547638 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="251dc547-e1a7-418e-95fd-6b7e8e5c5d35" containerName="prometheus" containerID="cri-o://452d0ac5fe8231a7081c7ffc0e4c94bb5b29f399f5b3db2cca63f1368ea1f265" gracePeriod=600 Dec 10 15:42:47 crc kubenswrapper[4755]: I1210 15:42:47.547820 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="251dc547-e1a7-418e-95fd-6b7e8e5c5d35" containerName="thanos-sidecar" containerID="cri-o://d7783d43679c54f1e3414562b811ef69ab54e808d530fb5d096b6f5f62d927de" gracePeriod=600 Dec 10 15:42:47 crc kubenswrapper[4755]: I1210 15:42:47.547817 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="251dc547-e1a7-418e-95fd-6b7e8e5c5d35" containerName="config-reloader" containerID="cri-o://425c4aaa12065585d9b150bb9bcf966d2169d96d7b0ba5754b7474e16b675184" gracePeriod=600 Dec 10 15:42:47 crc kubenswrapper[4755]: I1210 15:42:47.697667 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-q6n4p-config-rt44m"] Dec 10 15:42:47 crc kubenswrapper[4755]: E1210 15:42:47.698125 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9b77d20-983f-4ca4-9fa6-67ab4fb8d3ba" containerName="mariadb-account-create-update" Dec 10 15:42:47 crc kubenswrapper[4755]: I1210 15:42:47.698143 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9b77d20-983f-4ca4-9fa6-67ab4fb8d3ba" containerName="mariadb-account-create-update" Dec 10 15:42:47 crc kubenswrapper[4755]: E1210 15:42:47.698160 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="224b1eb4-d368-436c-90b1-fe760dc26591" containerName="mariadb-database-create" Dec 10 15:42:47 crc kubenswrapper[4755]: I1210 15:42:47.698168 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="224b1eb4-d368-436c-90b1-fe760dc26591" containerName="mariadb-database-create" Dec 10 15:42:47 crc kubenswrapper[4755]: I1210 15:42:47.698413 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9b77d20-983f-4ca4-9fa6-67ab4fb8d3ba" containerName="mariadb-account-create-update" Dec 10 15:42:47 crc kubenswrapper[4755]: 
I1210 15:42:47.698440 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="224b1eb4-d368-436c-90b1-fe760dc26591" containerName="mariadb-database-create" Dec 10 15:42:47 crc kubenswrapper[4755]: I1210 15:42:47.699443 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-q6n4p-config-rt44m" Dec 10 15:42:47 crc kubenswrapper[4755]: I1210 15:42:47.703983 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Dec 10 15:42:47 crc kubenswrapper[4755]: I1210 15:42:47.724104 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-q6n4p-config-rt44m"] Dec 10 15:42:47 crc kubenswrapper[4755]: I1210 15:42:47.784972 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b-scripts\") pod \"ovn-controller-q6n4p-config-rt44m\" (UID: \"49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b\") " pod="openstack/ovn-controller-q6n4p-config-rt44m" Dec 10 15:42:47 crc kubenswrapper[4755]: I1210 15:42:47.785045 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b-var-run\") pod \"ovn-controller-q6n4p-config-rt44m\" (UID: \"49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b\") " pod="openstack/ovn-controller-q6n4p-config-rt44m" Dec 10 15:42:47 crc kubenswrapper[4755]: I1210 15:42:47.785088 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b-additional-scripts\") pod \"ovn-controller-q6n4p-config-rt44m\" (UID: \"49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b\") " pod="openstack/ovn-controller-q6n4p-config-rt44m" Dec 10 15:42:47 crc kubenswrapper[4755]: I1210 15:42:47.785292 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9l8wj\" (UniqueName: \"kubernetes.io/projected/49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b-kube-api-access-9l8wj\") pod \"ovn-controller-q6n4p-config-rt44m\" (UID: \"49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b\") " pod="openstack/ovn-controller-q6n4p-config-rt44m" Dec 10 15:42:47 crc kubenswrapper[4755]: I1210 15:42:47.785375 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b-var-log-ovn\") pod \"ovn-controller-q6n4p-config-rt44m\" (UID: \"49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b\") " pod="openstack/ovn-controller-q6n4p-config-rt44m" Dec 10 15:42:47 crc kubenswrapper[4755]: I1210 15:42:47.785671 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b-var-run-ovn\") pod \"ovn-controller-q6n4p-config-rt44m\" (UID: \"49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b\") " pod="openstack/ovn-controller-q6n4p-config-rt44m" Dec 10 15:42:47 crc kubenswrapper[4755]: I1210 15:42:47.888025 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b-var-run\") pod \"ovn-controller-q6n4p-config-rt44m\" (UID: \"49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b\") " 
pod="openstack/ovn-controller-q6n4p-config-rt44m" Dec 10 15:42:47 crc kubenswrapper[4755]: I1210 15:42:47.888118 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b-additional-scripts\") pod \"ovn-controller-q6n4p-config-rt44m\" (UID: \"49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b\") " pod="openstack/ovn-controller-q6n4p-config-rt44m" Dec 10 15:42:47 crc kubenswrapper[4755]: I1210 15:42:47.888188 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9l8wj\" (UniqueName: \"kubernetes.io/projected/49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b-kube-api-access-9l8wj\") pod \"ovn-controller-q6n4p-config-rt44m\" (UID: \"49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b\") " pod="openstack/ovn-controller-q6n4p-config-rt44m" Dec 10 15:42:47 crc kubenswrapper[4755]: I1210 15:42:47.888244 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b-var-log-ovn\") pod \"ovn-controller-q6n4p-config-rt44m\" (UID: \"49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b\") " pod="openstack/ovn-controller-q6n4p-config-rt44m" Dec 10 15:42:47 crc kubenswrapper[4755]: I1210 15:42:47.888311 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b-var-run-ovn\") pod \"ovn-controller-q6n4p-config-rt44m\" (UID: \"49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b\") " pod="openstack/ovn-controller-q6n4p-config-rt44m" Dec 10 15:42:47 crc kubenswrapper[4755]: I1210 15:42:47.888370 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b-scripts\") pod \"ovn-controller-q6n4p-config-rt44m\" (UID: \"49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b\") " pod="openstack/ovn-controller-q6n4p-config-rt44m" Dec 10 15:42:47 crc kubenswrapper[4755]: I1210 15:42:47.888573 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b-var-log-ovn\") pod \"ovn-controller-q6n4p-config-rt44m\" (UID: \"49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b\") " pod="openstack/ovn-controller-q6n4p-config-rt44m" Dec 10 15:42:47 crc kubenswrapper[4755]: I1210 15:42:47.888658 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b-var-run-ovn\") pod \"ovn-controller-q6n4p-config-rt44m\" (UID: \"49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b\") " pod="openstack/ovn-controller-q6n4p-config-rt44m" Dec 10 15:42:47 crc kubenswrapper[4755]: I1210 15:42:47.888716 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b-var-run\") pod \"ovn-controller-q6n4p-config-rt44m\" (UID: \"49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b\") " pod="openstack/ovn-controller-q6n4p-config-rt44m" Dec 10 15:42:47 crc kubenswrapper[4755]: I1210 15:42:47.888810 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b-additional-scripts\") pod \"ovn-controller-q6n4p-config-rt44m\" (UID: \"49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b\") " 
pod="openstack/ovn-controller-q6n4p-config-rt44m" Dec 10 15:42:47 crc kubenswrapper[4755]: I1210 15:42:47.890258 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b-scripts\") pod \"ovn-controller-q6n4p-config-rt44m\" (UID: \"49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b\") " pod="openstack/ovn-controller-q6n4p-config-rt44m" Dec 10 15:42:47 crc kubenswrapper[4755]: I1210 15:42:47.911210 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9l8wj\" (UniqueName: \"kubernetes.io/projected/49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b-kube-api-access-9l8wj\") pod \"ovn-controller-q6n4p-config-rt44m\" (UID: \"49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b\") " pod="openstack/ovn-controller-q6n4p-config-rt44m" Dec 10 15:42:48 crc kubenswrapper[4755]: I1210 15:42:48.109261 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-q6n4p-config-rt44m" Dec 10 15:42:48 crc kubenswrapper[4755]: I1210 15:42:48.404418 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-q6n4p-config-rt44m"] Dec 10 15:42:48 crc kubenswrapper[4755]: W1210 15:42:48.413772 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod49d1edb3_5fcb_4dd6_9f34_6c8f7114aa1b.slice/crio-0f1885cbc6adfaf6278dacfee798c8f172ba4b5cf6c1ad4740de5a7c9f38cd1a WatchSource:0}: Error finding container 0f1885cbc6adfaf6278dacfee798c8f172ba4b5cf6c1ad4740de5a7c9f38cd1a: Status 404 returned error can't find the container with id 0f1885cbc6adfaf6278dacfee798c8f172ba4b5cf6c1ad4740de5a7c9f38cd1a Dec 10 15:42:48 crc kubenswrapper[4755]: I1210 15:42:48.580702 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-q6n4p-config-rt44m" event={"ID":"49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b","Type":"ContainerStarted","Data":"0f1885cbc6adfaf6278dacfee798c8f172ba4b5cf6c1ad4740de5a7c9f38cd1a"} Dec 10 15:42:48 crc kubenswrapper[4755]: I1210 15:42:48.583813 4755 generic.go:334] "Generic (PLEG): container finished" podID="251dc547-e1a7-418e-95fd-6b7e8e5c5d35" containerID="d7783d43679c54f1e3414562b811ef69ab54e808d530fb5d096b6f5f62d927de" exitCode=0 Dec 10 15:42:48 crc kubenswrapper[4755]: I1210 15:42:48.583844 4755 generic.go:334] "Generic (PLEG): container finished" podID="251dc547-e1a7-418e-95fd-6b7e8e5c5d35" containerID="425c4aaa12065585d9b150bb9bcf966d2169d96d7b0ba5754b7474e16b675184" exitCode=0 Dec 10 15:42:48 crc kubenswrapper[4755]: I1210 15:42:48.583853 4755 generic.go:334] "Generic (PLEG): container finished" podID="251dc547-e1a7-418e-95fd-6b7e8e5c5d35" containerID="452d0ac5fe8231a7081c7ffc0e4c94bb5b29f399f5b3db2cca63f1368ea1f265" exitCode=0 Dec 10 15:42:48 crc kubenswrapper[4755]: I1210 15:42:48.583886 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"251dc547-e1a7-418e-95fd-6b7e8e5c5d35","Type":"ContainerDied","Data":"d7783d43679c54f1e3414562b811ef69ab54e808d530fb5d096b6f5f62d927de"} Dec 10 15:42:48 crc kubenswrapper[4755]: I1210 15:42:48.583910 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"251dc547-e1a7-418e-95fd-6b7e8e5c5d35","Type":"ContainerDied","Data":"425c4aaa12065585d9b150bb9bcf966d2169d96d7b0ba5754b7474e16b675184"} Dec 10 15:42:48 crc kubenswrapper[4755]: I1210 15:42:48.583919 4755 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"251dc547-e1a7-418e-95fd-6b7e8e5c5d35","Type":"ContainerDied","Data":"452d0ac5fe8231a7081c7ffc0e4c94bb5b29f399f5b3db2cca63f1368ea1f265"} Dec 10 15:42:48 crc kubenswrapper[4755]: I1210 15:42:48.942252 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.018362 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/251dc547-e1a7-418e-95fd-6b7e8e5c5d35-prometheus-metric-storage-rulefiles-0\") pod \"251dc547-e1a7-418e-95fd-6b7e8e5c5d35\" (UID: \"251dc547-e1a7-418e-95fd-6b7e8e5c5d35\") " Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.018454 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/251dc547-e1a7-418e-95fd-6b7e8e5c5d35-thanos-prometheus-http-client-file\") pod \"251dc547-e1a7-418e-95fd-6b7e8e5c5d35\" (UID: \"251dc547-e1a7-418e-95fd-6b7e8e5c5d35\") " Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.018566 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54p6w\" (UniqueName: \"kubernetes.io/projected/251dc547-e1a7-418e-95fd-6b7e8e5c5d35-kube-api-access-54p6w\") pod \"251dc547-e1a7-418e-95fd-6b7e8e5c5d35\" (UID: \"251dc547-e1a7-418e-95fd-6b7e8e5c5d35\") " Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.018603 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/251dc547-e1a7-418e-95fd-6b7e8e5c5d35-tls-assets\") pod \"251dc547-e1a7-418e-95fd-6b7e8e5c5d35\" (UID: \"251dc547-e1a7-418e-95fd-6b7e8e5c5d35\") " Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.018622 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/251dc547-e1a7-418e-95fd-6b7e8e5c5d35-config-out\") pod \"251dc547-e1a7-418e-95fd-6b7e8e5c5d35\" (UID: \"251dc547-e1a7-418e-95fd-6b7e8e5c5d35\") " Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.018806 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-79f2f86d-f9f7-48cb-8d8e-8ad3ded58407\") pod \"251dc547-e1a7-418e-95fd-6b7e8e5c5d35\" (UID: \"251dc547-e1a7-418e-95fd-6b7e8e5c5d35\") " Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.018864 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/251dc547-e1a7-418e-95fd-6b7e8e5c5d35-config\") pod \"251dc547-e1a7-418e-95fd-6b7e8e5c5d35\" (UID: \"251dc547-e1a7-418e-95fd-6b7e8e5c5d35\") " Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.018891 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/251dc547-e1a7-418e-95fd-6b7e8e5c5d35-web-config\") pod \"251dc547-e1a7-418e-95fd-6b7e8e5c5d35\" (UID: \"251dc547-e1a7-418e-95fd-6b7e8e5c5d35\") " Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.020026 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/251dc547-e1a7-418e-95fd-6b7e8e5c5d35-prometheus-metric-storage-rulefiles-0" 
(OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "251dc547-e1a7-418e-95fd-6b7e8e5c5d35" (UID: "251dc547-e1a7-418e-95fd-6b7e8e5c5d35"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.028057 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/251dc547-e1a7-418e-95fd-6b7e8e5c5d35-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "251dc547-e1a7-418e-95fd-6b7e8e5c5d35" (UID: "251dc547-e1a7-418e-95fd-6b7e8e5c5d35"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.028097 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/251dc547-e1a7-418e-95fd-6b7e8e5c5d35-config" (OuterVolumeSpecName: "config") pod "251dc547-e1a7-418e-95fd-6b7e8e5c5d35" (UID: "251dc547-e1a7-418e-95fd-6b7e8e5c5d35"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.034826 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/251dc547-e1a7-418e-95fd-6b7e8e5c5d35-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "251dc547-e1a7-418e-95fd-6b7e8e5c5d35" (UID: "251dc547-e1a7-418e-95fd-6b7e8e5c5d35"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.034927 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/251dc547-e1a7-418e-95fd-6b7e8e5c5d35-config-out" (OuterVolumeSpecName: "config-out") pod "251dc547-e1a7-418e-95fd-6b7e8e5c5d35" (UID: "251dc547-e1a7-418e-95fd-6b7e8e5c5d35"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.036981 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/251dc547-e1a7-418e-95fd-6b7e8e5c5d35-kube-api-access-54p6w" (OuterVolumeSpecName: "kube-api-access-54p6w") pod "251dc547-e1a7-418e-95fd-6b7e8e5c5d35" (UID: "251dc547-e1a7-418e-95fd-6b7e8e5c5d35"). InnerVolumeSpecName "kube-api-access-54p6w". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.055095 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-79f2f86d-f9f7-48cb-8d8e-8ad3ded58407" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "251dc547-e1a7-418e-95fd-6b7e8e5c5d35" (UID: "251dc547-e1a7-418e-95fd-6b7e8e5c5d35"). InnerVolumeSpecName "pvc-79f2f86d-f9f7-48cb-8d8e-8ad3ded58407". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.064795 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/251dc547-e1a7-418e-95fd-6b7e8e5c5d35-web-config" (OuterVolumeSpecName: "web-config") pod "251dc547-e1a7-418e-95fd-6b7e8e5c5d35" (UID: "251dc547-e1a7-418e-95fd-6b7e8e5c5d35"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.121502 4755 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/251dc547-e1a7-418e-95fd-6b7e8e5c5d35-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.121536 4755 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/251dc547-e1a7-418e-95fd-6b7e8e5c5d35-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.121547 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-54p6w\" (UniqueName: \"kubernetes.io/projected/251dc547-e1a7-418e-95fd-6b7e8e5c5d35-kube-api-access-54p6w\") on node \"crc\" DevicePath \"\"" Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.121558 4755 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/251dc547-e1a7-418e-95fd-6b7e8e5c5d35-tls-assets\") on node \"crc\" DevicePath \"\"" Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.121568 4755 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/251dc547-e1a7-418e-95fd-6b7e8e5c5d35-config-out\") on node \"crc\" DevicePath \"\"" Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.121599 4755 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-79f2f86d-f9f7-48cb-8d8e-8ad3ded58407\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-79f2f86d-f9f7-48cb-8d8e-8ad3ded58407\") on node \"crc\" " Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.121621 4755 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/251dc547-e1a7-418e-95fd-6b7e8e5c5d35-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.121630 4755 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/251dc547-e1a7-418e-95fd-6b7e8e5c5d35-web-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.153329 4755 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.153481 4755 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-79f2f86d-f9f7-48cb-8d8e-8ad3ded58407" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-79f2f86d-f9f7-48cb-8d8e-8ad3ded58407") on node "crc" Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.222944 4755 reconciler_common.go:293] "Volume detached for volume \"pvc-79f2f86d-f9f7-48cb-8d8e-8ad3ded58407\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-79f2f86d-f9f7-48cb-8d8e-8ad3ded58407\") on node \"crc\" DevicePath \"\"" Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.354417 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="e5a3871d-6b81-4b3d-9044-fcbcf437effb" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.594151 4755 generic.go:334] "Generic (PLEG): container finished" podID="49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b" containerID="e6f2e2237f54e7208333bc2fd411a233dee4e8ec6575fb8a13da88a568f0e066" exitCode=0 Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.594460 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-q6n4p-config-rt44m" event={"ID":"49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b","Type":"ContainerDied","Data":"e6f2e2237f54e7208333bc2fd411a233dee4e8ec6575fb8a13da88a568f0e066"} Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.601740 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"251dc547-e1a7-418e-95fd-6b7e8e5c5d35","Type":"ContainerDied","Data":"dc452af1c9a792fdf759eae41abe656ab9a955c8c54ebc6ae21f33337f32131e"} Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.601995 4755 scope.go:117] "RemoveContainer" containerID="d7783d43679c54f1e3414562b811ef69ab54e808d530fb5d096b6f5f62d927de" Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.602288 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.607870 4755 generic.go:334] "Generic (PLEG): container finished" podID="aff2e950-1295-4b9e-996a-f9a6c4a1dedd" containerID="d6ad9ef9c98cc4d482615c1aae239da3d0259370b462bb9941096dc1d54b73ec" exitCode=0 Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.607917 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-qg2cr" event={"ID":"aff2e950-1295-4b9e-996a-f9a6c4a1dedd","Type":"ContainerDied","Data":"d6ad9ef9c98cc4d482615c1aae239da3d0259370b462bb9941096dc1d54b73ec"} Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.671553 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.687515 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.697992 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 10 15:42:49 crc kubenswrapper[4755]: E1210 15:42:49.698548 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="251dc547-e1a7-418e-95fd-6b7e8e5c5d35" containerName="config-reloader" Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.698561 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="251dc547-e1a7-418e-95fd-6b7e8e5c5d35" containerName="config-reloader" Dec 10 15:42:49 crc kubenswrapper[4755]: E1210 15:42:49.698583 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="251dc547-e1a7-418e-95fd-6b7e8e5c5d35" containerName="init-config-reloader" Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.698592 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="251dc547-e1a7-418e-95fd-6b7e8e5c5d35" containerName="init-config-reloader" Dec 10 15:42:49 crc kubenswrapper[4755]: E1210 15:42:49.698601 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="251dc547-e1a7-418e-95fd-6b7e8e5c5d35" containerName="prometheus" Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.698608 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="251dc547-e1a7-418e-95fd-6b7e8e5c5d35" containerName="prometheus" Dec 10 15:42:49 crc kubenswrapper[4755]: E1210 15:42:49.698634 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="251dc547-e1a7-418e-95fd-6b7e8e5c5d35" containerName="thanos-sidecar" Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.698641 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="251dc547-e1a7-418e-95fd-6b7e8e5c5d35" containerName="thanos-sidecar" Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.698829 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="251dc547-e1a7-418e-95fd-6b7e8e5c5d35" containerName="config-reloader" Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.698842 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="251dc547-e1a7-418e-95fd-6b7e8e5c5d35" containerName="prometheus" Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.698860 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="251dc547-e1a7-418e-95fd-6b7e8e5c5d35" containerName="thanos-sidecar" Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.701013 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.711706 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.711950 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.712055 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.712278 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.712444 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-gw2gg" Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.717694 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.724370 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.725970 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.776275 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="251dc547-e1a7-418e-95fd-6b7e8e5c5d35" path="/var/lib/kubelet/pods/251dc547-e1a7-418e-95fd-6b7e8e5c5d35/volumes" Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.832681 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/28273c51-8829-45f1-9edb-4f30a83b66e3-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"28273c51-8829-45f1-9edb-4f30a83b66e3\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.832729 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/28273c51-8829-45f1-9edb-4f30a83b66e3-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"28273c51-8829-45f1-9edb-4f30a83b66e3\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.832762 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/28273c51-8829-45f1-9edb-4f30a83b66e3-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"28273c51-8829-45f1-9edb-4f30a83b66e3\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.832786 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/28273c51-8829-45f1-9edb-4f30a83b66e3-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"28273c51-8829-45f1-9edb-4f30a83b66e3\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.832816 4755 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28273c51-8829-45f1-9edb-4f30a83b66e3-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"28273c51-8829-45f1-9edb-4f30a83b66e3\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.832837 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/28273c51-8829-45f1-9edb-4f30a83b66e3-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"28273c51-8829-45f1-9edb-4f30a83b66e3\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.832859 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/28273c51-8829-45f1-9edb-4f30a83b66e3-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"28273c51-8829-45f1-9edb-4f30a83b66e3\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.832890 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-79f2f86d-f9f7-48cb-8d8e-8ad3ded58407\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-79f2f86d-f9f7-48cb-8d8e-8ad3ded58407\") pod \"prometheus-metric-storage-0\" (UID: \"28273c51-8829-45f1-9edb-4f30a83b66e3\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.832922 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdv2m\" (UniqueName: \"kubernetes.io/projected/28273c51-8829-45f1-9edb-4f30a83b66e3-kube-api-access-mdv2m\") pod \"prometheus-metric-storage-0\" (UID: \"28273c51-8829-45f1-9edb-4f30a83b66e3\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.832949 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/28273c51-8829-45f1-9edb-4f30a83b66e3-config\") pod \"prometheus-metric-storage-0\" (UID: \"28273c51-8829-45f1-9edb-4f30a83b66e3\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.832979 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/28273c51-8829-45f1-9edb-4f30a83b66e3-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"28273c51-8829-45f1-9edb-4f30a83b66e3\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.934682 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/28273c51-8829-45f1-9edb-4f30a83b66e3-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"28273c51-8829-45f1-9edb-4f30a83b66e3\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.934727 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/28273c51-8829-45f1-9edb-4f30a83b66e3-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"28273c51-8829-45f1-9edb-4f30a83b66e3\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.934756 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/28273c51-8829-45f1-9edb-4f30a83b66e3-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"28273c51-8829-45f1-9edb-4f30a83b66e3\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.934775 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/28273c51-8829-45f1-9edb-4f30a83b66e3-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"28273c51-8829-45f1-9edb-4f30a83b66e3\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.934802 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28273c51-8829-45f1-9edb-4f30a83b66e3-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"28273c51-8829-45f1-9edb-4f30a83b66e3\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.934825 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/28273c51-8829-45f1-9edb-4f30a83b66e3-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"28273c51-8829-45f1-9edb-4f30a83b66e3\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.934847 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/28273c51-8829-45f1-9edb-4f30a83b66e3-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"28273c51-8829-45f1-9edb-4f30a83b66e3\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.934880 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-79f2f86d-f9f7-48cb-8d8e-8ad3ded58407\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-79f2f86d-f9f7-48cb-8d8e-8ad3ded58407\") pod \"prometheus-metric-storage-0\" (UID: \"28273c51-8829-45f1-9edb-4f30a83b66e3\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.934913 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdv2m\" (UniqueName: \"kubernetes.io/projected/28273c51-8829-45f1-9edb-4f30a83b66e3-kube-api-access-mdv2m\") pod \"prometheus-metric-storage-0\" (UID: \"28273c51-8829-45f1-9edb-4f30a83b66e3\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.934951 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/28273c51-8829-45f1-9edb-4f30a83b66e3-config\") pod \"prometheus-metric-storage-0\" (UID: 
\"28273c51-8829-45f1-9edb-4f30a83b66e3\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.934990 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/28273c51-8829-45f1-9edb-4f30a83b66e3-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"28273c51-8829-45f1-9edb-4f30a83b66e3\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.935682 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/28273c51-8829-45f1-9edb-4f30a83b66e3-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"28273c51-8829-45f1-9edb-4f30a83b66e3\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.940136 4755 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.940196 4755 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-79f2f86d-f9f7-48cb-8d8e-8ad3ded58407\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-79f2f86d-f9f7-48cb-8d8e-8ad3ded58407\") pod \"prometheus-metric-storage-0\" (UID: \"28273c51-8829-45f1-9edb-4f30a83b66e3\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d5e41e57ba6a2e605b0cf3bdcb01431c97f507b808900a5d6e4da4950adfc002/globalmount\"" pod="openstack/prometheus-metric-storage-0" Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.948401 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/28273c51-8829-45f1-9edb-4f30a83b66e3-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"28273c51-8829-45f1-9edb-4f30a83b66e3\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.948843 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/28273c51-8829-45f1-9edb-4f30a83b66e3-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"28273c51-8829-45f1-9edb-4f30a83b66e3\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.950006 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/28273c51-8829-45f1-9edb-4f30a83b66e3-config\") pod \"prometheus-metric-storage-0\" (UID: \"28273c51-8829-45f1-9edb-4f30a83b66e3\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.950623 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/28273c51-8829-45f1-9edb-4f30a83b66e3-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"28273c51-8829-45f1-9edb-4f30a83b66e3\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.952119 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: 
\"kubernetes.io/secret/28273c51-8829-45f1-9edb-4f30a83b66e3-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"28273c51-8829-45f1-9edb-4f30a83b66e3\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.958871 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28273c51-8829-45f1-9edb-4f30a83b66e3-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"28273c51-8829-45f1-9edb-4f30a83b66e3\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.964346 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdv2m\" (UniqueName: \"kubernetes.io/projected/28273c51-8829-45f1-9edb-4f30a83b66e3-kube-api-access-mdv2m\") pod \"prometheus-metric-storage-0\" (UID: \"28273c51-8829-45f1-9edb-4f30a83b66e3\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.970684 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/28273c51-8829-45f1-9edb-4f30a83b66e3-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"28273c51-8829-45f1-9edb-4f30a83b66e3\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:42:49 crc kubenswrapper[4755]: I1210 15:42:49.983173 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/28273c51-8829-45f1-9edb-4f30a83b66e3-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"28273c51-8829-45f1-9edb-4f30a83b66e3\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:42:50 crc kubenswrapper[4755]: I1210 15:42:50.061334 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-79f2f86d-f9f7-48cb-8d8e-8ad3ded58407\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-79f2f86d-f9f7-48cb-8d8e-8ad3ded58407\") pod \"prometheus-metric-storage-0\" (UID: \"28273c51-8829-45f1-9edb-4f30a83b66e3\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:42:50 crc kubenswrapper[4755]: I1210 15:42:50.324547 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Dec 10 15:42:51 crc kubenswrapper[4755]: I1210 15:42:51.716694 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="251dc547-e1a7-418e-95fd-6b7e8e5c5d35" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.113:9090/-/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 10 15:42:52 crc kubenswrapper[4755]: I1210 15:42:52.311840 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Dec 10 15:42:52 crc kubenswrapper[4755]: I1210 15:42:52.491769 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-q6n4p" Dec 10 15:42:52 crc kubenswrapper[4755]: I1210 15:42:52.785680 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:42:52 crc kubenswrapper[4755]: I1210 15:42:52.857609 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-db-create-2872x"] Dec 10 15:42:52 crc kubenswrapper[4755]: I1210 15:42:52.858754 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-create-2872x" Dec 10 15:42:52 crc kubenswrapper[4755]: I1210 15:42:52.882905 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-create-2872x"] Dec 10 15:42:52 crc kubenswrapper[4755]: I1210 15:42:52.971651 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-9b26-account-create-update-q9h4x"] Dec 10 15:42:52 crc kubenswrapper[4755]: I1210 15:42:52.977793 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-9b26-account-create-update-q9h4x" Dec 10 15:42:52 crc kubenswrapper[4755]: I1210 15:42:52.981478 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Dec 10 15:42:52 crc kubenswrapper[4755]: I1210 15:42:52.985481 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-fxbjx"] Dec 10 15:42:52 crc kubenswrapper[4755]: I1210 15:42:52.986627 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-fxbjx" Dec 10 15:42:52 crc kubenswrapper[4755]: I1210 15:42:52.996840 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/006b2612-d428-4655-9b11-7805124e026d-operator-scripts\") pod \"cloudkitty-db-create-2872x\" (UID: \"006b2612-d428-4655-9b11-7805124e026d\") " pod="openstack/cloudkitty-db-create-2872x" Dec 10 15:42:52 crc kubenswrapper[4755]: I1210 15:42:52.996887 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkf2q\" (UniqueName: \"kubernetes.io/projected/006b2612-d428-4655-9b11-7805124e026d-kube-api-access-hkf2q\") pod \"cloudkitty-db-create-2872x\" (UID: \"006b2612-d428-4655-9b11-7805124e026d\") " pod="openstack/cloudkitty-db-create-2872x" Dec 10 15:42:52 crc kubenswrapper[4755]: I1210 15:42:52.997104 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-9b26-account-create-update-q9h4x"] Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.017244 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-fxbjx"] Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.098272 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d99bcd9-160f-4acf-b5a2-048e2cf4e69d-operator-scripts\") pod \"barbican-9b26-account-create-update-q9h4x\" (UID: \"5d99bcd9-160f-4acf-b5a2-048e2cf4e69d\") " pod="openstack/barbican-9b26-account-create-update-q9h4x" Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.098612 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/006b2612-d428-4655-9b11-7805124e026d-operator-scripts\") pod \"cloudkitty-db-create-2872x\" (UID: \"006b2612-d428-4655-9b11-7805124e026d\") " pod="openstack/cloudkitty-db-create-2872x" Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.098647 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkf2q\" (UniqueName: \"kubernetes.io/projected/006b2612-d428-4655-9b11-7805124e026d-kube-api-access-hkf2q\") pod \"cloudkitty-db-create-2872x\" (UID: \"006b2612-d428-4655-9b11-7805124e026d\") " pod="openstack/cloudkitty-db-create-2872x" Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.098697 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnnlj\" (UniqueName: \"kubernetes.io/projected/5c9e6f66-2358-4311-98ff-066fe8edd720-kube-api-access-nnnlj\") pod \"cinder-db-create-fxbjx\" (UID: \"5c9e6f66-2358-4311-98ff-066fe8edd720\") " pod="openstack/cinder-db-create-fxbjx" Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.098754 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c9e6f66-2358-4311-98ff-066fe8edd720-operator-scripts\") pod \"cinder-db-create-fxbjx\" (UID: \"5c9e6f66-2358-4311-98ff-066fe8edd720\") " pod="openstack/cinder-db-create-fxbjx" Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.098811 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jqhp\" (UniqueName: \"kubernetes.io/projected/5d99bcd9-160f-4acf-b5a2-048e2cf4e69d-kube-api-access-5jqhp\") pod 
\"barbican-9b26-account-create-update-q9h4x\" (UID: \"5d99bcd9-160f-4acf-b5a2-048e2cf4e69d\") " pod="openstack/barbican-9b26-account-create-update-q9h4x" Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.099309 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/006b2612-d428-4655-9b11-7805124e026d-operator-scripts\") pod \"cloudkitty-db-create-2872x\" (UID: \"006b2612-d428-4655-9b11-7805124e026d\") " pod="openstack/cloudkitty-db-create-2872x" Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.122702 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkf2q\" (UniqueName: \"kubernetes.io/projected/006b2612-d428-4655-9b11-7805124e026d-kube-api-access-hkf2q\") pod \"cloudkitty-db-create-2872x\" (UID: \"006b2612-d428-4655-9b11-7805124e026d\") " pod="openstack/cloudkitty-db-create-2872x" Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.155082 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-84g2q"] Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.156421 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-84g2q" Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.170707 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-84g2q"] Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.192259 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-create-2872x" Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.193503 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-6m9cc"] Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.194706 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-6m9cc" Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.197687 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-tp2mw" Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.197848 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.198041 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.200293 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnnlj\" (UniqueName: \"kubernetes.io/projected/5c9e6f66-2358-4311-98ff-066fe8edd720-kube-api-access-nnnlj\") pod \"cinder-db-create-fxbjx\" (UID: \"5c9e6f66-2358-4311-98ff-066fe8edd720\") " pod="openstack/cinder-db-create-fxbjx" Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.200359 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c9e6f66-2358-4311-98ff-066fe8edd720-operator-scripts\") pod \"cinder-db-create-fxbjx\" (UID: \"5c9e6f66-2358-4311-98ff-066fe8edd720\") " pod="openstack/cinder-db-create-fxbjx" Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.200409 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jqhp\" (UniqueName: \"kubernetes.io/projected/5d99bcd9-160f-4acf-b5a2-048e2cf4e69d-kube-api-access-5jqhp\") pod \"barbican-9b26-account-create-update-q9h4x\" (UID: \"5d99bcd9-160f-4acf-b5a2-048e2cf4e69d\") " pod="openstack/barbican-9b26-account-create-update-q9h4x" Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.200430 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d99bcd9-160f-4acf-b5a2-048e2cf4e69d-operator-scripts\") pod \"barbican-9b26-account-create-update-q9h4x\" (UID: \"5d99bcd9-160f-4acf-b5a2-048e2cf4e69d\") " pod="openstack/barbican-9b26-account-create-update-q9h4x" Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.205223 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d99bcd9-160f-4acf-b5a2-048e2cf4e69d-operator-scripts\") pod \"barbican-9b26-account-create-update-q9h4x\" (UID: \"5d99bcd9-160f-4acf-b5a2-048e2cf4e69d\") " pod="openstack/barbican-9b26-account-create-update-q9h4x" Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.205258 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.206106 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c9e6f66-2358-4311-98ff-066fe8edd720-operator-scripts\") pod \"cinder-db-create-fxbjx\" (UID: \"5c9e6f66-2358-4311-98ff-066fe8edd720\") " pod="openstack/cinder-db-create-fxbjx" Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.222247 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-6m9cc"] Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.237770 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jqhp\" (UniqueName: \"kubernetes.io/projected/5d99bcd9-160f-4acf-b5a2-048e2cf4e69d-kube-api-access-5jqhp\") pod 
\"barbican-9b26-account-create-update-q9h4x\" (UID: \"5d99bcd9-160f-4acf-b5a2-048e2cf4e69d\") " pod="openstack/barbican-9b26-account-create-update-q9h4x" Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.256997 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnnlj\" (UniqueName: \"kubernetes.io/projected/5c9e6f66-2358-4311-98ff-066fe8edd720-kube-api-access-nnnlj\") pod \"cinder-db-create-fxbjx\" (UID: \"5c9e6f66-2358-4311-98ff-066fe8edd720\") " pod="openstack/cinder-db-create-fxbjx" Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.307449 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmw7p\" (UniqueName: \"kubernetes.io/projected/86e3285c-eddd-4bd4-bca9-8d9ccf2019e7-kube-api-access-cmw7p\") pod \"keystone-db-sync-6m9cc\" (UID: \"86e3285c-eddd-4bd4-bca9-8d9ccf2019e7\") " pod="openstack/keystone-db-sync-6m9cc" Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.307693 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4qmh\" (UniqueName: \"kubernetes.io/projected/4b387caa-3f36-4771-8046-f41a609fc2ba-kube-api-access-l4qmh\") pod \"barbican-db-create-84g2q\" (UID: \"4b387caa-3f36-4771-8046-f41a609fc2ba\") " pod="openstack/barbican-db-create-84g2q" Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.307780 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86e3285c-eddd-4bd4-bca9-8d9ccf2019e7-config-data\") pod \"keystone-db-sync-6m9cc\" (UID: \"86e3285c-eddd-4bd4-bca9-8d9ccf2019e7\") " pod="openstack/keystone-db-sync-6m9cc" Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.307883 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86e3285c-eddd-4bd4-bca9-8d9ccf2019e7-combined-ca-bundle\") pod \"keystone-db-sync-6m9cc\" (UID: \"86e3285c-eddd-4bd4-bca9-8d9ccf2019e7\") " pod="openstack/keystone-db-sync-6m9cc" Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.307977 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b387caa-3f36-4771-8046-f41a609fc2ba-operator-scripts\") pod \"barbican-db-create-84g2q\" (UID: \"4b387caa-3f36-4771-8046-f41a609fc2ba\") " pod="openstack/barbican-db-create-84g2q" Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.314102 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-9b26-account-create-update-q9h4x" Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.347275 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-a190-account-create-update-brt5j"] Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.354149 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-fxbjx" Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.364750 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-a190-account-create-update-brt5j" Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.391343 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-a190-account-create-update-brt5j"] Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.424566 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.425992 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmw7p\" (UniqueName: \"kubernetes.io/projected/86e3285c-eddd-4bd4-bca9-8d9ccf2019e7-kube-api-access-cmw7p\") pod \"keystone-db-sync-6m9cc\" (UID: \"86e3285c-eddd-4bd4-bca9-8d9ccf2019e7\") " pod="openstack/keystone-db-sync-6m9cc" Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.426064 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4qmh\" (UniqueName: \"kubernetes.io/projected/4b387caa-3f36-4771-8046-f41a609fc2ba-kube-api-access-l4qmh\") pod \"barbican-db-create-84g2q\" (UID: \"4b387caa-3f36-4771-8046-f41a609fc2ba\") " pod="openstack/barbican-db-create-84g2q" Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.426116 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86e3285c-eddd-4bd4-bca9-8d9ccf2019e7-config-data\") pod \"keystone-db-sync-6m9cc\" (UID: \"86e3285c-eddd-4bd4-bca9-8d9ccf2019e7\") " pod="openstack/keystone-db-sync-6m9cc" Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.426158 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86e3285c-eddd-4bd4-bca9-8d9ccf2019e7-combined-ca-bundle\") pod \"keystone-db-sync-6m9cc\" (UID: \"86e3285c-eddd-4bd4-bca9-8d9ccf2019e7\") " pod="openstack/keystone-db-sync-6m9cc" Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.426201 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b387caa-3f36-4771-8046-f41a609fc2ba-operator-scripts\") pod \"barbican-db-create-84g2q\" (UID: \"4b387caa-3f36-4771-8046-f41a609fc2ba\") " pod="openstack/barbican-db-create-84g2q" Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.427677 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b387caa-3f36-4771-8046-f41a609fc2ba-operator-scripts\") pod \"barbican-db-create-84g2q\" (UID: \"4b387caa-3f36-4771-8046-f41a609fc2ba\") " pod="openstack/barbican-db-create-84g2q" Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.436207 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86e3285c-eddd-4bd4-bca9-8d9ccf2019e7-config-data\") pod \"keystone-db-sync-6m9cc\" (UID: \"86e3285c-eddd-4bd4-bca9-8d9ccf2019e7\") " pod="openstack/keystone-db-sync-6m9cc" Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.443548 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-eb88-account-create-update-t2zx9"] Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.444912 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-eb88-account-create-update-t2zx9" Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.447353 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86e3285c-eddd-4bd4-bca9-8d9ccf2019e7-combined-ca-bundle\") pod \"keystone-db-sync-6m9cc\" (UID: \"86e3285c-eddd-4bd4-bca9-8d9ccf2019e7\") " pod="openstack/keystone-db-sync-6m9cc" Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.459318 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.464537 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-eb88-account-create-update-t2zx9"] Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.468832 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4qmh\" (UniqueName: \"kubernetes.io/projected/4b387caa-3f36-4771-8046-f41a609fc2ba-kube-api-access-l4qmh\") pod \"barbican-db-create-84g2q\" (UID: \"4b387caa-3f36-4771-8046-f41a609fc2ba\") " pod="openstack/barbican-db-create-84g2q" Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.478184 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmw7p\" (UniqueName: \"kubernetes.io/projected/86e3285c-eddd-4bd4-bca9-8d9ccf2019e7-kube-api-access-cmw7p\") pod \"keystone-db-sync-6m9cc\" (UID: \"86e3285c-eddd-4bd4-bca9-8d9ccf2019e7\") " pod="openstack/keystone-db-sync-6m9cc" Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.494881 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-84g2q" Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.507516 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-rs5rh"] Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.509278 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-rs5rh" Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.525124 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-6m9cc" Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.527123 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8fb0565-641d-4083-a004-068c8f2da61f-operator-scripts\") pod \"neutron-a190-account-create-update-brt5j\" (UID: \"d8fb0565-641d-4083-a004-068c8f2da61f\") " pod="openstack/neutron-a190-account-create-update-brt5j" Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.527166 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9wf6\" (UniqueName: \"kubernetes.io/projected/d8fb0565-641d-4083-a004-068c8f2da61f-kube-api-access-d9wf6\") pod \"neutron-a190-account-create-update-brt5j\" (UID: \"d8fb0565-641d-4083-a004-068c8f2da61f\") " pod="openstack/neutron-a190-account-create-update-brt5j" Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.527250 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqb6c\" (UniqueName: \"kubernetes.io/projected/4b53b622-751c-4098-95ef-86d7bbb6f03b-kube-api-access-hqb6c\") pod \"cinder-eb88-account-create-update-t2zx9\" (UID: \"4b53b622-751c-4098-95ef-86d7bbb6f03b\") " pod="openstack/cinder-eb88-account-create-update-t2zx9" Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.527298 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b53b622-751c-4098-95ef-86d7bbb6f03b-operator-scripts\") pod \"cinder-eb88-account-create-update-t2zx9\" (UID: \"4b53b622-751c-4098-95ef-86d7bbb6f03b\") " pod="openstack/cinder-eb88-account-create-update-t2zx9" Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.543050 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-2b7c-account-create-update-dtbb2"] Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.544170 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-2b7c-account-create-update-dtbb2" Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.554169 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-db-secret" Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.557682 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-rs5rh"] Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.587194 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-2b7c-account-create-update-dtbb2"] Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.628932 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b53b622-751c-4098-95ef-86d7bbb6f03b-operator-scripts\") pod \"cinder-eb88-account-create-update-t2zx9\" (UID: \"4b53b622-751c-4098-95ef-86d7bbb6f03b\") " pod="openstack/cinder-eb88-account-create-update-t2zx9" Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.629024 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8fb0565-641d-4083-a004-068c8f2da61f-operator-scripts\") pod \"neutron-a190-account-create-update-brt5j\" (UID: \"d8fb0565-641d-4083-a004-068c8f2da61f\") " pod="openstack/neutron-a190-account-create-update-brt5j" Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.629058 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9wf6\" (UniqueName: \"kubernetes.io/projected/d8fb0565-641d-4083-a004-068c8f2da61f-kube-api-access-d9wf6\") pod \"neutron-a190-account-create-update-brt5j\" (UID: \"d8fb0565-641d-4083-a004-068c8f2da61f\") " pod="openstack/neutron-a190-account-create-update-brt5j" Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.629102 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2zgd\" (UniqueName: \"kubernetes.io/projected/5c5b35e4-4153-4f26-89bb-80e480230209-kube-api-access-c2zgd\") pod \"cloudkitty-2b7c-account-create-update-dtbb2\" (UID: \"5c5b35e4-4153-4f26-89bb-80e480230209\") " pod="openstack/cloudkitty-2b7c-account-create-update-dtbb2" Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.629141 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-922n9\" (UniqueName: \"kubernetes.io/projected/7e112f6c-6906-4c34-9a05-ed827a4cd2ff-kube-api-access-922n9\") pod \"neutron-db-create-rs5rh\" (UID: \"7e112f6c-6906-4c34-9a05-ed827a4cd2ff\") " pod="openstack/neutron-db-create-rs5rh" Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.629357 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c5b35e4-4153-4f26-89bb-80e480230209-operator-scripts\") pod \"cloudkitty-2b7c-account-create-update-dtbb2\" (UID: \"5c5b35e4-4153-4f26-89bb-80e480230209\") " pod="openstack/cloudkitty-2b7c-account-create-update-dtbb2" Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.629393 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e112f6c-6906-4c34-9a05-ed827a4cd2ff-operator-scripts\") pod \"neutron-db-create-rs5rh\" (UID: \"7e112f6c-6906-4c34-9a05-ed827a4cd2ff\") " pod="openstack/neutron-db-create-rs5rh" 
Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.629442 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqb6c\" (UniqueName: \"kubernetes.io/projected/4b53b622-751c-4098-95ef-86d7bbb6f03b-kube-api-access-hqb6c\") pod \"cinder-eb88-account-create-update-t2zx9\" (UID: \"4b53b622-751c-4098-95ef-86d7bbb6f03b\") " pod="openstack/cinder-eb88-account-create-update-t2zx9" Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.630590 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b53b622-751c-4098-95ef-86d7bbb6f03b-operator-scripts\") pod \"cinder-eb88-account-create-update-t2zx9\" (UID: \"4b53b622-751c-4098-95ef-86d7bbb6f03b\") " pod="openstack/cinder-eb88-account-create-update-t2zx9" Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.631205 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8fb0565-641d-4083-a004-068c8f2da61f-operator-scripts\") pod \"neutron-a190-account-create-update-brt5j\" (UID: \"d8fb0565-641d-4083-a004-068c8f2da61f\") " pod="openstack/neutron-a190-account-create-update-brt5j" Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.652043 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9wf6\" (UniqueName: \"kubernetes.io/projected/d8fb0565-641d-4083-a004-068c8f2da61f-kube-api-access-d9wf6\") pod \"neutron-a190-account-create-update-brt5j\" (UID: \"d8fb0565-641d-4083-a004-068c8f2da61f\") " pod="openstack/neutron-a190-account-create-update-brt5j" Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.653160 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqb6c\" (UniqueName: \"kubernetes.io/projected/4b53b622-751c-4098-95ef-86d7bbb6f03b-kube-api-access-hqb6c\") pod \"cinder-eb88-account-create-update-t2zx9\" (UID: \"4b53b622-751c-4098-95ef-86d7bbb6f03b\") " pod="openstack/cinder-eb88-account-create-update-t2zx9" Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.726875 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-a190-account-create-update-brt5j" Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.730919 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2zgd\" (UniqueName: \"kubernetes.io/projected/5c5b35e4-4153-4f26-89bb-80e480230209-kube-api-access-c2zgd\") pod \"cloudkitty-2b7c-account-create-update-dtbb2\" (UID: \"5c5b35e4-4153-4f26-89bb-80e480230209\") " pod="openstack/cloudkitty-2b7c-account-create-update-dtbb2" Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.730978 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-922n9\" (UniqueName: \"kubernetes.io/projected/7e112f6c-6906-4c34-9a05-ed827a4cd2ff-kube-api-access-922n9\") pod \"neutron-db-create-rs5rh\" (UID: \"7e112f6c-6906-4c34-9a05-ed827a4cd2ff\") " pod="openstack/neutron-db-create-rs5rh" Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.731006 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c5b35e4-4153-4f26-89bb-80e480230209-operator-scripts\") pod \"cloudkitty-2b7c-account-create-update-dtbb2\" (UID: \"5c5b35e4-4153-4f26-89bb-80e480230209\") " pod="openstack/cloudkitty-2b7c-account-create-update-dtbb2" Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.731040 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e112f6c-6906-4c34-9a05-ed827a4cd2ff-operator-scripts\") pod \"neutron-db-create-rs5rh\" (UID: \"7e112f6c-6906-4c34-9a05-ed827a4cd2ff\") " pod="openstack/neutron-db-create-rs5rh" Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.731861 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e112f6c-6906-4c34-9a05-ed827a4cd2ff-operator-scripts\") pod \"neutron-db-create-rs5rh\" (UID: \"7e112f6c-6906-4c34-9a05-ed827a4cd2ff\") " pod="openstack/neutron-db-create-rs5rh" Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.732203 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c5b35e4-4153-4f26-89bb-80e480230209-operator-scripts\") pod \"cloudkitty-2b7c-account-create-update-dtbb2\" (UID: \"5c5b35e4-4153-4f26-89bb-80e480230209\") " pod="openstack/cloudkitty-2b7c-account-create-update-dtbb2" Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.747541 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2zgd\" (UniqueName: \"kubernetes.io/projected/5c5b35e4-4153-4f26-89bb-80e480230209-kube-api-access-c2zgd\") pod \"cloudkitty-2b7c-account-create-update-dtbb2\" (UID: \"5c5b35e4-4153-4f26-89bb-80e480230209\") " pod="openstack/cloudkitty-2b7c-account-create-update-dtbb2" Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.749514 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-922n9\" (UniqueName: \"kubernetes.io/projected/7e112f6c-6906-4c34-9a05-ed827a4cd2ff-kube-api-access-922n9\") pod \"neutron-db-create-rs5rh\" (UID: \"7e112f6c-6906-4c34-9a05-ed827a4cd2ff\") " pod="openstack/neutron-db-create-rs5rh" Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.845315 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-eb88-account-create-update-t2zx9" Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.869599 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-rs5rh" Dec 10 15:42:53 crc kubenswrapper[4755]: I1210 15:42:53.879048 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-2b7c-account-create-update-dtbb2" Dec 10 15:42:54 crc kubenswrapper[4755]: I1210 15:42:54.969402 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Dec 10 15:42:59 crc kubenswrapper[4755]: I1210 15:42:59.352448 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:43:00 crc kubenswrapper[4755]: I1210 15:43:00.212097 4755 scope.go:117] "RemoveContainer" containerID="425c4aaa12065585d9b150bb9bcf966d2169d96d7b0ba5754b7474e16b675184" Dec 10 15:43:00 crc kubenswrapper[4755]: E1210 15:43:00.230831 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Dec 10 15:43:00 crc kubenswrapper[4755]: E1210 15:43:00.231014 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k8q87,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-pth9b_openstack(5d949f1d-5cb7-49e0-aa0b-52a615dfe4b5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" 
logger="UnhandledError" Dec 10 15:43:00 crc kubenswrapper[4755]: E1210 15:43:00.232550 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-pth9b" podUID="5d949f1d-5cb7-49e0-aa0b-52a615dfe4b5" Dec 10 15:43:00 crc kubenswrapper[4755]: I1210 15:43:00.291594 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-q6n4p-config-rt44m" Dec 10 15:43:00 crc kubenswrapper[4755]: I1210 15:43:00.309067 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-qg2cr" Dec 10 15:43:00 crc kubenswrapper[4755]: I1210 15:43:00.358235 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aff2e950-1295-4b9e-996a-f9a6c4a1dedd-combined-ca-bundle\") pod \"aff2e950-1295-4b9e-996a-f9a6c4a1dedd\" (UID: \"aff2e950-1295-4b9e-996a-f9a6c4a1dedd\") " Dec 10 15:43:00 crc kubenswrapper[4755]: I1210 15:43:00.358298 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b-var-run\") pod \"49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b\" (UID: \"49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b\") " Dec 10 15:43:00 crc kubenswrapper[4755]: I1210 15:43:00.358432 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b-scripts\") pod \"49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b\" (UID: \"49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b\") " Dec 10 15:43:00 crc kubenswrapper[4755]: I1210 15:43:00.358508 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b-var-run-ovn\") pod \"49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b\" (UID: \"49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b\") " Dec 10 15:43:00 crc kubenswrapper[4755]: I1210 15:43:00.358548 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aff2e950-1295-4b9e-996a-f9a6c4a1dedd-scripts\") pod \"aff2e950-1295-4b9e-996a-f9a6c4a1dedd\" (UID: \"aff2e950-1295-4b9e-996a-f9a6c4a1dedd\") " Dec 10 15:43:00 crc kubenswrapper[4755]: I1210 15:43:00.358583 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b-additional-scripts\") pod \"49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b\" (UID: \"49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b\") " Dec 10 15:43:00 crc kubenswrapper[4755]: I1210 15:43:00.358643 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/aff2e950-1295-4b9e-996a-f9a6c4a1dedd-dispersionconf\") pod \"aff2e950-1295-4b9e-996a-f9a6c4a1dedd\" (UID: \"aff2e950-1295-4b9e-996a-f9a6c4a1dedd\") " Dec 10 15:43:00 crc kubenswrapper[4755]: I1210 15:43:00.358689 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/aff2e950-1295-4b9e-996a-f9a6c4a1dedd-ring-data-devices\") pod \"aff2e950-1295-4b9e-996a-f9a6c4a1dedd\" (UID: \"aff2e950-1295-4b9e-996a-f9a6c4a1dedd\") 
" Dec 10 15:43:00 crc kubenswrapper[4755]: I1210 15:43:00.358731 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxdtc\" (UniqueName: \"kubernetes.io/projected/aff2e950-1295-4b9e-996a-f9a6c4a1dedd-kube-api-access-rxdtc\") pod \"aff2e950-1295-4b9e-996a-f9a6c4a1dedd\" (UID: \"aff2e950-1295-4b9e-996a-f9a6c4a1dedd\") " Dec 10 15:43:00 crc kubenswrapper[4755]: I1210 15:43:00.358763 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9l8wj\" (UniqueName: \"kubernetes.io/projected/49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b-kube-api-access-9l8wj\") pod \"49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b\" (UID: \"49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b\") " Dec 10 15:43:00 crc kubenswrapper[4755]: I1210 15:43:00.358792 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b-var-log-ovn\") pod \"49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b\" (UID: \"49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b\") " Dec 10 15:43:00 crc kubenswrapper[4755]: I1210 15:43:00.358878 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/aff2e950-1295-4b9e-996a-f9a6c4a1dedd-etc-swift\") pod \"aff2e950-1295-4b9e-996a-f9a6c4a1dedd\" (UID: \"aff2e950-1295-4b9e-996a-f9a6c4a1dedd\") " Dec 10 15:43:00 crc kubenswrapper[4755]: I1210 15:43:00.358933 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/aff2e950-1295-4b9e-996a-f9a6c4a1dedd-swiftconf\") pod \"aff2e950-1295-4b9e-996a-f9a6c4a1dedd\" (UID: \"aff2e950-1295-4b9e-996a-f9a6c4a1dedd\") " Dec 10 15:43:00 crc kubenswrapper[4755]: I1210 15:43:00.362662 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b" (UID: "49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:43:00 crc kubenswrapper[4755]: I1210 15:43:00.362711 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b" (UID: "49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 15:43:00 crc kubenswrapper[4755]: I1210 15:43:00.363330 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b-scripts" (OuterVolumeSpecName: "scripts") pod "49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b" (UID: "49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:43:00 crc kubenswrapper[4755]: I1210 15:43:00.363390 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b-var-run" (OuterVolumeSpecName: "var-run") pod "49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b" (UID: "49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 15:43:00 crc kubenswrapper[4755]: I1210 15:43:00.363760 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aff2e950-1295-4b9e-996a-f9a6c4a1dedd-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "aff2e950-1295-4b9e-996a-f9a6c4a1dedd" (UID: "aff2e950-1295-4b9e-996a-f9a6c4a1dedd"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:43:00 crc kubenswrapper[4755]: I1210 15:43:00.363826 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b" (UID: "49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 15:43:00 crc kubenswrapper[4755]: I1210 15:43:00.366625 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aff2e950-1295-4b9e-996a-f9a6c4a1dedd-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "aff2e950-1295-4b9e-996a-f9a6c4a1dedd" (UID: "aff2e950-1295-4b9e-996a-f9a6c4a1dedd"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:43:00 crc kubenswrapper[4755]: I1210 15:43:00.368666 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aff2e950-1295-4b9e-996a-f9a6c4a1dedd-kube-api-access-rxdtc" (OuterVolumeSpecName: "kube-api-access-rxdtc") pod "aff2e950-1295-4b9e-996a-f9a6c4a1dedd" (UID: "aff2e950-1295-4b9e-996a-f9a6c4a1dedd"). InnerVolumeSpecName "kube-api-access-rxdtc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:43:00 crc kubenswrapper[4755]: I1210 15:43:00.386887 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b-kube-api-access-9l8wj" (OuterVolumeSpecName: "kube-api-access-9l8wj") pod "49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b" (UID: "49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b"). InnerVolumeSpecName "kube-api-access-9l8wj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:43:00 crc kubenswrapper[4755]: I1210 15:43:00.394093 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aff2e950-1295-4b9e-996a-f9a6c4a1dedd-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "aff2e950-1295-4b9e-996a-f9a6c4a1dedd" (UID: "aff2e950-1295-4b9e-996a-f9a6c4a1dedd"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:43:00 crc kubenswrapper[4755]: I1210 15:43:00.419749 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aff2e950-1295-4b9e-996a-f9a6c4a1dedd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aff2e950-1295-4b9e-996a-f9a6c4a1dedd" (UID: "aff2e950-1295-4b9e-996a-f9a6c4a1dedd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:43:00 crc kubenswrapper[4755]: I1210 15:43:00.420677 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aff2e950-1295-4b9e-996a-f9a6c4a1dedd-scripts" (OuterVolumeSpecName: "scripts") pod "aff2e950-1295-4b9e-996a-f9a6c4a1dedd" (UID: "aff2e950-1295-4b9e-996a-f9a6c4a1dedd"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:43:00 crc kubenswrapper[4755]: I1210 15:43:00.429174 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aff2e950-1295-4b9e-996a-f9a6c4a1dedd-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "aff2e950-1295-4b9e-996a-f9a6c4a1dedd" (UID: "aff2e950-1295-4b9e-996a-f9a6c4a1dedd"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:43:00 crc kubenswrapper[4755]: I1210 15:43:00.458585 4755 scope.go:117] "RemoveContainer" containerID="452d0ac5fe8231a7081c7ffc0e4c94bb5b29f399f5b3db2cca63f1368ea1f265" Dec 10 15:43:00 crc kubenswrapper[4755]: I1210 15:43:00.461760 4755 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 15:43:00 crc kubenswrapper[4755]: I1210 15:43:00.461790 4755 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b-var-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 10 15:43:00 crc kubenswrapper[4755]: I1210 15:43:00.461802 4755 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aff2e950-1295-4b9e-996a-f9a6c4a1dedd-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 15:43:00 crc kubenswrapper[4755]: I1210 15:43:00.461816 4755 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b-additional-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 15:43:00 crc kubenswrapper[4755]: I1210 15:43:00.461827 4755 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/aff2e950-1295-4b9e-996a-f9a6c4a1dedd-dispersionconf\") on node \"crc\" DevicePath \"\"" Dec 10 15:43:00 crc kubenswrapper[4755]: I1210 15:43:00.461839 4755 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/aff2e950-1295-4b9e-996a-f9a6c4a1dedd-ring-data-devices\") on node \"crc\" DevicePath \"\"" Dec 10 15:43:00 crc kubenswrapper[4755]: I1210 15:43:00.461850 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxdtc\" (UniqueName: \"kubernetes.io/projected/aff2e950-1295-4b9e-996a-f9a6c4a1dedd-kube-api-access-rxdtc\") on node \"crc\" DevicePath \"\"" Dec 10 15:43:00 crc kubenswrapper[4755]: I1210 15:43:00.461860 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9l8wj\" (UniqueName: \"kubernetes.io/projected/49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b-kube-api-access-9l8wj\") on node \"crc\" DevicePath \"\"" Dec 10 15:43:00 crc kubenswrapper[4755]: I1210 15:43:00.461869 4755 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b-var-log-ovn\") on node \"crc\" DevicePath \"\"" Dec 10 15:43:00 crc kubenswrapper[4755]: I1210 15:43:00.461879 4755 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/aff2e950-1295-4b9e-996a-f9a6c4a1dedd-etc-swift\") on node \"crc\" DevicePath \"\"" Dec 10 15:43:00 crc kubenswrapper[4755]: I1210 15:43:00.461889 4755 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/aff2e950-1295-4b9e-996a-f9a6c4a1dedd-swiftconf\") on node 
\"crc\" DevicePath \"\"" Dec 10 15:43:00 crc kubenswrapper[4755]: I1210 15:43:00.461898 4755 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aff2e950-1295-4b9e-996a-f9a6c4a1dedd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:43:00 crc kubenswrapper[4755]: I1210 15:43:00.461911 4755 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b-var-run\") on node \"crc\" DevicePath \"\"" Dec 10 15:43:00 crc kubenswrapper[4755]: I1210 15:43:00.545638 4755 scope.go:117] "RemoveContainer" containerID="e4e9be106e57560a42f428ea3a4d1a14f2562c6147b323f800eb0df17b6b3319" Dec 10 15:43:00 crc kubenswrapper[4755]: I1210 15:43:00.565930 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/72a1cce7-93cb-4fe1-9d12-3d4e19692457-etc-swift\") pod \"swift-storage-0\" (UID: \"72a1cce7-93cb-4fe1-9d12-3d4e19692457\") " pod="openstack/swift-storage-0" Dec 10 15:43:00 crc kubenswrapper[4755]: I1210 15:43:00.590015 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/72a1cce7-93cb-4fe1-9d12-3d4e19692457-etc-swift\") pod \"swift-storage-0\" (UID: \"72a1cce7-93cb-4fe1-9d12-3d4e19692457\") " pod="openstack/swift-storage-0" Dec 10 15:43:00 crc kubenswrapper[4755]: I1210 15:43:00.718188 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-q6n4p-config-rt44m" event={"ID":"49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b","Type":"ContainerDied","Data":"0f1885cbc6adfaf6278dacfee798c8f172ba4b5cf6c1ad4740de5a7c9f38cd1a"} Dec 10 15:43:00 crc kubenswrapper[4755]: I1210 15:43:00.718241 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f1885cbc6adfaf6278dacfee798c8f172ba4b5cf6c1ad4740de5a7c9f38cd1a" Dec 10 15:43:00 crc kubenswrapper[4755]: I1210 15:43:00.718325 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-q6n4p-config-rt44m" Dec 10 15:43:00 crc kubenswrapper[4755]: I1210 15:43:00.722386 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-qg2cr" Dec 10 15:43:00 crc kubenswrapper[4755]: I1210 15:43:00.723288 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-qg2cr" event={"ID":"aff2e950-1295-4b9e-996a-f9a6c4a1dedd","Type":"ContainerDied","Data":"59b021ce6148d021c5c4c738623bed47b88bbd5936870bd65be1e6a983abb9b5"} Dec 10 15:43:00 crc kubenswrapper[4755]: I1210 15:43:00.723345 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59b021ce6148d021c5c4c738623bed47b88bbd5936870bd65be1e6a983abb9b5" Dec 10 15:43:00 crc kubenswrapper[4755]: E1210 15:43:00.726723 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-pth9b" podUID="5d949f1d-5cb7-49e0-aa0b-52a615dfe4b5" Dec 10 15:43:00 crc kubenswrapper[4755]: I1210 15:43:00.781948 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Dec 10 15:43:00 crc kubenswrapper[4755]: I1210 15:43:00.988766 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-fxbjx"] Dec 10 15:43:00 crc kubenswrapper[4755]: I1210 15:43:00.995852 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-84g2q"] Dec 10 15:43:00 crc kubenswrapper[4755]: W1210 15:43:00.999338 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b387caa_3f36_4771_8046_f41a609fc2ba.slice/crio-13960d615a57877711ae402b6c6712c01e991d6e81328046b697230a4cdea7d3 WatchSource:0}: Error finding container 13960d615a57877711ae402b6c6712c01e991d6e81328046b697230a4cdea7d3: Status 404 returned error can't find the container with id 13960d615a57877711ae402b6c6712c01e991d6e81328046b697230a4cdea7d3 Dec 10 15:43:01 crc kubenswrapper[4755]: W1210 15:43:01.001663 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c9e6f66_2358_4311_98ff_066fe8edd720.slice/crio-6a775f10bf78ec50edee255bfcf19827fbabe2a4ee7a1256ee31e8e84ca55cae WatchSource:0}: Error finding container 6a775f10bf78ec50edee255bfcf19827fbabe2a4ee7a1256ee31e8e84ca55cae: Status 404 returned error can't find the container with id 6a775f10bf78ec50edee255bfcf19827fbabe2a4ee7a1256ee31e8e84ca55cae Dec 10 15:43:01 crc kubenswrapper[4755]: I1210 15:43:01.428050 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-eb88-account-create-update-t2zx9"] Dec 10 15:43:01 crc kubenswrapper[4755]: W1210 15:43:01.440615 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5d99bcd9_160f_4acf_b5a2_048e2cf4e69d.slice/crio-73c0bad567f255cb839b5ab3a699c7387461e5d250ad042bc09ddb27ab2af1a5 WatchSource:0}: Error finding container 73c0bad567f255cb839b5ab3a699c7387461e5d250ad042bc09ddb27ab2af1a5: Status 404 returned error can't find the container with id 73c0bad567f255cb839b5ab3a699c7387461e5d250ad042bc09ddb27ab2af1a5 Dec 10 15:43:01 crc kubenswrapper[4755]: I1210 15:43:01.482527 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-9b26-account-create-update-q9h4x"] Dec 10 15:43:01 crc kubenswrapper[4755]: I1210 15:43:01.527828 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 10 15:43:01 crc kubenswrapper[4755]: I1210 15:43:01.539175 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-a190-account-create-update-brt5j"] Dec 10 15:43:01 crc kubenswrapper[4755]: I1210 15:43:01.578068 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-q6n4p-config-rt44m"] Dec 10 15:43:01 crc kubenswrapper[4755]: I1210 15:43:01.595769 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-q6n4p-config-rt44m"] Dec 10 15:43:01 crc kubenswrapper[4755]: I1210 15:43:01.617481 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-q6n4p-config-9wh5p"] Dec 10 15:43:01 crc kubenswrapper[4755]: E1210 15:43:01.617918 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aff2e950-1295-4b9e-996a-f9a6c4a1dedd" containerName="swift-ring-rebalance" Dec 10 15:43:01 crc kubenswrapper[4755]: I1210 15:43:01.617933 4755 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="aff2e950-1295-4b9e-996a-f9a6c4a1dedd" containerName="swift-ring-rebalance" Dec 10 15:43:01 crc kubenswrapper[4755]: E1210 15:43:01.617966 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b" containerName="ovn-config" Dec 10 15:43:01 crc kubenswrapper[4755]: I1210 15:43:01.617973 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b" containerName="ovn-config" Dec 10 15:43:01 crc kubenswrapper[4755]: I1210 15:43:01.618184 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="aff2e950-1295-4b9e-996a-f9a6c4a1dedd" containerName="swift-ring-rebalance" Dec 10 15:43:01 crc kubenswrapper[4755]: I1210 15:43:01.618205 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b" containerName="ovn-config" Dec 10 15:43:01 crc kubenswrapper[4755]: I1210 15:43:01.619048 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-q6n4p-config-9wh5p" Dec 10 15:43:01 crc kubenswrapper[4755]: I1210 15:43:01.627389 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Dec 10 15:43:01 crc kubenswrapper[4755]: I1210 15:43:01.638210 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-create-2872x"] Dec 10 15:43:01 crc kubenswrapper[4755]: W1210 15:43:01.639738 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod006b2612_d428_4655_9b11_7805124e026d.slice/crio-3530f9efac3211e43a3479bed25e3fc00e0694fefda378878970b4d7fa3e65fb WatchSource:0}: Error finding container 3530f9efac3211e43a3479bed25e3fc00e0694fefda378878970b4d7fa3e65fb: Status 404 returned error can't find the container with id 3530f9efac3211e43a3479bed25e3fc00e0694fefda378878970b4d7fa3e65fb Dec 10 15:43:01 crc kubenswrapper[4755]: I1210 15:43:01.643632 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-q6n4p-config-9wh5p"] Dec 10 15:43:01 crc kubenswrapper[4755]: I1210 15:43:01.694531 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-rs5rh"] Dec 10 15:43:01 crc kubenswrapper[4755]: I1210 15:43:01.731318 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfpsq\" (UniqueName: \"kubernetes.io/projected/73750159-9d61-4895-a855-40a58ea7583e-kube-api-access-jfpsq\") pod \"ovn-controller-q6n4p-config-9wh5p\" (UID: \"73750159-9d61-4895-a855-40a58ea7583e\") " pod="openstack/ovn-controller-q6n4p-config-9wh5p" Dec 10 15:43:01 crc kubenswrapper[4755]: I1210 15:43:01.731388 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/73750159-9d61-4895-a855-40a58ea7583e-var-run\") pod \"ovn-controller-q6n4p-config-9wh5p\" (UID: \"73750159-9d61-4895-a855-40a58ea7583e\") " pod="openstack/ovn-controller-q6n4p-config-9wh5p" Dec 10 15:43:01 crc kubenswrapper[4755]: I1210 15:43:01.731417 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/73750159-9d61-4895-a855-40a58ea7583e-var-log-ovn\") pod \"ovn-controller-q6n4p-config-9wh5p\" (UID: \"73750159-9d61-4895-a855-40a58ea7583e\") " pod="openstack/ovn-controller-q6n4p-config-9wh5p" Dec 10 15:43:01 crc 
kubenswrapper[4755]: I1210 15:43:01.731453 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/73750159-9d61-4895-a855-40a58ea7583e-var-run-ovn\") pod \"ovn-controller-q6n4p-config-9wh5p\" (UID: \"73750159-9d61-4895-a855-40a58ea7583e\") " pod="openstack/ovn-controller-q6n4p-config-9wh5p" Dec 10 15:43:01 crc kubenswrapper[4755]: I1210 15:43:01.731494 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/73750159-9d61-4895-a855-40a58ea7583e-scripts\") pod \"ovn-controller-q6n4p-config-9wh5p\" (UID: \"73750159-9d61-4895-a855-40a58ea7583e\") " pod="openstack/ovn-controller-q6n4p-config-9wh5p" Dec 10 15:43:01 crc kubenswrapper[4755]: I1210 15:43:01.731511 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/73750159-9d61-4895-a855-40a58ea7583e-additional-scripts\") pod \"ovn-controller-q6n4p-config-9wh5p\" (UID: \"73750159-9d61-4895-a855-40a58ea7583e\") " pod="openstack/ovn-controller-q6n4p-config-9wh5p" Dec 10 15:43:01 crc kubenswrapper[4755]: I1210 15:43:01.752586 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-6m9cc"] Dec 10 15:43:01 crc kubenswrapper[4755]: I1210 15:43:01.754263 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"28273c51-8829-45f1-9edb-4f30a83b66e3","Type":"ContainerStarted","Data":"228874ddfabfea69dbb45a70c0762ad3956cc9bc72f528b09cb4877b39fb580b"} Dec 10 15:43:01 crc kubenswrapper[4755]: W1210 15:43:01.781306 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod72a1cce7_93cb_4fe1_9d12_3d4e19692457.slice/crio-e3a6417af6aa7f09fca5c6728ef70e87a1e099bc2d31be7dec8f361e0d0b5cc9 WatchSource:0}: Error finding container e3a6417af6aa7f09fca5c6728ef70e87a1e099bc2d31be7dec8f361e0d0b5cc9: Status 404 returned error can't find the container with id e3a6417af6aa7f09fca5c6728ef70e87a1e099bc2d31be7dec8f361e0d0b5cc9 Dec 10 15:43:01 crc kubenswrapper[4755]: I1210 15:43:01.796813 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b" path="/var/lib/kubelet/pods/49d1edb3-5fcb-4dd6-9f34-6c8f7114aa1b/volumes" Dec 10 15:43:01 crc kubenswrapper[4755]: I1210 15:43:01.797453 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-2b7c-account-create-update-dtbb2"] Dec 10 15:43:01 crc kubenswrapper[4755]: I1210 15:43:01.797497 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Dec 10 15:43:01 crc kubenswrapper[4755]: I1210 15:43:01.797509 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-a190-account-create-update-brt5j" event={"ID":"d8fb0565-641d-4083-a004-068c8f2da61f","Type":"ContainerStarted","Data":"e134247fc544c7d4d3aed2882b1860147c7a68a6d38be51e5bc6ff06274b77f3"} Dec 10 15:43:01 crc kubenswrapper[4755]: I1210 15:43:01.797525 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-9b26-account-create-update-q9h4x" event={"ID":"5d99bcd9-160f-4acf-b5a2-048e2cf4e69d","Type":"ContainerStarted","Data":"73c0bad567f255cb839b5ab3a699c7387461e5d250ad042bc09ddb27ab2af1a5"} Dec 10 15:43:01 crc kubenswrapper[4755]: I1210 15:43:01.797778 4755 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-fxbjx" event={"ID":"5c9e6f66-2358-4311-98ff-066fe8edd720","Type":"ContainerStarted","Data":"d2978f6e5bf73599a55d10ea09c7ee3cead55d0d2d604534c640b2a9be1b78d1"} Dec 10 15:43:01 crc kubenswrapper[4755]: I1210 15:43:01.797799 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-fxbjx" event={"ID":"5c9e6f66-2358-4311-98ff-066fe8edd720","Type":"ContainerStarted","Data":"6a775f10bf78ec50edee255bfcf19827fbabe2a4ee7a1256ee31e8e84ca55cae"} Dec 10 15:43:01 crc kubenswrapper[4755]: I1210 15:43:01.812148 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-fxbjx" podStartSLOduration=9.812132755 podStartE2EDuration="9.812132755s" podCreationTimestamp="2025-12-10 15:42:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:43:01.811803786 +0000 UTC m=+1178.412687418" watchObservedRunningTime="2025-12-10 15:43:01.812132755 +0000 UTC m=+1178.413016387" Dec 10 15:43:01 crc kubenswrapper[4755]: I1210 15:43:01.817332 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-create-2872x" event={"ID":"006b2612-d428-4655-9b11-7805124e026d","Type":"ContainerStarted","Data":"3530f9efac3211e43a3479bed25e3fc00e0694fefda378878970b4d7fa3e65fb"} Dec 10 15:43:01 crc kubenswrapper[4755]: W1210 15:43:01.817233 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod86e3285c_eddd_4bd4_bca9_8d9ccf2019e7.slice/crio-e79bdc3e5fd79ef1436926ba6b19cc0b2fadb302fe78cef110087ff94ad43375 WatchSource:0}: Error finding container e79bdc3e5fd79ef1436926ba6b19cc0b2fadb302fe78cef110087ff94ad43375: Status 404 returned error can't find the container with id e79bdc3e5fd79ef1436926ba6b19cc0b2fadb302fe78cef110087ff94ad43375 Dec 10 15:43:01 crc kubenswrapper[4755]: I1210 15:43:01.830238 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-rs5rh" event={"ID":"7e112f6c-6906-4c34-9a05-ed827a4cd2ff","Type":"ContainerStarted","Data":"ecd6203296452c25264cb3591622e14ae3c2c9fc7e181afb826425da1f1799d9"} Dec 10 15:43:01 crc kubenswrapper[4755]: I1210 15:43:01.833895 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/73750159-9d61-4895-a855-40a58ea7583e-var-run\") pod \"ovn-controller-q6n4p-config-9wh5p\" (UID: \"73750159-9d61-4895-a855-40a58ea7583e\") " pod="openstack/ovn-controller-q6n4p-config-9wh5p" Dec 10 15:43:01 crc kubenswrapper[4755]: I1210 15:43:01.833977 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/73750159-9d61-4895-a855-40a58ea7583e-var-log-ovn\") pod \"ovn-controller-q6n4p-config-9wh5p\" (UID: \"73750159-9d61-4895-a855-40a58ea7583e\") " pod="openstack/ovn-controller-q6n4p-config-9wh5p" Dec 10 15:43:01 crc kubenswrapper[4755]: I1210 15:43:01.834038 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/73750159-9d61-4895-a855-40a58ea7583e-var-run-ovn\") pod \"ovn-controller-q6n4p-config-9wh5p\" (UID: \"73750159-9d61-4895-a855-40a58ea7583e\") " pod="openstack/ovn-controller-q6n4p-config-9wh5p" Dec 10 15:43:01 crc kubenswrapper[4755]: I1210 15:43:01.834079 4755 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/73750159-9d61-4895-a855-40a58ea7583e-scripts\") pod \"ovn-controller-q6n4p-config-9wh5p\" (UID: \"73750159-9d61-4895-a855-40a58ea7583e\") " pod="openstack/ovn-controller-q6n4p-config-9wh5p" Dec 10 15:43:01 crc kubenswrapper[4755]: I1210 15:43:01.834099 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/73750159-9d61-4895-a855-40a58ea7583e-additional-scripts\") pod \"ovn-controller-q6n4p-config-9wh5p\" (UID: \"73750159-9d61-4895-a855-40a58ea7583e\") " pod="openstack/ovn-controller-q6n4p-config-9wh5p" Dec 10 15:43:01 crc kubenswrapper[4755]: I1210 15:43:01.834227 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfpsq\" (UniqueName: \"kubernetes.io/projected/73750159-9d61-4895-a855-40a58ea7583e-kube-api-access-jfpsq\") pod \"ovn-controller-q6n4p-config-9wh5p\" (UID: \"73750159-9d61-4895-a855-40a58ea7583e\") " pod="openstack/ovn-controller-q6n4p-config-9wh5p" Dec 10 15:43:01 crc kubenswrapper[4755]: I1210 15:43:01.835041 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/73750159-9d61-4895-a855-40a58ea7583e-var-run\") pod \"ovn-controller-q6n4p-config-9wh5p\" (UID: \"73750159-9d61-4895-a855-40a58ea7583e\") " pod="openstack/ovn-controller-q6n4p-config-9wh5p" Dec 10 15:43:01 crc kubenswrapper[4755]: I1210 15:43:01.835091 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/73750159-9d61-4895-a855-40a58ea7583e-var-log-ovn\") pod \"ovn-controller-q6n4p-config-9wh5p\" (UID: \"73750159-9d61-4895-a855-40a58ea7583e\") " pod="openstack/ovn-controller-q6n4p-config-9wh5p" Dec 10 15:43:01 crc kubenswrapper[4755]: I1210 15:43:01.835124 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/73750159-9d61-4895-a855-40a58ea7583e-var-run-ovn\") pod \"ovn-controller-q6n4p-config-9wh5p\" (UID: \"73750159-9d61-4895-a855-40a58ea7583e\") " pod="openstack/ovn-controller-q6n4p-config-9wh5p" Dec 10 15:43:01 crc kubenswrapper[4755]: I1210 15:43:01.835676 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/73750159-9d61-4895-a855-40a58ea7583e-additional-scripts\") pod \"ovn-controller-q6n4p-config-9wh5p\" (UID: \"73750159-9d61-4895-a855-40a58ea7583e\") " pod="openstack/ovn-controller-q6n4p-config-9wh5p" Dec 10 15:43:01 crc kubenswrapper[4755]: I1210 15:43:01.837406 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/73750159-9d61-4895-a855-40a58ea7583e-scripts\") pod \"ovn-controller-q6n4p-config-9wh5p\" (UID: \"73750159-9d61-4895-a855-40a58ea7583e\") " pod="openstack/ovn-controller-q6n4p-config-9wh5p" Dec 10 15:43:01 crc kubenswrapper[4755]: I1210 15:43:01.845934 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-eb88-account-create-update-t2zx9" event={"ID":"4b53b622-751c-4098-95ef-86d7bbb6f03b","Type":"ContainerStarted","Data":"a9051f6d5427111f7d926f0396b2538995b8b8a5c871722d24dc0d10ac9d9d41"} Dec 10 15:43:01 crc kubenswrapper[4755]: I1210 15:43:01.847662 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-84g2q" 
event={"ID":"4b387caa-3f36-4771-8046-f41a609fc2ba","Type":"ContainerStarted","Data":"260f253881fd52021fdfb65abe24349cbd6c938147e2574bf76715507c6ad9c0"} Dec 10 15:43:01 crc kubenswrapper[4755]: I1210 15:43:01.847705 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-84g2q" event={"ID":"4b387caa-3f36-4771-8046-f41a609fc2ba","Type":"ContainerStarted","Data":"13960d615a57877711ae402b6c6712c01e991d6e81328046b697230a4cdea7d3"} Dec 10 15:43:01 crc kubenswrapper[4755]: I1210 15:43:01.860302 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfpsq\" (UniqueName: \"kubernetes.io/projected/73750159-9d61-4895-a855-40a58ea7583e-kube-api-access-jfpsq\") pod \"ovn-controller-q6n4p-config-9wh5p\" (UID: \"73750159-9d61-4895-a855-40a58ea7583e\") " pod="openstack/ovn-controller-q6n4p-config-9wh5p" Dec 10 15:43:01 crc kubenswrapper[4755]: I1210 15:43:01.866511 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-84g2q" podStartSLOduration=8.86649355 podStartE2EDuration="8.86649355s" podCreationTimestamp="2025-12-10 15:42:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:43:01.863992252 +0000 UTC m=+1178.464875884" watchObservedRunningTime="2025-12-10 15:43:01.86649355 +0000 UTC m=+1178.467377192" Dec 10 15:43:02 crc kubenswrapper[4755]: I1210 15:43:02.037238 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-q6n4p-config-9wh5p" Dec 10 15:43:02 crc kubenswrapper[4755]: I1210 15:43:02.726794 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-q6n4p-config-9wh5p"] Dec 10 15:43:02 crc kubenswrapper[4755]: W1210 15:43:02.738308 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod73750159_9d61_4895_a855_40a58ea7583e.slice/crio-ca1033c4908377d9ced60f3bda3627661854fbae3ead3c68f67165f53ae83309 WatchSource:0}: Error finding container ca1033c4908377d9ced60f3bda3627661854fbae3ead3c68f67165f53ae83309: Status 404 returned error can't find the container with id ca1033c4908377d9ced60f3bda3627661854fbae3ead3c68f67165f53ae83309 Dec 10 15:43:02 crc kubenswrapper[4755]: I1210 15:43:02.872893 4755 generic.go:334] "Generic (PLEG): container finished" podID="5d99bcd9-160f-4acf-b5a2-048e2cf4e69d" containerID="b60495d86167df4328db1da5dbc39e9fb8da8762367164584bd9c6e44d926493" exitCode=0 Dec 10 15:43:02 crc kubenswrapper[4755]: I1210 15:43:02.875704 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-9b26-account-create-update-q9h4x" event={"ID":"5d99bcd9-160f-4acf-b5a2-048e2cf4e69d","Type":"ContainerDied","Data":"b60495d86167df4328db1da5dbc39e9fb8da8762367164584bd9c6e44d926493"} Dec 10 15:43:02 crc kubenswrapper[4755]: I1210 15:43:02.877027 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-2b7c-account-create-update-dtbb2" event={"ID":"5c5b35e4-4153-4f26-89bb-80e480230209","Type":"ContainerStarted","Data":"e0f33564831859754eb1d2efb4a50c6195080ce672e09ed18acc6a683324fbd3"} Dec 10 15:43:02 crc kubenswrapper[4755]: I1210 15:43:02.877061 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-2b7c-account-create-update-dtbb2" 
event={"ID":"5c5b35e4-4153-4f26-89bb-80e480230209","Type":"ContainerStarted","Data":"52a8fca65ff596791f4af433a75d9e0d4980441a1416a461fd8e69d6bd835539"} Dec 10 15:43:02 crc kubenswrapper[4755]: I1210 15:43:02.880244 4755 generic.go:334] "Generic (PLEG): container finished" podID="006b2612-d428-4655-9b11-7805124e026d" containerID="d99d38c264a34cb298485ed0e13ebafece8b9b5d7d1ec13c7ad9fa3bac9d019b" exitCode=0 Dec 10 15:43:02 crc kubenswrapper[4755]: I1210 15:43:02.880293 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-create-2872x" event={"ID":"006b2612-d428-4655-9b11-7805124e026d","Type":"ContainerDied","Data":"d99d38c264a34cb298485ed0e13ebafece8b9b5d7d1ec13c7ad9fa3bac9d019b"} Dec 10 15:43:02 crc kubenswrapper[4755]: I1210 15:43:02.884150 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-rs5rh" event={"ID":"7e112f6c-6906-4c34-9a05-ed827a4cd2ff","Type":"ContainerStarted","Data":"0cdbd04441704faaf6f184a7008fcd33e82429220859169d7a2e08cdd2d6febd"} Dec 10 15:43:02 crc kubenswrapper[4755]: I1210 15:43:02.905894 4755 generic.go:334] "Generic (PLEG): container finished" podID="d8fb0565-641d-4083-a004-068c8f2da61f" containerID="9f80471a6d75d13b8062f27f69f866d01e00835c6e4dab5ad8395ca69344d598" exitCode=0 Dec 10 15:43:02 crc kubenswrapper[4755]: I1210 15:43:02.905968 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-a190-account-create-update-brt5j" event={"ID":"d8fb0565-641d-4083-a004-068c8f2da61f","Type":"ContainerDied","Data":"9f80471a6d75d13b8062f27f69f866d01e00835c6e4dab5ad8395ca69344d598"} Dec 10 15:43:02 crc kubenswrapper[4755]: I1210 15:43:02.909145 4755 generic.go:334] "Generic (PLEG): container finished" podID="5c9e6f66-2358-4311-98ff-066fe8edd720" containerID="d2978f6e5bf73599a55d10ea09c7ee3cead55d0d2d604534c640b2a9be1b78d1" exitCode=0 Dec 10 15:43:02 crc kubenswrapper[4755]: I1210 15:43:02.909214 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-fxbjx" event={"ID":"5c9e6f66-2358-4311-98ff-066fe8edd720","Type":"ContainerDied","Data":"d2978f6e5bf73599a55d10ea09c7ee3cead55d0d2d604534c640b2a9be1b78d1"} Dec 10 15:43:02 crc kubenswrapper[4755]: I1210 15:43:02.911125 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-6m9cc" event={"ID":"86e3285c-eddd-4bd4-bca9-8d9ccf2019e7","Type":"ContainerStarted","Data":"e79bdc3e5fd79ef1436926ba6b19cc0b2fadb302fe78cef110087ff94ad43375"} Dec 10 15:43:02 crc kubenswrapper[4755]: I1210 15:43:02.913834 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"72a1cce7-93cb-4fe1-9d12-3d4e19692457","Type":"ContainerStarted","Data":"e3a6417af6aa7f09fca5c6728ef70e87a1e099bc2d31be7dec8f361e0d0b5cc9"} Dec 10 15:43:02 crc kubenswrapper[4755]: I1210 15:43:02.922503 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-2b7c-account-create-update-dtbb2" podStartSLOduration=9.922486644 podStartE2EDuration="9.922486644s" podCreationTimestamp="2025-12-10 15:42:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:43:02.914175759 +0000 UTC m=+1179.515059401" watchObservedRunningTime="2025-12-10 15:43:02.922486644 +0000 UTC m=+1179.523370276" Dec 10 15:43:02 crc kubenswrapper[4755]: I1210 15:43:02.923499 4755 generic.go:334] "Generic (PLEG): container finished" podID="4b53b622-751c-4098-95ef-86d7bbb6f03b" 
containerID="52acf371ab5abc9a88cf3be82423c8e55a5ad893555c09f94c508b3d3e2e64b2" exitCode=0 Dec 10 15:43:02 crc kubenswrapper[4755]: I1210 15:43:02.923559 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-eb88-account-create-update-t2zx9" event={"ID":"4b53b622-751c-4098-95ef-86d7bbb6f03b","Type":"ContainerDied","Data":"52acf371ab5abc9a88cf3be82423c8e55a5ad893555c09f94c508b3d3e2e64b2"} Dec 10 15:43:02 crc kubenswrapper[4755]: I1210 15:43:02.925352 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-q6n4p-config-9wh5p" event={"ID":"73750159-9d61-4895-a855-40a58ea7583e","Type":"ContainerStarted","Data":"ca1033c4908377d9ced60f3bda3627661854fbae3ead3c68f67165f53ae83309"} Dec 10 15:43:02 crc kubenswrapper[4755]: I1210 15:43:02.929229 4755 generic.go:334] "Generic (PLEG): container finished" podID="4b387caa-3f36-4771-8046-f41a609fc2ba" containerID="260f253881fd52021fdfb65abe24349cbd6c938147e2574bf76715507c6ad9c0" exitCode=0 Dec 10 15:43:02 crc kubenswrapper[4755]: I1210 15:43:02.929295 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-84g2q" event={"ID":"4b387caa-3f36-4771-8046-f41a609fc2ba","Type":"ContainerDied","Data":"260f253881fd52021fdfb65abe24349cbd6c938147e2574bf76715507c6ad9c0"} Dec 10 15:43:03 crc kubenswrapper[4755]: I1210 15:43:03.939305 4755 generic.go:334] "Generic (PLEG): container finished" podID="7e112f6c-6906-4c34-9a05-ed827a4cd2ff" containerID="0cdbd04441704faaf6f184a7008fcd33e82429220859169d7a2e08cdd2d6febd" exitCode=0 Dec 10 15:43:03 crc kubenswrapper[4755]: I1210 15:43:03.939589 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-rs5rh" event={"ID":"7e112f6c-6906-4c34-9a05-ed827a4cd2ff","Type":"ContainerDied","Data":"0cdbd04441704faaf6f184a7008fcd33e82429220859169d7a2e08cdd2d6febd"} Dec 10 15:43:03 crc kubenswrapper[4755]: I1210 15:43:03.941717 4755 generic.go:334] "Generic (PLEG): container finished" podID="73750159-9d61-4895-a855-40a58ea7583e" containerID="5b099ca757ac2e34473fbea32c250e94393a2cf6309d4f94cee5e4b8e9156cf9" exitCode=0 Dec 10 15:43:03 crc kubenswrapper[4755]: I1210 15:43:03.942230 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-q6n4p-config-9wh5p" event={"ID":"73750159-9d61-4895-a855-40a58ea7583e","Type":"ContainerDied","Data":"5b099ca757ac2e34473fbea32c250e94393a2cf6309d4f94cee5e4b8e9156cf9"} Dec 10 15:43:03 crc kubenswrapper[4755]: I1210 15:43:03.948022 4755 generic.go:334] "Generic (PLEG): container finished" podID="5c5b35e4-4153-4f26-89bb-80e480230209" containerID="e0f33564831859754eb1d2efb4a50c6195080ce672e09ed18acc6a683324fbd3" exitCode=0 Dec 10 15:43:03 crc kubenswrapper[4755]: I1210 15:43:03.948271 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-2b7c-account-create-update-dtbb2" event={"ID":"5c5b35e4-4153-4f26-89bb-80e480230209","Type":"ContainerDied","Data":"e0f33564831859754eb1d2efb4a50c6195080ce672e09ed18acc6a683324fbd3"} Dec 10 15:43:04 crc kubenswrapper[4755]: I1210 15:43:04.507211 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-fxbjx" Dec 10 15:43:04 crc kubenswrapper[4755]: I1210 15:43:04.596289 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c9e6f66-2358-4311-98ff-066fe8edd720-operator-scripts\") pod \"5c9e6f66-2358-4311-98ff-066fe8edd720\" (UID: \"5c9e6f66-2358-4311-98ff-066fe8edd720\") " Dec 10 15:43:04 crc kubenswrapper[4755]: I1210 15:43:04.596440 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnnlj\" (UniqueName: \"kubernetes.io/projected/5c9e6f66-2358-4311-98ff-066fe8edd720-kube-api-access-nnnlj\") pod \"5c9e6f66-2358-4311-98ff-066fe8edd720\" (UID: \"5c9e6f66-2358-4311-98ff-066fe8edd720\") " Dec 10 15:43:04 crc kubenswrapper[4755]: I1210 15:43:04.596930 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c9e6f66-2358-4311-98ff-066fe8edd720-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5c9e6f66-2358-4311-98ff-066fe8edd720" (UID: "5c9e6f66-2358-4311-98ff-066fe8edd720"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:43:04 crc kubenswrapper[4755]: I1210 15:43:04.597186 4755 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c9e6f66-2358-4311-98ff-066fe8edd720-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 15:43:04 crc kubenswrapper[4755]: I1210 15:43:04.634391 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c9e6f66-2358-4311-98ff-066fe8edd720-kube-api-access-nnnlj" (OuterVolumeSpecName: "kube-api-access-nnnlj") pod "5c9e6f66-2358-4311-98ff-066fe8edd720" (UID: "5c9e6f66-2358-4311-98ff-066fe8edd720"). InnerVolumeSpecName "kube-api-access-nnnlj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:43:04 crc kubenswrapper[4755]: I1210 15:43:04.698733 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nnnlj\" (UniqueName: \"kubernetes.io/projected/5c9e6f66-2358-4311-98ff-066fe8edd720-kube-api-access-nnnlj\") on node \"crc\" DevicePath \"\"" Dec 10 15:43:04 crc kubenswrapper[4755]: I1210 15:43:04.960119 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"72a1cce7-93cb-4fe1-9d12-3d4e19692457","Type":"ContainerStarted","Data":"c6b66b52aeac1067f58265ca593cb4b92d3c98877c6bd0e2ba5f7dfff7e6fa9b"} Dec 10 15:43:04 crc kubenswrapper[4755]: I1210 15:43:04.966696 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"28273c51-8829-45f1-9edb-4f30a83b66e3","Type":"ContainerStarted","Data":"135e505c7d8108ff91c14255c0cd984e129761ac96038142027e9e59b71fe575"} Dec 10 15:43:04 crc kubenswrapper[4755]: I1210 15:43:04.969793 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-fxbjx" event={"ID":"5c9e6f66-2358-4311-98ff-066fe8edd720","Type":"ContainerDied","Data":"6a775f10bf78ec50edee255bfcf19827fbabe2a4ee7a1256ee31e8e84ca55cae"} Dec 10 15:43:04 crc kubenswrapper[4755]: I1210 15:43:04.969843 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a775f10bf78ec50edee255bfcf19827fbabe2a4ee7a1256ee31e8e84ca55cae" Dec 10 15:43:04 crc kubenswrapper[4755]: I1210 15:43:04.969841 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-fxbjx" Dec 10 15:43:07 crc kubenswrapper[4755]: I1210 15:43:07.579164 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-rs5rh" Dec 10 15:43:07 crc kubenswrapper[4755]: I1210 15:43:07.585068 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-q6n4p-config-9wh5p" Dec 10 15:43:07 crc kubenswrapper[4755]: I1210 15:43:07.585265 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-922n9\" (UniqueName: \"kubernetes.io/projected/7e112f6c-6906-4c34-9a05-ed827a4cd2ff-kube-api-access-922n9\") pod \"7e112f6c-6906-4c34-9a05-ed827a4cd2ff\" (UID: \"7e112f6c-6906-4c34-9a05-ed827a4cd2ff\") " Dec 10 15:43:07 crc kubenswrapper[4755]: I1210 15:43:07.585374 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e112f6c-6906-4c34-9a05-ed827a4cd2ff-operator-scripts\") pod \"7e112f6c-6906-4c34-9a05-ed827a4cd2ff\" (UID: \"7e112f6c-6906-4c34-9a05-ed827a4cd2ff\") " Dec 10 15:43:07 crc kubenswrapper[4755]: I1210 15:43:07.586642 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e112f6c-6906-4c34-9a05-ed827a4cd2ff-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7e112f6c-6906-4c34-9a05-ed827a4cd2ff" (UID: "7e112f6c-6906-4c34-9a05-ed827a4cd2ff"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:43:07 crc kubenswrapper[4755]: I1210 15:43:07.593635 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e112f6c-6906-4c34-9a05-ed827a4cd2ff-kube-api-access-922n9" (OuterVolumeSpecName: "kube-api-access-922n9") pod "7e112f6c-6906-4c34-9a05-ed827a4cd2ff" (UID: "7e112f6c-6906-4c34-9a05-ed827a4cd2ff"). InnerVolumeSpecName "kube-api-access-922n9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:43:07 crc kubenswrapper[4755]: I1210 15:43:07.597722 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-2b7c-account-create-update-dtbb2" Dec 10 15:43:07 crc kubenswrapper[4755]: I1210 15:43:07.666781 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-eb88-account-create-update-t2zx9" Dec 10 15:43:07 crc kubenswrapper[4755]: I1210 15:43:07.672971 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-a190-account-create-update-brt5j" Dec 10 15:43:07 crc kubenswrapper[4755]: I1210 15:43:07.686423 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/73750159-9d61-4895-a855-40a58ea7583e-var-log-ovn\") pod \"73750159-9d61-4895-a855-40a58ea7583e\" (UID: \"73750159-9d61-4895-a855-40a58ea7583e\") " Dec 10 15:43:07 crc kubenswrapper[4755]: I1210 15:43:07.686486 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/73750159-9d61-4895-a855-40a58ea7583e-var-run-ovn\") pod \"73750159-9d61-4895-a855-40a58ea7583e\" (UID: \"73750159-9d61-4895-a855-40a58ea7583e\") " Dec 10 15:43:07 crc kubenswrapper[4755]: I1210 15:43:07.686511 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqb6c\" (UniqueName: \"kubernetes.io/projected/4b53b622-751c-4098-95ef-86d7bbb6f03b-kube-api-access-hqb6c\") pod \"4b53b622-751c-4098-95ef-86d7bbb6f03b\" (UID: \"4b53b622-751c-4098-95ef-86d7bbb6f03b\") " Dec 10 15:43:07 crc kubenswrapper[4755]: I1210 15:43:07.686539 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9wf6\" (UniqueName: \"kubernetes.io/projected/d8fb0565-641d-4083-a004-068c8f2da61f-kube-api-access-d9wf6\") pod \"d8fb0565-641d-4083-a004-068c8f2da61f\" (UID: \"d8fb0565-641d-4083-a004-068c8f2da61f\") " Dec 10 15:43:07 crc kubenswrapper[4755]: I1210 15:43:07.686643 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2zgd\" (UniqueName: \"kubernetes.io/projected/5c5b35e4-4153-4f26-89bb-80e480230209-kube-api-access-c2zgd\") pod \"5c5b35e4-4153-4f26-89bb-80e480230209\" (UID: \"5c5b35e4-4153-4f26-89bb-80e480230209\") " Dec 10 15:43:07 crc kubenswrapper[4755]: I1210 15:43:07.686665 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c5b35e4-4153-4f26-89bb-80e480230209-operator-scripts\") pod \"5c5b35e4-4153-4f26-89bb-80e480230209\" (UID: \"5c5b35e4-4153-4f26-89bb-80e480230209\") " Dec 10 15:43:07 crc kubenswrapper[4755]: I1210 15:43:07.686663 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73750159-9d61-4895-a855-40a58ea7583e-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "73750159-9d61-4895-a855-40a58ea7583e" (UID: "73750159-9d61-4895-a855-40a58ea7583e"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 15:43:07 crc kubenswrapper[4755]: I1210 15:43:07.686681 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/73750159-9d61-4895-a855-40a58ea7583e-scripts\") pod \"73750159-9d61-4895-a855-40a58ea7583e\" (UID: \"73750159-9d61-4895-a855-40a58ea7583e\") " Dec 10 15:43:07 crc kubenswrapper[4755]: I1210 15:43:07.686701 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73750159-9d61-4895-a855-40a58ea7583e-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "73750159-9d61-4895-a855-40a58ea7583e" (UID: "73750159-9d61-4895-a855-40a58ea7583e"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 15:43:07 crc kubenswrapper[4755]: I1210 15:43:07.686730 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/73750159-9d61-4895-a855-40a58ea7583e-var-run\") pod \"73750159-9d61-4895-a855-40a58ea7583e\" (UID: \"73750159-9d61-4895-a855-40a58ea7583e\") " Dec 10 15:43:07 crc kubenswrapper[4755]: I1210 15:43:07.686758 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/73750159-9d61-4895-a855-40a58ea7583e-additional-scripts\") pod \"73750159-9d61-4895-a855-40a58ea7583e\" (UID: \"73750159-9d61-4895-a855-40a58ea7583e\") " Dec 10 15:43:07 crc kubenswrapper[4755]: I1210 15:43:07.686792 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b53b622-751c-4098-95ef-86d7bbb6f03b-operator-scripts\") pod \"4b53b622-751c-4098-95ef-86d7bbb6f03b\" (UID: \"4b53b622-751c-4098-95ef-86d7bbb6f03b\") " Dec 10 15:43:07 crc kubenswrapper[4755]: I1210 15:43:07.686821 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfpsq\" (UniqueName: \"kubernetes.io/projected/73750159-9d61-4895-a855-40a58ea7583e-kube-api-access-jfpsq\") pod \"73750159-9d61-4895-a855-40a58ea7583e\" (UID: \"73750159-9d61-4895-a855-40a58ea7583e\") " Dec 10 15:43:07 crc kubenswrapper[4755]: I1210 15:43:07.686854 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8fb0565-641d-4083-a004-068c8f2da61f-operator-scripts\") pod \"d8fb0565-641d-4083-a004-068c8f2da61f\" (UID: \"d8fb0565-641d-4083-a004-068c8f2da61f\") " Dec 10 15:43:07 crc kubenswrapper[4755]: I1210 15:43:07.687193 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-922n9\" (UniqueName: \"kubernetes.io/projected/7e112f6c-6906-4c34-9a05-ed827a4cd2ff-kube-api-access-922n9\") on node \"crc\" DevicePath \"\"" Dec 10 15:43:07 crc kubenswrapper[4755]: I1210 15:43:07.687205 4755 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e112f6c-6906-4c34-9a05-ed827a4cd2ff-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 15:43:07 crc kubenswrapper[4755]: I1210 15:43:07.687214 4755 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/73750159-9d61-4895-a855-40a58ea7583e-var-log-ovn\") on node \"crc\" DevicePath \"\"" Dec 10 15:43:07 crc kubenswrapper[4755]: I1210 15:43:07.687223 4755 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/73750159-9d61-4895-a855-40a58ea7583e-var-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 10 15:43:07 crc kubenswrapper[4755]: I1210 15:43:07.687202 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73750159-9d61-4895-a855-40a58ea7583e-var-run" (OuterVolumeSpecName: "var-run") pod "73750159-9d61-4895-a855-40a58ea7583e" (UID: "73750159-9d61-4895-a855-40a58ea7583e"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 15:43:07 crc kubenswrapper[4755]: I1210 15:43:07.688390 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73750159-9d61-4895-a855-40a58ea7583e-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "73750159-9d61-4895-a855-40a58ea7583e" (UID: "73750159-9d61-4895-a855-40a58ea7583e"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:43:07 crc kubenswrapper[4755]: I1210 15:43:07.688748 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b53b622-751c-4098-95ef-86d7bbb6f03b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4b53b622-751c-4098-95ef-86d7bbb6f03b" (UID: "4b53b622-751c-4098-95ef-86d7bbb6f03b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:43:07 crc kubenswrapper[4755]: I1210 15:43:07.690960 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c5b35e4-4153-4f26-89bb-80e480230209-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5c5b35e4-4153-4f26-89bb-80e480230209" (UID: "5c5b35e4-4153-4f26-89bb-80e480230209"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:43:07 crc kubenswrapper[4755]: I1210 15:43:07.691245 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8fb0565-641d-4083-a004-068c8f2da61f-kube-api-access-d9wf6" (OuterVolumeSpecName: "kube-api-access-d9wf6") pod "d8fb0565-641d-4083-a004-068c8f2da61f" (UID: "d8fb0565-641d-4083-a004-068c8f2da61f"). InnerVolumeSpecName "kube-api-access-d9wf6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:43:07 crc kubenswrapper[4755]: I1210 15:43:07.691289 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8fb0565-641d-4083-a004-068c8f2da61f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d8fb0565-641d-4083-a004-068c8f2da61f" (UID: "d8fb0565-641d-4083-a004-068c8f2da61f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:43:07 crc kubenswrapper[4755]: I1210 15:43:07.692159 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c5b35e4-4153-4f26-89bb-80e480230209-kube-api-access-c2zgd" (OuterVolumeSpecName: "kube-api-access-c2zgd") pod "5c5b35e4-4153-4f26-89bb-80e480230209" (UID: "5c5b35e4-4153-4f26-89bb-80e480230209"). InnerVolumeSpecName "kube-api-access-c2zgd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:43:07 crc kubenswrapper[4755]: I1210 15:43:07.692232 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73750159-9d61-4895-a855-40a58ea7583e-kube-api-access-jfpsq" (OuterVolumeSpecName: "kube-api-access-jfpsq") pod "73750159-9d61-4895-a855-40a58ea7583e" (UID: "73750159-9d61-4895-a855-40a58ea7583e"). InnerVolumeSpecName "kube-api-access-jfpsq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:43:07 crc kubenswrapper[4755]: I1210 15:43:07.692871 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73750159-9d61-4895-a855-40a58ea7583e-scripts" (OuterVolumeSpecName: "scripts") pod "73750159-9d61-4895-a855-40a58ea7583e" (UID: "73750159-9d61-4895-a855-40a58ea7583e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:43:07 crc kubenswrapper[4755]: I1210 15:43:07.693060 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b53b622-751c-4098-95ef-86d7bbb6f03b-kube-api-access-hqb6c" (OuterVolumeSpecName: "kube-api-access-hqb6c") pod "4b53b622-751c-4098-95ef-86d7bbb6f03b" (UID: "4b53b622-751c-4098-95ef-86d7bbb6f03b"). InnerVolumeSpecName "kube-api-access-hqb6c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:43:07 crc kubenswrapper[4755]: I1210 15:43:07.705076 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-84g2q" Dec 10 15:43:07 crc kubenswrapper[4755]: I1210 15:43:07.705814 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-create-2872x" Dec 10 15:43:07 crc kubenswrapper[4755]: I1210 15:43:07.710235 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-9b26-account-create-update-q9h4x" Dec 10 15:43:07 crc kubenswrapper[4755]: I1210 15:43:07.787887 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkf2q\" (UniqueName: \"kubernetes.io/projected/006b2612-d428-4655-9b11-7805124e026d-kube-api-access-hkf2q\") pod \"006b2612-d428-4655-9b11-7805124e026d\" (UID: \"006b2612-d428-4655-9b11-7805124e026d\") " Dec 10 15:43:07 crc kubenswrapper[4755]: I1210 15:43:07.787963 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4qmh\" (UniqueName: \"kubernetes.io/projected/4b387caa-3f36-4771-8046-f41a609fc2ba-kube-api-access-l4qmh\") pod \"4b387caa-3f36-4771-8046-f41a609fc2ba\" (UID: \"4b387caa-3f36-4771-8046-f41a609fc2ba\") " Dec 10 15:43:07 crc kubenswrapper[4755]: I1210 15:43:07.788601 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jqhp\" (UniqueName: \"kubernetes.io/projected/5d99bcd9-160f-4acf-b5a2-048e2cf4e69d-kube-api-access-5jqhp\") pod \"5d99bcd9-160f-4acf-b5a2-048e2cf4e69d\" (UID: \"5d99bcd9-160f-4acf-b5a2-048e2cf4e69d\") " Dec 10 15:43:07 crc kubenswrapper[4755]: I1210 15:43:07.788671 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d99bcd9-160f-4acf-b5a2-048e2cf4e69d-operator-scripts\") pod \"5d99bcd9-160f-4acf-b5a2-048e2cf4e69d\" (UID: \"5d99bcd9-160f-4acf-b5a2-048e2cf4e69d\") " Dec 10 15:43:07 crc kubenswrapper[4755]: I1210 15:43:07.788730 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b387caa-3f36-4771-8046-f41a609fc2ba-operator-scripts\") pod \"4b387caa-3f36-4771-8046-f41a609fc2ba\" (UID: \"4b387caa-3f36-4771-8046-f41a609fc2ba\") " Dec 10 15:43:07 crc kubenswrapper[4755]: I1210 15:43:07.788758 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/006b2612-d428-4655-9b11-7805124e026d-operator-scripts\") pod \"006b2612-d428-4655-9b11-7805124e026d\" (UID: \"006b2612-d428-4655-9b11-7805124e026d\") " Dec 10 15:43:07 crc kubenswrapper[4755]: I1210 15:43:07.789082 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b387caa-3f36-4771-8046-f41a609fc2ba-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4b387caa-3f36-4771-8046-f41a609fc2ba" (UID: "4b387caa-3f36-4771-8046-f41a609fc2ba"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:43:07 crc kubenswrapper[4755]: I1210 15:43:07.789410 4755 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b387caa-3f36-4771-8046-f41a609fc2ba-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 15:43:07 crc kubenswrapper[4755]: I1210 15:43:07.789434 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2zgd\" (UniqueName: \"kubernetes.io/projected/5c5b35e4-4153-4f26-89bb-80e480230209-kube-api-access-c2zgd\") on node \"crc\" DevicePath \"\"" Dec 10 15:43:07 crc kubenswrapper[4755]: I1210 15:43:07.789445 4755 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c5b35e4-4153-4f26-89bb-80e480230209-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 15:43:07 crc kubenswrapper[4755]: I1210 15:43:07.789488 4755 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/73750159-9d61-4895-a855-40a58ea7583e-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 15:43:07 crc kubenswrapper[4755]: I1210 15:43:07.789497 4755 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/73750159-9d61-4895-a855-40a58ea7583e-var-run\") on node \"crc\" DevicePath \"\"" Dec 10 15:43:07 crc kubenswrapper[4755]: I1210 15:43:07.789506 4755 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/73750159-9d61-4895-a855-40a58ea7583e-additional-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 15:43:07 crc kubenswrapper[4755]: I1210 15:43:07.789516 4755 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b53b622-751c-4098-95ef-86d7bbb6f03b-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 15:43:07 crc kubenswrapper[4755]: I1210 15:43:07.789524 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfpsq\" (UniqueName: \"kubernetes.io/projected/73750159-9d61-4895-a855-40a58ea7583e-kube-api-access-jfpsq\") on node \"crc\" DevicePath \"\"" Dec 10 15:43:07 crc kubenswrapper[4755]: I1210 15:43:07.789533 4755 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8fb0565-641d-4083-a004-068c8f2da61f-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 15:43:07 crc kubenswrapper[4755]: I1210 15:43:07.789542 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqb6c\" (UniqueName: \"kubernetes.io/projected/4b53b622-751c-4098-95ef-86d7bbb6f03b-kube-api-access-hqb6c\") on node \"crc\" DevicePath \"\"" Dec 10 15:43:07 crc kubenswrapper[4755]: I1210 15:43:07.789550 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9wf6\" (UniqueName: 
\"kubernetes.io/projected/d8fb0565-641d-4083-a004-068c8f2da61f-kube-api-access-d9wf6\") on node \"crc\" DevicePath \"\"" Dec 10 15:43:07 crc kubenswrapper[4755]: I1210 15:43:07.789410 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d99bcd9-160f-4acf-b5a2-048e2cf4e69d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5d99bcd9-160f-4acf-b5a2-048e2cf4e69d" (UID: "5d99bcd9-160f-4acf-b5a2-048e2cf4e69d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:43:07 crc kubenswrapper[4755]: I1210 15:43:07.789569 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/006b2612-d428-4655-9b11-7805124e026d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "006b2612-d428-4655-9b11-7805124e026d" (UID: "006b2612-d428-4655-9b11-7805124e026d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:43:07 crc kubenswrapper[4755]: I1210 15:43:07.791451 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b387caa-3f36-4771-8046-f41a609fc2ba-kube-api-access-l4qmh" (OuterVolumeSpecName: "kube-api-access-l4qmh") pod "4b387caa-3f36-4771-8046-f41a609fc2ba" (UID: "4b387caa-3f36-4771-8046-f41a609fc2ba"). InnerVolumeSpecName "kube-api-access-l4qmh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:43:07 crc kubenswrapper[4755]: I1210 15:43:07.793309 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/006b2612-d428-4655-9b11-7805124e026d-kube-api-access-hkf2q" (OuterVolumeSpecName: "kube-api-access-hkf2q") pod "006b2612-d428-4655-9b11-7805124e026d" (UID: "006b2612-d428-4655-9b11-7805124e026d"). InnerVolumeSpecName "kube-api-access-hkf2q". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:43:07 crc kubenswrapper[4755]: I1210 15:43:07.793378 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d99bcd9-160f-4acf-b5a2-048e2cf4e69d-kube-api-access-5jqhp" (OuterVolumeSpecName: "kube-api-access-5jqhp") pod "5d99bcd9-160f-4acf-b5a2-048e2cf4e69d" (UID: "5d99bcd9-160f-4acf-b5a2-048e2cf4e69d"). InnerVolumeSpecName "kube-api-access-5jqhp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:43:07 crc kubenswrapper[4755]: I1210 15:43:07.890983 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jqhp\" (UniqueName: \"kubernetes.io/projected/5d99bcd9-160f-4acf-b5a2-048e2cf4e69d-kube-api-access-5jqhp\") on node \"crc\" DevicePath \"\"" Dec 10 15:43:07 crc kubenswrapper[4755]: I1210 15:43:07.891012 4755 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d99bcd9-160f-4acf-b5a2-048e2cf4e69d-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 15:43:07 crc kubenswrapper[4755]: I1210 15:43:07.891027 4755 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/006b2612-d428-4655-9b11-7805124e026d-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 15:43:07 crc kubenswrapper[4755]: I1210 15:43:07.891035 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hkf2q\" (UniqueName: \"kubernetes.io/projected/006b2612-d428-4655-9b11-7805124e026d-kube-api-access-hkf2q\") on node \"crc\" DevicePath \"\"" Dec 10 15:43:07 crc kubenswrapper[4755]: I1210 15:43:07.891044 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4qmh\" (UniqueName: \"kubernetes.io/projected/4b387caa-3f36-4771-8046-f41a609fc2ba-kube-api-access-l4qmh\") on node \"crc\" DevicePath \"\"" Dec 10 15:43:08 crc kubenswrapper[4755]: I1210 15:43:08.000055 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-eb88-account-create-update-t2zx9" event={"ID":"4b53b622-751c-4098-95ef-86d7bbb6f03b","Type":"ContainerDied","Data":"a9051f6d5427111f7d926f0396b2538995b8b8a5c871722d24dc0d10ac9d9d41"} Dec 10 15:43:08 crc kubenswrapper[4755]: I1210 15:43:08.000107 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9051f6d5427111f7d926f0396b2538995b8b8a5c871722d24dc0d10ac9d9d41" Dec 10 15:43:08 crc kubenswrapper[4755]: I1210 15:43:08.000169 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-eb88-account-create-update-t2zx9" Dec 10 15:43:08 crc kubenswrapper[4755]: I1210 15:43:08.002279 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-q6n4p-config-9wh5p" event={"ID":"73750159-9d61-4895-a855-40a58ea7583e","Type":"ContainerDied","Data":"ca1033c4908377d9ced60f3bda3627661854fbae3ead3c68f67165f53ae83309"} Dec 10 15:43:08 crc kubenswrapper[4755]: I1210 15:43:08.002304 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca1033c4908377d9ced60f3bda3627661854fbae3ead3c68f67165f53ae83309" Dec 10 15:43:08 crc kubenswrapper[4755]: I1210 15:43:08.002341 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-q6n4p-config-9wh5p" Dec 10 15:43:08 crc kubenswrapper[4755]: I1210 15:43:08.008331 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-84g2q" event={"ID":"4b387caa-3f36-4771-8046-f41a609fc2ba","Type":"ContainerDied","Data":"13960d615a57877711ae402b6c6712c01e991d6e81328046b697230a4cdea7d3"} Dec 10 15:43:08 crc kubenswrapper[4755]: I1210 15:43:08.008356 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13960d615a57877711ae402b6c6712c01e991d6e81328046b697230a4cdea7d3" Dec 10 15:43:08 crc kubenswrapper[4755]: I1210 15:43:08.008428 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-84g2q" Dec 10 15:43:08 crc kubenswrapper[4755]: I1210 15:43:08.022679 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-a190-account-create-update-brt5j" Dec 10 15:43:08 crc kubenswrapper[4755]: I1210 15:43:08.022676 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-a190-account-create-update-brt5j" event={"ID":"d8fb0565-641d-4083-a004-068c8f2da61f","Type":"ContainerDied","Data":"e134247fc544c7d4d3aed2882b1860147c7a68a6d38be51e5bc6ff06274b77f3"} Dec 10 15:43:08 crc kubenswrapper[4755]: I1210 15:43:08.023426 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e134247fc544c7d4d3aed2882b1860147c7a68a6d38be51e5bc6ff06274b77f3" Dec 10 15:43:08 crc kubenswrapper[4755]: I1210 15:43:08.027286 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-2b7c-account-create-update-dtbb2" event={"ID":"5c5b35e4-4153-4f26-89bb-80e480230209","Type":"ContainerDied","Data":"52a8fca65ff596791f4af433a75d9e0d4980441a1416a461fd8e69d6bd835539"} Dec 10 15:43:08 crc kubenswrapper[4755]: I1210 15:43:08.027323 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52a8fca65ff596791f4af433a75d9e0d4980441a1416a461fd8e69d6bd835539" Dec 10 15:43:08 crc kubenswrapper[4755]: I1210 15:43:08.027385 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-2b7c-account-create-update-dtbb2" Dec 10 15:43:08 crc kubenswrapper[4755]: I1210 15:43:08.030350 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-rs5rh" Dec 10 15:43:08 crc kubenswrapper[4755]: I1210 15:43:08.030361 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-rs5rh" event={"ID":"7e112f6c-6906-4c34-9a05-ed827a4cd2ff","Type":"ContainerDied","Data":"ecd6203296452c25264cb3591622e14ae3c2c9fc7e181afb826425da1f1799d9"} Dec 10 15:43:08 crc kubenswrapper[4755]: I1210 15:43:08.030396 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ecd6203296452c25264cb3591622e14ae3c2c9fc7e181afb826425da1f1799d9" Dec 10 15:43:08 crc kubenswrapper[4755]: I1210 15:43:08.033895 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"72a1cce7-93cb-4fe1-9d12-3d4e19692457","Type":"ContainerStarted","Data":"055307128ac4705fc3bcfc5f1a8836944b19301db17925da5683a8c3dd4362c9"} Dec 10 15:43:08 crc kubenswrapper[4755]: I1210 15:43:08.033927 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"72a1cce7-93cb-4fe1-9d12-3d4e19692457","Type":"ContainerStarted","Data":"81fc5e63523bdd4fc2f89ace3fa5c75c2eb11a561fdab1da7ce363f0babc8984"} Dec 10 15:43:08 crc kubenswrapper[4755]: I1210 15:43:08.035972 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-9b26-account-create-update-q9h4x" event={"ID":"5d99bcd9-160f-4acf-b5a2-048e2cf4e69d","Type":"ContainerDied","Data":"73c0bad567f255cb839b5ab3a699c7387461e5d250ad042bc09ddb27ab2af1a5"} Dec 10 15:43:08 crc kubenswrapper[4755]: I1210 15:43:08.035994 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73c0bad567f255cb839b5ab3a699c7387461e5d250ad042bc09ddb27ab2af1a5" Dec 10 15:43:08 crc kubenswrapper[4755]: I1210 15:43:08.036051 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-9b26-account-create-update-q9h4x" Dec 10 15:43:08 crc kubenswrapper[4755]: I1210 15:43:08.038551 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-create-2872x" event={"ID":"006b2612-d428-4655-9b11-7805124e026d","Type":"ContainerDied","Data":"3530f9efac3211e43a3479bed25e3fc00e0694fefda378878970b4d7fa3e65fb"} Dec 10 15:43:08 crc kubenswrapper[4755]: I1210 15:43:08.038580 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3530f9efac3211e43a3479bed25e3fc00e0694fefda378878970b4d7fa3e65fb" Dec 10 15:43:08 crc kubenswrapper[4755]: I1210 15:43:08.038624 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-db-create-2872x" Dec 10 15:43:08 crc kubenswrapper[4755]: I1210 15:43:08.042192 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-6m9cc" event={"ID":"86e3285c-eddd-4bd4-bca9-8d9ccf2019e7","Type":"ContainerStarted","Data":"b68cfed6a96d957869624d241d8ebaf0a5235ef279b8fb377b567ee4bd898b64"} Dec 10 15:43:08 crc kubenswrapper[4755]: I1210 15:43:08.063758 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-6m9cc" podStartSLOduration=9.312105624 podStartE2EDuration="15.063739018s" podCreationTimestamp="2025-12-10 15:42:53 +0000 UTC" firstStartedPulling="2025-12-10 15:43:01.833561556 +0000 UTC m=+1178.434445188" lastFinishedPulling="2025-12-10 15:43:07.58519495 +0000 UTC m=+1184.186078582" observedRunningTime="2025-12-10 15:43:08.056817391 +0000 UTC m=+1184.657701023" watchObservedRunningTime="2025-12-10 15:43:08.063739018 +0000 UTC m=+1184.664622650" Dec 10 15:43:08 crc kubenswrapper[4755]: I1210 15:43:08.686645 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-q6n4p-config-9wh5p"] Dec 10 15:43:08 crc kubenswrapper[4755]: I1210 15:43:08.697485 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-q6n4p-config-9wh5p"] Dec 10 15:43:09 crc kubenswrapper[4755]: I1210 15:43:09.056557 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"72a1cce7-93cb-4fe1-9d12-3d4e19692457","Type":"ContainerStarted","Data":"5992702b31d62f5e1892b77ae4fc69096117e6aa5e5f92b196470a93cd0caef7"} Dec 10 15:43:09 crc kubenswrapper[4755]: I1210 15:43:09.768379 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73750159-9d61-4895-a855-40a58ea7583e" path="/var/lib/kubelet/pods/73750159-9d61-4895-a855-40a58ea7583e/volumes" Dec 10 15:43:10 crc kubenswrapper[4755]: I1210 15:43:10.071221 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"72a1cce7-93cb-4fe1-9d12-3d4e19692457","Type":"ContainerStarted","Data":"0b699be276cead267494bdf32c49a1f7ff02d7d756e599a02837c9d765ff7c81"} Dec 10 15:43:10 crc kubenswrapper[4755]: I1210 15:43:10.071272 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"72a1cce7-93cb-4fe1-9d12-3d4e19692457","Type":"ContainerStarted","Data":"2b94a980214d44f0b647e78dccfb7099e099f1cd387f31509e6b0ec81b53975e"} Dec 10 15:43:10 crc kubenswrapper[4755]: I1210 15:43:10.358907 4755 patch_prober.go:28] interesting pod/machine-config-daemon-ggt8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:43:10 crc kubenswrapper[4755]: I1210 15:43:10.358960 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:43:10 crc kubenswrapper[4755]: I1210 15:43:10.358996 4755 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" Dec 10 15:43:10 crc kubenswrapper[4755]: I1210 15:43:10.359670 4755 kuberuntime_manager.go:1027] "Message 
for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cf5ba83fd616480d24cf584cf15a0ce95565ee5fa4662cb49e23ad86486c0d52"} pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 10 15:43:10 crc kubenswrapper[4755]: I1210 15:43:10.359722 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" containerName="machine-config-daemon" containerID="cri-o://cf5ba83fd616480d24cf584cf15a0ce95565ee5fa4662cb49e23ad86486c0d52" gracePeriod=600 Dec 10 15:43:11 crc kubenswrapper[4755]: I1210 15:43:11.081060 4755 generic.go:334] "Generic (PLEG): container finished" podID="28273c51-8829-45f1-9edb-4f30a83b66e3" containerID="135e505c7d8108ff91c14255c0cd984e129761ac96038142027e9e59b71fe575" exitCode=0 Dec 10 15:43:11 crc kubenswrapper[4755]: I1210 15:43:11.081168 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"28273c51-8829-45f1-9edb-4f30a83b66e3","Type":"ContainerDied","Data":"135e505c7d8108ff91c14255c0cd984e129761ac96038142027e9e59b71fe575"} Dec 10 15:43:13 crc kubenswrapper[4755]: I1210 15:43:13.104759 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"28273c51-8829-45f1-9edb-4f30a83b66e3","Type":"ContainerStarted","Data":"a65aadc220b183d360b38983c6cf935985a9ca8d751922b28d0d431192a37bc0"} Dec 10 15:43:13 crc kubenswrapper[4755]: I1210 15:43:13.109266 4755 generic.go:334] "Generic (PLEG): container finished" podID="b132a8b9-1c99-414d-8773-229bf36b305d" containerID="cf5ba83fd616480d24cf584cf15a0ce95565ee5fa4662cb49e23ad86486c0d52" exitCode=0 Dec 10 15:43:13 crc kubenswrapper[4755]: I1210 15:43:13.109329 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" event={"ID":"b132a8b9-1c99-414d-8773-229bf36b305d","Type":"ContainerDied","Data":"cf5ba83fd616480d24cf584cf15a0ce95565ee5fa4662cb49e23ad86486c0d52"} Dec 10 15:43:13 crc kubenswrapper[4755]: I1210 15:43:13.109361 4755 scope.go:117] "RemoveContainer" containerID="4c970abaaa70f01d1899eae5e78bc6f2bf1fb1ebdd24f00f3de5524057d3b3cd" Dec 10 15:43:13 crc kubenswrapper[4755]: I1210 15:43:13.121067 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"72a1cce7-93cb-4fe1-9d12-3d4e19692457","Type":"ContainerStarted","Data":"10b8e24d9252783ba2dd059c0f1d3835687507cde9d033b43280169de866316b"} Dec 10 15:43:13 crc kubenswrapper[4755]: I1210 15:43:13.121116 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"72a1cce7-93cb-4fe1-9d12-3d4e19692457","Type":"ContainerStarted","Data":"47ae156eb8152ad934acf7fe5210bef516cbfac9dedfe34191db0898d3ed4df6"} Dec 10 15:43:14 crc kubenswrapper[4755]: I1210 15:43:14.145482 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-pth9b" event={"ID":"5d949f1d-5cb7-49e0-aa0b-52a615dfe4b5","Type":"ContainerStarted","Data":"bb7e8108b253bf2aa90140899f0db1509d53c3030162ec47375974cdca5a60c2"} Dec 10 15:43:14 crc kubenswrapper[4755]: I1210 15:43:14.148954 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" 
event={"ID":"b132a8b9-1c99-414d-8773-229bf36b305d","Type":"ContainerStarted","Data":"b9b7f6e29c3e4593fa445fe830b0d353f5a037cd1634fd06b5f6ef129b3368c3"} Dec 10 15:43:14 crc kubenswrapper[4755]: I1210 15:43:14.150833 4755 generic.go:334] "Generic (PLEG): container finished" podID="86e3285c-eddd-4bd4-bca9-8d9ccf2019e7" containerID="b68cfed6a96d957869624d241d8ebaf0a5235ef279b8fb377b567ee4bd898b64" exitCode=0 Dec 10 15:43:14 crc kubenswrapper[4755]: I1210 15:43:14.150859 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-6m9cc" event={"ID":"86e3285c-eddd-4bd4-bca9-8d9ccf2019e7","Type":"ContainerDied","Data":"b68cfed6a96d957869624d241d8ebaf0a5235ef279b8fb377b567ee4bd898b64"} Dec 10 15:43:14 crc kubenswrapper[4755]: I1210 15:43:14.169791 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-pth9b" podStartSLOduration=3.463037906 podStartE2EDuration="34.169770942s" podCreationTimestamp="2025-12-10 15:42:40 +0000 UTC" firstStartedPulling="2025-12-10 15:42:41.907350948 +0000 UTC m=+1158.508234580" lastFinishedPulling="2025-12-10 15:43:12.614083984 +0000 UTC m=+1189.214967616" observedRunningTime="2025-12-10 15:43:14.165071524 +0000 UTC m=+1190.765955166" watchObservedRunningTime="2025-12-10 15:43:14.169770942 +0000 UTC m=+1190.770654574" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:15.712049 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-6m9cc" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:15.831667 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86e3285c-eddd-4bd4-bca9-8d9ccf2019e7-config-data\") pod \"86e3285c-eddd-4bd4-bca9-8d9ccf2019e7\" (UID: \"86e3285c-eddd-4bd4-bca9-8d9ccf2019e7\") " Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:15.831826 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmw7p\" (UniqueName: \"kubernetes.io/projected/86e3285c-eddd-4bd4-bca9-8d9ccf2019e7-kube-api-access-cmw7p\") pod \"86e3285c-eddd-4bd4-bca9-8d9ccf2019e7\" (UID: \"86e3285c-eddd-4bd4-bca9-8d9ccf2019e7\") " Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:15.832022 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86e3285c-eddd-4bd4-bca9-8d9ccf2019e7-combined-ca-bundle\") pod \"86e3285c-eddd-4bd4-bca9-8d9ccf2019e7\" (UID: \"86e3285c-eddd-4bd4-bca9-8d9ccf2019e7\") " Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:15.839126 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86e3285c-eddd-4bd4-bca9-8d9ccf2019e7-kube-api-access-cmw7p" (OuterVolumeSpecName: "kube-api-access-cmw7p") pod "86e3285c-eddd-4bd4-bca9-8d9ccf2019e7" (UID: "86e3285c-eddd-4bd4-bca9-8d9ccf2019e7"). InnerVolumeSpecName "kube-api-access-cmw7p". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:15.866248 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86e3285c-eddd-4bd4-bca9-8d9ccf2019e7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "86e3285c-eddd-4bd4-bca9-8d9ccf2019e7" (UID: "86e3285c-eddd-4bd4-bca9-8d9ccf2019e7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:15.892665 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86e3285c-eddd-4bd4-bca9-8d9ccf2019e7-config-data" (OuterVolumeSpecName: "config-data") pod "86e3285c-eddd-4bd4-bca9-8d9ccf2019e7" (UID: "86e3285c-eddd-4bd4-bca9-8d9ccf2019e7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:15.935322 4755 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86e3285c-eddd-4bd4-bca9-8d9ccf2019e7-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:15.935349 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cmw7p\" (UniqueName: \"kubernetes.io/projected/86e3285c-eddd-4bd4-bca9-8d9ccf2019e7-kube-api-access-cmw7p\") on node \"crc\" DevicePath \"\"" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:15.935362 4755 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86e3285c-eddd-4bd4-bca9-8d9ccf2019e7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.176934 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-6m9cc" event={"ID":"86e3285c-eddd-4bd4-bca9-8d9ccf2019e7","Type":"ContainerDied","Data":"e79bdc3e5fd79ef1436926ba6b19cc0b2fadb302fe78cef110087ff94ad43375"} Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.176985 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e79bdc3e5fd79ef1436926ba6b19cc0b2fadb302fe78cef110087ff94ad43375" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.177074 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-6m9cc" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.187016 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"72a1cce7-93cb-4fe1-9d12-3d4e19692457","Type":"ContainerStarted","Data":"f41ba3ba1048a0f7c8405af59e12c98b299425f0acd6587adacbcc4a13ae3211"} Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.187069 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"72a1cce7-93cb-4fe1-9d12-3d4e19692457","Type":"ContainerStarted","Data":"a125015ad039e07c799ed1ee9c5fcb33ae698eda277f1a7a8143702fbd8473cf"} Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.187081 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"72a1cce7-93cb-4fe1-9d12-3d4e19692457","Type":"ContainerStarted","Data":"aa38722811d36a44d6842b112fc0a7fd7df0a758baeb6ff87e84380ed3032aa6"} Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.190509 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"28273c51-8829-45f1-9edb-4f30a83b66e3","Type":"ContainerStarted","Data":"4683fa2d344911754fe53ac0506342e0ddea40e6d8c1deda88c0670b20903b4a"} Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.190542 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"28273c51-8829-45f1-9edb-4f30a83b66e3","Type":"ContainerStarted","Data":"75a794e8aca685e3e52c8c9550560379a441489f469205e7ae376dfc3b1a813c"} Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.232083 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=27.232061671 podStartE2EDuration="27.232061671s" podCreationTimestamp="2025-12-10 15:42:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:43:16.228807283 +0000 UTC m=+1192.829690925" watchObservedRunningTime="2025-12-10 15:43:16.232061671 +0000 UTC m=+1192.832945323" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.517641 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9d85d47c-spfd2"] Dec 10 15:43:16 crc kubenswrapper[4755]: E1210 15:43:16.518118 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e112f6c-6906-4c34-9a05-ed827a4cd2ff" containerName="mariadb-database-create" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.518143 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e112f6c-6906-4c34-9a05-ed827a4cd2ff" containerName="mariadb-database-create" Dec 10 15:43:16 crc kubenswrapper[4755]: E1210 15:43:16.518163 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b53b622-751c-4098-95ef-86d7bbb6f03b" containerName="mariadb-account-create-update" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.518171 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b53b622-751c-4098-95ef-86d7bbb6f03b" containerName="mariadb-account-create-update" Dec 10 15:43:16 crc kubenswrapper[4755]: E1210 15:43:16.518182 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73750159-9d61-4895-a855-40a58ea7583e" containerName="ovn-config" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.518190 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="73750159-9d61-4895-a855-40a58ea7583e" containerName="ovn-config" 
Dec 10 15:43:16 crc kubenswrapper[4755]: E1210 15:43:16.518206 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8fb0565-641d-4083-a004-068c8f2da61f" containerName="mariadb-account-create-update" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.518213 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8fb0565-641d-4083-a004-068c8f2da61f" containerName="mariadb-account-create-update" Dec 10 15:43:16 crc kubenswrapper[4755]: E1210 15:43:16.518230 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="006b2612-d428-4655-9b11-7805124e026d" containerName="mariadb-database-create" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.518237 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="006b2612-d428-4655-9b11-7805124e026d" containerName="mariadb-database-create" Dec 10 15:43:16 crc kubenswrapper[4755]: E1210 15:43:16.518249 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c5b35e4-4153-4f26-89bb-80e480230209" containerName="mariadb-account-create-update" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.518256 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c5b35e4-4153-4f26-89bb-80e480230209" containerName="mariadb-account-create-update" Dec 10 15:43:16 crc kubenswrapper[4755]: E1210 15:43:16.518266 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b387caa-3f36-4771-8046-f41a609fc2ba" containerName="mariadb-database-create" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.518273 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b387caa-3f36-4771-8046-f41a609fc2ba" containerName="mariadb-database-create" Dec 10 15:43:16 crc kubenswrapper[4755]: E1210 15:43:16.518289 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86e3285c-eddd-4bd4-bca9-8d9ccf2019e7" containerName="keystone-db-sync" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.518298 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="86e3285c-eddd-4bd4-bca9-8d9ccf2019e7" containerName="keystone-db-sync" Dec 10 15:43:16 crc kubenswrapper[4755]: E1210 15:43:16.518318 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c9e6f66-2358-4311-98ff-066fe8edd720" containerName="mariadb-database-create" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.518325 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c9e6f66-2358-4311-98ff-066fe8edd720" containerName="mariadb-database-create" Dec 10 15:43:16 crc kubenswrapper[4755]: E1210 15:43:16.518340 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d99bcd9-160f-4acf-b5a2-048e2cf4e69d" containerName="mariadb-account-create-update" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.518348 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d99bcd9-160f-4acf-b5a2-048e2cf4e69d" containerName="mariadb-account-create-update" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.518575 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d99bcd9-160f-4acf-b5a2-048e2cf4e69d" containerName="mariadb-account-create-update" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.518597 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="73750159-9d61-4895-a855-40a58ea7583e" containerName="ovn-config" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.518609 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="86e3285c-eddd-4bd4-bca9-8d9ccf2019e7" containerName="keystone-db-sync" Dec 10 15:43:16 crc 
kubenswrapper[4755]: I1210 15:43:16.518619 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8fb0565-641d-4083-a004-068c8f2da61f" containerName="mariadb-account-create-update" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.518630 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b387caa-3f36-4771-8046-f41a609fc2ba" containerName="mariadb-database-create" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.526381 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e112f6c-6906-4c34-9a05-ed827a4cd2ff" containerName="mariadb-database-create" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.526447 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c5b35e4-4153-4f26-89bb-80e480230209" containerName="mariadb-account-create-update" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.526485 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b53b622-751c-4098-95ef-86d7bbb6f03b" containerName="mariadb-account-create-update" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.526509 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="006b2612-d428-4655-9b11-7805124e026d" containerName="mariadb-database-create" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.526533 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c9e6f66-2358-4311-98ff-066fe8edd720" containerName="mariadb-database-create" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.528024 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9d85d47c-spfd2" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.531920 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9d85d47c-spfd2"] Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.544338 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-jxg9d"] Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.545477 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-jxg9d" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.551314 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.551536 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.551703 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.551797 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-tp2mw" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.551760 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.565567 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-jxg9d"] Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.655714 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96jnk\" (UniqueName: \"kubernetes.io/projected/f2d8956f-4220-4044-9620-4dd519d81777-kube-api-access-96jnk\") pod \"dnsmasq-dns-5c9d85d47c-spfd2\" (UID: \"f2d8956f-4220-4044-9620-4dd519d81777\") " pod="openstack/dnsmasq-dns-5c9d85d47c-spfd2" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.655771 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62ecc4bf-c914-4fef-9ba0-099388953d74-config-data\") pod \"keystone-bootstrap-jxg9d\" (UID: \"62ecc4bf-c914-4fef-9ba0-099388953d74\") " pod="openstack/keystone-bootstrap-jxg9d" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.655806 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62ecc4bf-c914-4fef-9ba0-099388953d74-scripts\") pod \"keystone-bootstrap-jxg9d\" (UID: \"62ecc4bf-c914-4fef-9ba0-099388953d74\") " pod="openstack/keystone-bootstrap-jxg9d" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.655837 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2d8956f-4220-4044-9620-4dd519d81777-dns-svc\") pod \"dnsmasq-dns-5c9d85d47c-spfd2\" (UID: \"f2d8956f-4220-4044-9620-4dd519d81777\") " pod="openstack/dnsmasq-dns-5c9d85d47c-spfd2" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.655909 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/62ecc4bf-c914-4fef-9ba0-099388953d74-credential-keys\") pod \"keystone-bootstrap-jxg9d\" (UID: \"62ecc4bf-c914-4fef-9ba0-099388953d74\") " pod="openstack/keystone-bootstrap-jxg9d" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.656051 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62ecc4bf-c914-4fef-9ba0-099388953d74-combined-ca-bundle\") pod \"keystone-bootstrap-jxg9d\" (UID: \"62ecc4bf-c914-4fef-9ba0-099388953d74\") " pod="openstack/keystone-bootstrap-jxg9d" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.656165 4755 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg6s8\" (UniqueName: \"kubernetes.io/projected/62ecc4bf-c914-4fef-9ba0-099388953d74-kube-api-access-xg6s8\") pod \"keystone-bootstrap-jxg9d\" (UID: \"62ecc4bf-c914-4fef-9ba0-099388953d74\") " pod="openstack/keystone-bootstrap-jxg9d" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.656216 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2d8956f-4220-4044-9620-4dd519d81777-config\") pod \"dnsmasq-dns-5c9d85d47c-spfd2\" (UID: \"f2d8956f-4220-4044-9620-4dd519d81777\") " pod="openstack/dnsmasq-dns-5c9d85d47c-spfd2" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.656329 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2d8956f-4220-4044-9620-4dd519d81777-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9d85d47c-spfd2\" (UID: \"f2d8956f-4220-4044-9620-4dd519d81777\") " pod="openstack/dnsmasq-dns-5c9d85d47c-spfd2" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.656390 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/62ecc4bf-c914-4fef-9ba0-099388953d74-fernet-keys\") pod \"keystone-bootstrap-jxg9d\" (UID: \"62ecc4bf-c914-4fef-9ba0-099388953d74\") " pod="openstack/keystone-bootstrap-jxg9d" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.656426 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2d8956f-4220-4044-9620-4dd519d81777-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9d85d47c-spfd2\" (UID: \"f2d8956f-4220-4044-9620-4dd519d81777\") " pod="openstack/dnsmasq-dns-5c9d85d47c-spfd2" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.763678 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2d8956f-4220-4044-9620-4dd519d81777-dns-svc\") pod \"dnsmasq-dns-5c9d85d47c-spfd2\" (UID: \"f2d8956f-4220-4044-9620-4dd519d81777\") " pod="openstack/dnsmasq-dns-5c9d85d47c-spfd2" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.763944 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/62ecc4bf-c914-4fef-9ba0-099388953d74-credential-keys\") pod \"keystone-bootstrap-jxg9d\" (UID: \"62ecc4bf-c914-4fef-9ba0-099388953d74\") " pod="openstack/keystone-bootstrap-jxg9d" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.764033 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62ecc4bf-c914-4fef-9ba0-099388953d74-combined-ca-bundle\") pod \"keystone-bootstrap-jxg9d\" (UID: \"62ecc4bf-c914-4fef-9ba0-099388953d74\") " pod="openstack/keystone-bootstrap-jxg9d" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.764150 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xg6s8\" (UniqueName: \"kubernetes.io/projected/62ecc4bf-c914-4fef-9ba0-099388953d74-kube-api-access-xg6s8\") pod \"keystone-bootstrap-jxg9d\" (UID: \"62ecc4bf-c914-4fef-9ba0-099388953d74\") " pod="openstack/keystone-bootstrap-jxg9d" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.764226 4755 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2d8956f-4220-4044-9620-4dd519d81777-config\") pod \"dnsmasq-dns-5c9d85d47c-spfd2\" (UID: \"f2d8956f-4220-4044-9620-4dd519d81777\") " pod="openstack/dnsmasq-dns-5c9d85d47c-spfd2" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.764319 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2d8956f-4220-4044-9620-4dd519d81777-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9d85d47c-spfd2\" (UID: \"f2d8956f-4220-4044-9620-4dd519d81777\") " pod="openstack/dnsmasq-dns-5c9d85d47c-spfd2" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.764394 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/62ecc4bf-c914-4fef-9ba0-099388953d74-fernet-keys\") pod \"keystone-bootstrap-jxg9d\" (UID: \"62ecc4bf-c914-4fef-9ba0-099388953d74\") " pod="openstack/keystone-bootstrap-jxg9d" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.764513 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2d8956f-4220-4044-9620-4dd519d81777-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9d85d47c-spfd2\" (UID: \"f2d8956f-4220-4044-9620-4dd519d81777\") " pod="openstack/dnsmasq-dns-5c9d85d47c-spfd2" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.764609 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96jnk\" (UniqueName: \"kubernetes.io/projected/f2d8956f-4220-4044-9620-4dd519d81777-kube-api-access-96jnk\") pod \"dnsmasq-dns-5c9d85d47c-spfd2\" (UID: \"f2d8956f-4220-4044-9620-4dd519d81777\") " pod="openstack/dnsmasq-dns-5c9d85d47c-spfd2" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.764721 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62ecc4bf-c914-4fef-9ba0-099388953d74-config-data\") pod \"keystone-bootstrap-jxg9d\" (UID: \"62ecc4bf-c914-4fef-9ba0-099388953d74\") " pod="openstack/keystone-bootstrap-jxg9d" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.764836 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62ecc4bf-c914-4fef-9ba0-099388953d74-scripts\") pod \"keystone-bootstrap-jxg9d\" (UID: \"62ecc4bf-c914-4fef-9ba0-099388953d74\") " pod="openstack/keystone-bootstrap-jxg9d" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.766578 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2d8956f-4220-4044-9620-4dd519d81777-config\") pod \"dnsmasq-dns-5c9d85d47c-spfd2\" (UID: \"f2d8956f-4220-4044-9620-4dd519d81777\") " pod="openstack/dnsmasq-dns-5c9d85d47c-spfd2" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.767052 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2d8956f-4220-4044-9620-4dd519d81777-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9d85d47c-spfd2\" (UID: \"f2d8956f-4220-4044-9620-4dd519d81777\") " pod="openstack/dnsmasq-dns-5c9d85d47c-spfd2" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.767375 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/f2d8956f-4220-4044-9620-4dd519d81777-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9d85d47c-spfd2\" (UID: \"f2d8956f-4220-4044-9620-4dd519d81777\") " pod="openstack/dnsmasq-dns-5c9d85d47c-spfd2" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.767384 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2d8956f-4220-4044-9620-4dd519d81777-dns-svc\") pod \"dnsmasq-dns-5c9d85d47c-spfd2\" (UID: \"f2d8956f-4220-4044-9620-4dd519d81777\") " pod="openstack/dnsmasq-dns-5c9d85d47c-spfd2" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.780756 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/62ecc4bf-c914-4fef-9ba0-099388953d74-credential-keys\") pod \"keystone-bootstrap-jxg9d\" (UID: \"62ecc4bf-c914-4fef-9ba0-099388953d74\") " pod="openstack/keystone-bootstrap-jxg9d" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.782355 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/62ecc4bf-c914-4fef-9ba0-099388953d74-fernet-keys\") pod \"keystone-bootstrap-jxg9d\" (UID: \"62ecc4bf-c914-4fef-9ba0-099388953d74\") " pod="openstack/keystone-bootstrap-jxg9d" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.787148 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62ecc4bf-c914-4fef-9ba0-099388953d74-combined-ca-bundle\") pod \"keystone-bootstrap-jxg9d\" (UID: \"62ecc4bf-c914-4fef-9ba0-099388953d74\") " pod="openstack/keystone-bootstrap-jxg9d" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.787914 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62ecc4bf-c914-4fef-9ba0-099388953d74-scripts\") pod \"keystone-bootstrap-jxg9d\" (UID: \"62ecc4bf-c914-4fef-9ba0-099388953d74\") " pod="openstack/keystone-bootstrap-jxg9d" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.807160 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62ecc4bf-c914-4fef-9ba0-099388953d74-config-data\") pod \"keystone-bootstrap-jxg9d\" (UID: \"62ecc4bf-c914-4fef-9ba0-099388953d74\") " pod="openstack/keystone-bootstrap-jxg9d" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.824732 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-db-sync-jr6l4"] Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.829776 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-db-sync-jr6l4" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.847287 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xg6s8\" (UniqueName: \"kubernetes.io/projected/62ecc4bf-c914-4fef-9ba0-099388953d74-kube-api-access-xg6s8\") pod \"keystone-bootstrap-jxg9d\" (UID: \"62ecc4bf-c914-4fef-9ba0-099388953d74\") " pod="openstack/keystone-bootstrap-jxg9d" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.849709 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-scripts" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.857220 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-cloudkitty-dockercfg-6f74p" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.857623 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-client-internal" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.862059 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-sync-jr6l4"] Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.864249 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96jnk\" (UniqueName: \"kubernetes.io/projected/f2d8956f-4220-4044-9620-4dd519d81777-kube-api-access-96jnk\") pod \"dnsmasq-dns-5c9d85d47c-spfd2\" (UID: \"f2d8956f-4220-4044-9620-4dd519d81777\") " pod="openstack/dnsmasq-dns-5c9d85d47c-spfd2" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.873811 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-config-data" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.889942 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9d85d47c-spfd2" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.954530 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-cwjsz"] Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.955765 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-cwjsz" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.962439 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-cwjsz"] Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.969339 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.970496 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbc4e627-8238-49b1-a0ac-48d07a29c23a-scripts\") pod \"cloudkitty-db-sync-jr6l4\" (UID: \"cbc4e627-8238-49b1-a0ac-48d07a29c23a\") " pod="openstack/cloudkitty-db-sync-jr6l4" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.970532 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbc4e627-8238-49b1-a0ac-48d07a29c23a-config-data\") pod \"cloudkitty-db-sync-jr6l4\" (UID: \"cbc4e627-8238-49b1-a0ac-48d07a29c23a\") " pod="openstack/cloudkitty-db-sync-jr6l4" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.970548 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbc4e627-8238-49b1-a0ac-48d07a29c23a-combined-ca-bundle\") pod \"cloudkitty-db-sync-jr6l4\" (UID: \"cbc4e627-8238-49b1-a0ac-48d07a29c23a\") " pod="openstack/cloudkitty-db-sync-jr6l4" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.970606 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/cbc4e627-8238-49b1-a0ac-48d07a29c23a-certs\") pod \"cloudkitty-db-sync-jr6l4\" (UID: \"cbc4e627-8238-49b1-a0ac-48d07a29c23a\") " pod="openstack/cloudkitty-db-sync-jr6l4" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.970645 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hvvl\" (UniqueName: \"kubernetes.io/projected/cbc4e627-8238-49b1-a0ac-48d07a29c23a-kube-api-access-7hvvl\") pod \"cloudkitty-db-sync-jr6l4\" (UID: \"cbc4e627-8238-49b1-a0ac-48d07a29c23a\") " pod="openstack/cloudkitty-db-sync-jr6l4" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.977875 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-jxg9d" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.979015 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-gjn7t" Dec 10 15:43:16 crc kubenswrapper[4755]: I1210 15:43:16.979191 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.064645 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-9l6v9"] Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.071678 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-9l6v9" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.073861 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hvvl\" (UniqueName: \"kubernetes.io/projected/cbc4e627-8238-49b1-a0ac-48d07a29c23a-kube-api-access-7hvvl\") pod \"cloudkitty-db-sync-jr6l4\" (UID: \"cbc4e627-8238-49b1-a0ac-48d07a29c23a\") " pod="openstack/cloudkitty-db-sync-jr6l4" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.074118 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9b9ab1e5-2daa-4057-84e3-50bef68bbaca-db-sync-config-data\") pod \"cinder-db-sync-cwjsz\" (UID: \"9b9ab1e5-2daa-4057-84e3-50bef68bbaca\") " pod="openstack/cinder-db-sync-cwjsz" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.082814 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9b9ab1e5-2daa-4057-84e3-50bef68bbaca-etc-machine-id\") pod \"cinder-db-sync-cwjsz\" (UID: \"9b9ab1e5-2daa-4057-84e3-50bef68bbaca\") " pod="openstack/cinder-db-sync-cwjsz" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.083008 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbc4e627-8238-49b1-a0ac-48d07a29c23a-scripts\") pod \"cloudkitty-db-sync-jr6l4\" (UID: \"cbc4e627-8238-49b1-a0ac-48d07a29c23a\") " pod="openstack/cloudkitty-db-sync-jr6l4" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.083091 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b9ab1e5-2daa-4057-84e3-50bef68bbaca-scripts\") pod \"cinder-db-sync-cwjsz\" (UID: \"9b9ab1e5-2daa-4057-84e3-50bef68bbaca\") " pod="openstack/cinder-db-sync-cwjsz" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.083159 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ff59l\" (UniqueName: \"kubernetes.io/projected/9b9ab1e5-2daa-4057-84e3-50bef68bbaca-kube-api-access-ff59l\") pod \"cinder-db-sync-cwjsz\" (UID: \"9b9ab1e5-2daa-4057-84e3-50bef68bbaca\") " pod="openstack/cinder-db-sync-cwjsz" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.083236 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbc4e627-8238-49b1-a0ac-48d07a29c23a-config-data\") pod \"cloudkitty-db-sync-jr6l4\" (UID: \"cbc4e627-8238-49b1-a0ac-48d07a29c23a\") " pod="openstack/cloudkitty-db-sync-jr6l4" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.083301 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbc4e627-8238-49b1-a0ac-48d07a29c23a-combined-ca-bundle\") pod \"cloudkitty-db-sync-jr6l4\" (UID: \"cbc4e627-8238-49b1-a0ac-48d07a29c23a\") " pod="openstack/cloudkitty-db-sync-jr6l4" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.083382 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b9ab1e5-2daa-4057-84e3-50bef68bbaca-config-data\") pod \"cinder-db-sync-cwjsz\" (UID: \"9b9ab1e5-2daa-4057-84e3-50bef68bbaca\") " pod="openstack/cinder-db-sync-cwjsz" Dec 10 15:43:17 crc 
kubenswrapper[4755]: I1210 15:43:17.083484 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b9ab1e5-2daa-4057-84e3-50bef68bbaca-combined-ca-bundle\") pod \"cinder-db-sync-cwjsz\" (UID: \"9b9ab1e5-2daa-4057-84e3-50bef68bbaca\") " pod="openstack/cinder-db-sync-cwjsz" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.083696 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/cbc4e627-8238-49b1-a0ac-48d07a29c23a-certs\") pod \"cloudkitty-db-sync-jr6l4\" (UID: \"cbc4e627-8238-49b1-a0ac-48d07a29c23a\") " pod="openstack/cloudkitty-db-sync-jr6l4" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.080685 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-fqzhg" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.082056 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.082099 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.092883 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbc4e627-8238-49b1-a0ac-48d07a29c23a-scripts\") pod \"cloudkitty-db-sync-jr6l4\" (UID: \"cbc4e627-8238-49b1-a0ac-48d07a29c23a\") " pod="openstack/cloudkitty-db-sync-jr6l4" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.100495 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/cbc4e627-8238-49b1-a0ac-48d07a29c23a-certs\") pod \"cloudkitty-db-sync-jr6l4\" (UID: \"cbc4e627-8238-49b1-a0ac-48d07a29c23a\") " pod="openstack/cloudkitty-db-sync-jr6l4" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.103140 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbc4e627-8238-49b1-a0ac-48d07a29c23a-combined-ca-bundle\") pod \"cloudkitty-db-sync-jr6l4\" (UID: \"cbc4e627-8238-49b1-a0ac-48d07a29c23a\") " pod="openstack/cloudkitty-db-sync-jr6l4" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.106279 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbc4e627-8238-49b1-a0ac-48d07a29c23a-config-data\") pod \"cloudkitty-db-sync-jr6l4\" (UID: \"cbc4e627-8238-49b1-a0ac-48d07a29c23a\") " pod="openstack/cloudkitty-db-sync-jr6l4" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.128748 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hvvl\" (UniqueName: \"kubernetes.io/projected/cbc4e627-8238-49b1-a0ac-48d07a29c23a-kube-api-access-7hvvl\") pod \"cloudkitty-db-sync-jr6l4\" (UID: \"cbc4e627-8238-49b1-a0ac-48d07a29c23a\") " pod="openstack/cloudkitty-db-sync-jr6l4" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.150412 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-9l6v9"] Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.166600 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.168856 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.172764 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.173148 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.185975 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9b9ab1e5-2daa-4057-84e3-50bef68bbaca-db-sync-config-data\") pod \"cinder-db-sync-cwjsz\" (UID: \"9b9ab1e5-2daa-4057-84e3-50bef68bbaca\") " pod="openstack/cinder-db-sync-cwjsz" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.186034 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7e81227-e01b-4851-ac1f-d4ff480c0993-combined-ca-bundle\") pod \"neutron-db-sync-9l6v9\" (UID: \"c7e81227-e01b-4851-ac1f-d4ff480c0993\") " pod="openstack/neutron-db-sync-9l6v9" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.186079 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9b9ab1e5-2daa-4057-84e3-50bef68bbaca-etc-machine-id\") pod \"cinder-db-sync-cwjsz\" (UID: \"9b9ab1e5-2daa-4057-84e3-50bef68bbaca\") " pod="openstack/cinder-db-sync-cwjsz" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.186119 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ff59l\" (UniqueName: \"kubernetes.io/projected/9b9ab1e5-2daa-4057-84e3-50bef68bbaca-kube-api-access-ff59l\") pod \"cinder-db-sync-cwjsz\" (UID: \"9b9ab1e5-2daa-4057-84e3-50bef68bbaca\") " pod="openstack/cinder-db-sync-cwjsz" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.186135 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b9ab1e5-2daa-4057-84e3-50bef68bbaca-scripts\") pod \"cinder-db-sync-cwjsz\" (UID: \"9b9ab1e5-2daa-4057-84e3-50bef68bbaca\") " pod="openstack/cinder-db-sync-cwjsz" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.186163 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b9ab1e5-2daa-4057-84e3-50bef68bbaca-config-data\") pod \"cinder-db-sync-cwjsz\" (UID: \"9b9ab1e5-2daa-4057-84e3-50bef68bbaca\") " pod="openstack/cinder-db-sync-cwjsz" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.186181 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b9ab1e5-2daa-4057-84e3-50bef68bbaca-combined-ca-bundle\") pod \"cinder-db-sync-cwjsz\" (UID: \"9b9ab1e5-2daa-4057-84e3-50bef68bbaca\") " pod="openstack/cinder-db-sync-cwjsz" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.186239 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c7e81227-e01b-4851-ac1f-d4ff480c0993-config\") pod \"neutron-db-sync-9l6v9\" (UID: \"c7e81227-e01b-4851-ac1f-d4ff480c0993\") " pod="openstack/neutron-db-sync-9l6v9" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.186255 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-62cbt\" (UniqueName: \"kubernetes.io/projected/c7e81227-e01b-4851-ac1f-d4ff480c0993-kube-api-access-62cbt\") pod \"neutron-db-sync-9l6v9\" (UID: \"c7e81227-e01b-4851-ac1f-d4ff480c0993\") " pod="openstack/neutron-db-sync-9l6v9" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.190237 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9b9ab1e5-2daa-4057-84e3-50bef68bbaca-db-sync-config-data\") pod \"cinder-db-sync-cwjsz\" (UID: \"9b9ab1e5-2daa-4057-84e3-50bef68bbaca\") " pod="openstack/cinder-db-sync-cwjsz" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.190324 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9b9ab1e5-2daa-4057-84e3-50bef68bbaca-etc-machine-id\") pod \"cinder-db-sync-cwjsz\" (UID: \"9b9ab1e5-2daa-4057-84e3-50bef68bbaca\") " pod="openstack/cinder-db-sync-cwjsz" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.197358 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b9ab1e5-2daa-4057-84e3-50bef68bbaca-scripts\") pod \"cinder-db-sync-cwjsz\" (UID: \"9b9ab1e5-2daa-4057-84e3-50bef68bbaca\") " pod="openstack/cinder-db-sync-cwjsz" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.210036 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.216050 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b9ab1e5-2daa-4057-84e3-50bef68bbaca-combined-ca-bundle\") pod \"cinder-db-sync-cwjsz\" (UID: \"9b9ab1e5-2daa-4057-84e3-50bef68bbaca\") " pod="openstack/cinder-db-sync-cwjsz" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.228309 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ff59l\" (UniqueName: \"kubernetes.io/projected/9b9ab1e5-2daa-4057-84e3-50bef68bbaca-kube-api-access-ff59l\") pod \"cinder-db-sync-cwjsz\" (UID: \"9b9ab1e5-2daa-4057-84e3-50bef68bbaca\") " pod="openstack/cinder-db-sync-cwjsz" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.275883 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b9ab1e5-2daa-4057-84e3-50bef68bbaca-config-data\") pod \"cinder-db-sync-cwjsz\" (UID: \"9b9ab1e5-2daa-4057-84e3-50bef68bbaca\") " pod="openstack/cinder-db-sync-cwjsz" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.308309 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sllbg\" (UniqueName: \"kubernetes.io/projected/0ca4e52f-2a99-42bb-abb3-20a9ee8594b5-kube-api-access-sllbg\") pod \"ceilometer-0\" (UID: \"0ca4e52f-2a99-42bb-abb3-20a9ee8594b5\") " pod="openstack/ceilometer-0" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.308412 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c7e81227-e01b-4851-ac1f-d4ff480c0993-config\") pod \"neutron-db-sync-9l6v9\" (UID: \"c7e81227-e01b-4851-ac1f-d4ff480c0993\") " pod="openstack/neutron-db-sync-9l6v9" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.308441 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62cbt\" (UniqueName: 
\"kubernetes.io/projected/c7e81227-e01b-4851-ac1f-d4ff480c0993-kube-api-access-62cbt\") pod \"neutron-db-sync-9l6v9\" (UID: \"c7e81227-e01b-4851-ac1f-d4ff480c0993\") " pod="openstack/neutron-db-sync-9l6v9" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.308638 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7e81227-e01b-4851-ac1f-d4ff480c0993-combined-ca-bundle\") pod \"neutron-db-sync-9l6v9\" (UID: \"c7e81227-e01b-4851-ac1f-d4ff480c0993\") " pod="openstack/neutron-db-sync-9l6v9" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.308678 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ca4e52f-2a99-42bb-abb3-20a9ee8594b5-run-httpd\") pod \"ceilometer-0\" (UID: \"0ca4e52f-2a99-42bb-abb3-20a9ee8594b5\") " pod="openstack/ceilometer-0" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.308745 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ca4e52f-2a99-42bb-abb3-20a9ee8594b5-scripts\") pod \"ceilometer-0\" (UID: \"0ca4e52f-2a99-42bb-abb3-20a9ee8594b5\") " pod="openstack/ceilometer-0" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.308826 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ca4e52f-2a99-42bb-abb3-20a9ee8594b5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0ca4e52f-2a99-42bb-abb3-20a9ee8594b5\") " pod="openstack/ceilometer-0" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.308905 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0ca4e52f-2a99-42bb-abb3-20a9ee8594b5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0ca4e52f-2a99-42bb-abb3-20a9ee8594b5\") " pod="openstack/ceilometer-0" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.308928 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ca4e52f-2a99-42bb-abb3-20a9ee8594b5-config-data\") pod \"ceilometer-0\" (UID: \"0ca4e52f-2a99-42bb-abb3-20a9ee8594b5\") " pod="openstack/ceilometer-0" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.308953 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ca4e52f-2a99-42bb-abb3-20a9ee8594b5-log-httpd\") pod \"ceilometer-0\" (UID: \"0ca4e52f-2a99-42bb-abb3-20a9ee8594b5\") " pod="openstack/ceilometer-0" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.318109 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-db-sync-jr6l4" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.334386 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7e81227-e01b-4851-ac1f-d4ff480c0993-combined-ca-bundle\") pod \"neutron-db-sync-9l6v9\" (UID: \"c7e81227-e01b-4851-ac1f-d4ff480c0993\") " pod="openstack/neutron-db-sync-9l6v9" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.366284 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c7e81227-e01b-4851-ac1f-d4ff480c0993-config\") pod \"neutron-db-sync-9l6v9\" (UID: \"c7e81227-e01b-4851-ac1f-d4ff480c0993\") " pod="openstack/neutron-db-sync-9l6v9" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.374220 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-cwjsz" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.374914 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-zttv2"] Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.377242 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-zttv2" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.431181 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.431191 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62cbt\" (UniqueName: \"kubernetes.io/projected/c7e81227-e01b-4851-ac1f-d4ff480c0993-kube-api-access-62cbt\") pod \"neutron-db-sync-9l6v9\" (UID: \"c7e81227-e01b-4851-ac1f-d4ff480c0993\") " pod="openstack/neutron-db-sync-9l6v9" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.431382 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.433141 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-jtzzl" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.434768 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-zttv2"] Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.442036 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sllbg\" (UniqueName: \"kubernetes.io/projected/0ca4e52f-2a99-42bb-abb3-20a9ee8594b5-kube-api-access-sllbg\") pod \"ceilometer-0\" (UID: \"0ca4e52f-2a99-42bb-abb3-20a9ee8594b5\") " pod="openstack/ceilometer-0" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.442310 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ca4e52f-2a99-42bb-abb3-20a9ee8594b5-run-httpd\") pod \"ceilometer-0\" (UID: \"0ca4e52f-2a99-42bb-abb3-20a9ee8594b5\") " pod="openstack/ceilometer-0" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.442351 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ca4e52f-2a99-42bb-abb3-20a9ee8594b5-scripts\") pod \"ceilometer-0\" (UID: \"0ca4e52f-2a99-42bb-abb3-20a9ee8594b5\") " pod="openstack/ceilometer-0" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.442406 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ca4e52f-2a99-42bb-abb3-20a9ee8594b5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0ca4e52f-2a99-42bb-abb3-20a9ee8594b5\") " pod="openstack/ceilometer-0" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.442478 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0ca4e52f-2a99-42bb-abb3-20a9ee8594b5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0ca4e52f-2a99-42bb-abb3-20a9ee8594b5\") " pod="openstack/ceilometer-0" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.442496 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ca4e52f-2a99-42bb-abb3-20a9ee8594b5-config-data\") pod \"ceilometer-0\" (UID: \"0ca4e52f-2a99-42bb-abb3-20a9ee8594b5\") " pod="openstack/ceilometer-0" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.442512 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ca4e52f-2a99-42bb-abb3-20a9ee8594b5-log-httpd\") pod \"ceilometer-0\" (UID: \"0ca4e52f-2a99-42bb-abb3-20a9ee8594b5\") " pod="openstack/ceilometer-0" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.442952 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ca4e52f-2a99-42bb-abb3-20a9ee8594b5-log-httpd\") pod \"ceilometer-0\" (UID: \"0ca4e52f-2a99-42bb-abb3-20a9ee8594b5\") " pod="openstack/ceilometer-0" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.443417 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ca4e52f-2a99-42bb-abb3-20a9ee8594b5-run-httpd\") pod \"ceilometer-0\" (UID: \"0ca4e52f-2a99-42bb-abb3-20a9ee8594b5\") " pod="openstack/ceilometer-0" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.446779 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ca4e52f-2a99-42bb-abb3-20a9ee8594b5-scripts\") pod \"ceilometer-0\" (UID: \"0ca4e52f-2a99-42bb-abb3-20a9ee8594b5\") " pod="openstack/ceilometer-0" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.449479 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0ca4e52f-2a99-42bb-abb3-20a9ee8594b5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0ca4e52f-2a99-42bb-abb3-20a9ee8594b5\") " pod="openstack/ceilometer-0" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.459932 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ca4e52f-2a99-42bb-abb3-20a9ee8594b5-config-data\") pod \"ceilometer-0\" (UID: \"0ca4e52f-2a99-42bb-abb3-20a9ee8594b5\") " pod="openstack/ceilometer-0" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.459991 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-6fnmv"] Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.463234 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-6fnmv" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.463591 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ca4e52f-2a99-42bb-abb3-20a9ee8594b5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0ca4e52f-2a99-42bb-abb3-20a9ee8594b5\") " pod="openstack/ceilometer-0" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.475966 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-lnwnx" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.476208 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.490014 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sllbg\" (UniqueName: \"kubernetes.io/projected/0ca4e52f-2a99-42bb-abb3-20a9ee8594b5-kube-api-access-sllbg\") pod \"ceilometer-0\" (UID: \"0ca4e52f-2a99-42bb-abb3-20a9ee8594b5\") " pod="openstack/ceilometer-0" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.499028 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9d85d47c-spfd2"] Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.518690 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-6fnmv"] Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.518746 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6ffb94d8ff-7h7zf"] Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.520304 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ffb94d8ff-7h7zf" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.528461 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.550923 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07e5d790-41c9-4f66-87e4-6088fe8bbc8f-combined-ca-bundle\") pod \"placement-db-sync-zttv2\" (UID: \"07e5d790-41c9-4f66-87e4-6088fe8bbc8f\") " pod="openstack/placement-db-sync-zttv2" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.550978 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07e5d790-41c9-4f66-87e4-6088fe8bbc8f-logs\") pod \"placement-db-sync-zttv2\" (UID: \"07e5d790-41c9-4f66-87e4-6088fe8bbc8f\") " pod="openstack/placement-db-sync-zttv2" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.551085 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07e5d790-41c9-4f66-87e4-6088fe8bbc8f-scripts\") pod \"placement-db-sync-zttv2\" (UID: \"07e5d790-41c9-4f66-87e4-6088fe8bbc8f\") " pod="openstack/placement-db-sync-zttv2" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.551114 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07e5d790-41c9-4f66-87e4-6088fe8bbc8f-config-data\") pod \"placement-db-sync-zttv2\" (UID: \"07e5d790-41c9-4f66-87e4-6088fe8bbc8f\") " pod="openstack/placement-db-sync-zttv2" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.551152 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sk854\" (UniqueName: \"kubernetes.io/projected/07e5d790-41c9-4f66-87e4-6088fe8bbc8f-kube-api-access-sk854\") pod \"placement-db-sync-zttv2\" (UID: \"07e5d790-41c9-4f66-87e4-6088fe8bbc8f\") " pod="openstack/placement-db-sync-zttv2" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.562370 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6ffb94d8ff-7h7zf"] Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.612715 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"72a1cce7-93cb-4fe1-9d12-3d4e19692457","Type":"ContainerStarted","Data":"14edc39ee47b1add3d829638a23160fb93544e4f598322af1eed93d0c579f16c"} Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.612774 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"72a1cce7-93cb-4fe1-9d12-3d4e19692457","Type":"ContainerStarted","Data":"dab2834c883a8db8d7bcf7d80e297b1074e192abd7d414272f3837b5ca9f8e36"} Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.652355 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0672e1d3-4c3f-405c-8ffc-362fb524209d-config\") pod \"dnsmasq-dns-6ffb94d8ff-7h7zf\" (UID: \"0672e1d3-4c3f-405c-8ffc-362fb524209d\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-7h7zf" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.652420 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07e5d790-41c9-4f66-87e4-6088fe8bbc8f-scripts\") pod \"placement-db-sync-zttv2\" (UID: \"07e5d790-41c9-4f66-87e4-6088fe8bbc8f\") " pod="openstack/placement-db-sync-zttv2" Dec 10 15:43:17 
crc kubenswrapper[4755]: I1210 15:43:17.652444 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0672e1d3-4c3f-405c-8ffc-362fb524209d-ovsdbserver-nb\") pod \"dnsmasq-dns-6ffb94d8ff-7h7zf\" (UID: \"0672e1d3-4c3f-405c-8ffc-362fb524209d\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-7h7zf" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.652478 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07e5d790-41c9-4f66-87e4-6088fe8bbc8f-config-data\") pod \"placement-db-sync-zttv2\" (UID: \"07e5d790-41c9-4f66-87e4-6088fe8bbc8f\") " pod="openstack/placement-db-sync-zttv2" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.652511 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sk854\" (UniqueName: \"kubernetes.io/projected/07e5d790-41c9-4f66-87e4-6088fe8bbc8f-kube-api-access-sk854\") pod \"placement-db-sync-zttv2\" (UID: \"07e5d790-41c9-4f66-87e4-6088fe8bbc8f\") " pod="openstack/placement-db-sync-zttv2" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.652550 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0672e1d3-4c3f-405c-8ffc-362fb524209d-dns-svc\") pod \"dnsmasq-dns-6ffb94d8ff-7h7zf\" (UID: \"0672e1d3-4c3f-405c-8ffc-362fb524209d\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-7h7zf" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.652574 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3821a978-80ec-4434-a871-ed026186f498-combined-ca-bundle\") pod \"barbican-db-sync-6fnmv\" (UID: \"3821a978-80ec-4434-a871-ed026186f498\") " pod="openstack/barbican-db-sync-6fnmv" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.652598 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6t25v\" (UniqueName: \"kubernetes.io/projected/3821a978-80ec-4434-a871-ed026186f498-kube-api-access-6t25v\") pod \"barbican-db-sync-6fnmv\" (UID: \"3821a978-80ec-4434-a871-ed026186f498\") " pod="openstack/barbican-db-sync-6fnmv" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.652627 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3821a978-80ec-4434-a871-ed026186f498-db-sync-config-data\") pod \"barbican-db-sync-6fnmv\" (UID: \"3821a978-80ec-4434-a871-ed026186f498\") " pod="openstack/barbican-db-sync-6fnmv" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.652645 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s2ns\" (UniqueName: \"kubernetes.io/projected/0672e1d3-4c3f-405c-8ffc-362fb524209d-kube-api-access-7s2ns\") pod \"dnsmasq-dns-6ffb94d8ff-7h7zf\" (UID: \"0672e1d3-4c3f-405c-8ffc-362fb524209d\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-7h7zf" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.652671 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07e5d790-41c9-4f66-87e4-6088fe8bbc8f-combined-ca-bundle\") pod \"placement-db-sync-zttv2\" (UID: \"07e5d790-41c9-4f66-87e4-6088fe8bbc8f\") " 
pod="openstack/placement-db-sync-zttv2" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.652694 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07e5d790-41c9-4f66-87e4-6088fe8bbc8f-logs\") pod \"placement-db-sync-zttv2\" (UID: \"07e5d790-41c9-4f66-87e4-6088fe8bbc8f\") " pod="openstack/placement-db-sync-zttv2" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.652714 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0672e1d3-4c3f-405c-8ffc-362fb524209d-ovsdbserver-sb\") pod \"dnsmasq-dns-6ffb94d8ff-7h7zf\" (UID: \"0672e1d3-4c3f-405c-8ffc-362fb524209d\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-7h7zf" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.658346 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07e5d790-41c9-4f66-87e4-6088fe8bbc8f-config-data\") pod \"placement-db-sync-zttv2\" (UID: \"07e5d790-41c9-4f66-87e4-6088fe8bbc8f\") " pod="openstack/placement-db-sync-zttv2" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.660499 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07e5d790-41c9-4f66-87e4-6088fe8bbc8f-logs\") pod \"placement-db-sync-zttv2\" (UID: \"07e5d790-41c9-4f66-87e4-6088fe8bbc8f\") " pod="openstack/placement-db-sync-zttv2" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.664455 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07e5d790-41c9-4f66-87e4-6088fe8bbc8f-scripts\") pod \"placement-db-sync-zttv2\" (UID: \"07e5d790-41c9-4f66-87e4-6088fe8bbc8f\") " pod="openstack/placement-db-sync-zttv2" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.673263 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07e5d790-41c9-4f66-87e4-6088fe8bbc8f-combined-ca-bundle\") pod \"placement-db-sync-zttv2\" (UID: \"07e5d790-41c9-4f66-87e4-6088fe8bbc8f\") " pod="openstack/placement-db-sync-zttv2" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.698947 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-9l6v9" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.699967 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sk854\" (UniqueName: \"kubernetes.io/projected/07e5d790-41c9-4f66-87e4-6088fe8bbc8f-kube-api-access-sk854\") pod \"placement-db-sync-zttv2\" (UID: \"07e5d790-41c9-4f66-87e4-6088fe8bbc8f\") " pod="openstack/placement-db-sync-zttv2" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.733953 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9d85d47c-spfd2"] Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.756194 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0672e1d3-4c3f-405c-8ffc-362fb524209d-ovsdbserver-nb\") pod \"dnsmasq-dns-6ffb94d8ff-7h7zf\" (UID: \"0672e1d3-4c3f-405c-8ffc-362fb524209d\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-7h7zf" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.756568 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0672e1d3-4c3f-405c-8ffc-362fb524209d-dns-svc\") pod \"dnsmasq-dns-6ffb94d8ff-7h7zf\" (UID: \"0672e1d3-4c3f-405c-8ffc-362fb524209d\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-7h7zf" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.756602 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3821a978-80ec-4434-a871-ed026186f498-combined-ca-bundle\") pod \"barbican-db-sync-6fnmv\" (UID: \"3821a978-80ec-4434-a871-ed026186f498\") " pod="openstack/barbican-db-sync-6fnmv" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.756624 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6t25v\" (UniqueName: \"kubernetes.io/projected/3821a978-80ec-4434-a871-ed026186f498-kube-api-access-6t25v\") pod \"barbican-db-sync-6fnmv\" (UID: \"3821a978-80ec-4434-a871-ed026186f498\") " pod="openstack/barbican-db-sync-6fnmv" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.756653 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3821a978-80ec-4434-a871-ed026186f498-db-sync-config-data\") pod \"barbican-db-sync-6fnmv\" (UID: \"3821a978-80ec-4434-a871-ed026186f498\") " pod="openstack/barbican-db-sync-6fnmv" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.756675 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7s2ns\" (UniqueName: \"kubernetes.io/projected/0672e1d3-4c3f-405c-8ffc-362fb524209d-kube-api-access-7s2ns\") pod \"dnsmasq-dns-6ffb94d8ff-7h7zf\" (UID: \"0672e1d3-4c3f-405c-8ffc-362fb524209d\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-7h7zf" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.756712 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0672e1d3-4c3f-405c-8ffc-362fb524209d-ovsdbserver-sb\") pod \"dnsmasq-dns-6ffb94d8ff-7h7zf\" (UID: \"0672e1d3-4c3f-405c-8ffc-362fb524209d\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-7h7zf" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.756821 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/0672e1d3-4c3f-405c-8ffc-362fb524209d-config\") pod \"dnsmasq-dns-6ffb94d8ff-7h7zf\" (UID: \"0672e1d3-4c3f-405c-8ffc-362fb524209d\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-7h7zf" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.757700 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0672e1d3-4c3f-405c-8ffc-362fb524209d-config\") pod \"dnsmasq-dns-6ffb94d8ff-7h7zf\" (UID: \"0672e1d3-4c3f-405c-8ffc-362fb524209d\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-7h7zf" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.759151 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0672e1d3-4c3f-405c-8ffc-362fb524209d-ovsdbserver-nb\") pod \"dnsmasq-dns-6ffb94d8ff-7h7zf\" (UID: \"0672e1d3-4c3f-405c-8ffc-362fb524209d\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-7h7zf" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.760093 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0672e1d3-4c3f-405c-8ffc-362fb524209d-dns-svc\") pod \"dnsmasq-dns-6ffb94d8ff-7h7zf\" (UID: \"0672e1d3-4c3f-405c-8ffc-362fb524209d\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-7h7zf" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.761442 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0672e1d3-4c3f-405c-8ffc-362fb524209d-ovsdbserver-sb\") pod \"dnsmasq-dns-6ffb94d8ff-7h7zf\" (UID: \"0672e1d3-4c3f-405c-8ffc-362fb524209d\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-7h7zf" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.767193 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-zttv2" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.778209 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3821a978-80ec-4434-a871-ed026186f498-db-sync-config-data\") pod \"barbican-db-sync-6fnmv\" (UID: \"3821a978-80ec-4434-a871-ed026186f498\") " pod="openstack/barbican-db-sync-6fnmv" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.783703 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3821a978-80ec-4434-a871-ed026186f498-combined-ca-bundle\") pod \"barbican-db-sync-6fnmv\" (UID: \"3821a978-80ec-4434-a871-ed026186f498\") " pod="openstack/barbican-db-sync-6fnmv" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.802535 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7s2ns\" (UniqueName: \"kubernetes.io/projected/0672e1d3-4c3f-405c-8ffc-362fb524209d-kube-api-access-7s2ns\") pod \"dnsmasq-dns-6ffb94d8ff-7h7zf\" (UID: \"0672e1d3-4c3f-405c-8ffc-362fb524209d\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-7h7zf" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.839243 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6t25v\" (UniqueName: \"kubernetes.io/projected/3821a978-80ec-4434-a871-ed026186f498-kube-api-access-6t25v\") pod \"barbican-db-sync-6fnmv\" (UID: \"3821a978-80ec-4434-a871-ed026186f498\") " pod="openstack/barbican-db-sync-6fnmv" Dec 10 15:43:17 crc kubenswrapper[4755]: I1210 15:43:17.856536 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6ffb94d8ff-7h7zf" Dec 10 15:43:18 crc kubenswrapper[4755]: I1210 15:43:18.103187 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-6fnmv" Dec 10 15:43:18 crc kubenswrapper[4755]: I1210 15:43:18.130095 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-jxg9d"] Dec 10 15:43:18 crc kubenswrapper[4755]: I1210 15:43:18.454512 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-cwjsz"] Dec 10 15:43:18 crc kubenswrapper[4755]: I1210 15:43:18.522692 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 10 15:43:18 crc kubenswrapper[4755]: I1210 15:43:18.532287 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-sync-jr6l4"] Dec 10 15:43:18 crc kubenswrapper[4755]: W1210 15:43:18.543404 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9b9ab1e5_2daa_4057_84e3_50bef68bbaca.slice/crio-1c2aa86a6dd7c5c73d503209d54262107b8c28431522cc048320276dcd8b4426 WatchSource:0}: Error finding container 1c2aa86a6dd7c5c73d503209d54262107b8c28431522cc048320276dcd8b4426: Status 404 returned error can't find the container with id 1c2aa86a6dd7c5c73d503209d54262107b8c28431522cc048320276dcd8b4426 Dec 10 15:43:18 crc kubenswrapper[4755]: W1210 15:43:18.558122 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcbc4e627_8238_49b1_a0ac_48d07a29c23a.slice/crio-c08ece9847a439ede0169a8bee4c81ad7536b4ffcc9b7b12ea55f6e32538368f WatchSource:0}: Error finding container c08ece9847a439ede0169a8bee4c81ad7536b4ffcc9b7b12ea55f6e32538368f: Status 404 returned error can't find the container with id c08ece9847a439ede0169a8bee4c81ad7536b4ffcc9b7b12ea55f6e32538368f Dec 10 15:43:18 crc kubenswrapper[4755]: I1210 15:43:18.639668 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ca4e52f-2a99-42bb-abb3-20a9ee8594b5","Type":"ContainerStarted","Data":"fc8b0507fb5263093be4c6eeeb13faf83f94628ca65a23e2a6d4d687727e6c5e"} Dec 10 15:43:18 crc kubenswrapper[4755]: I1210 15:43:18.648997 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-cwjsz" event={"ID":"9b9ab1e5-2daa-4057-84e3-50bef68bbaca","Type":"ContainerStarted","Data":"1c2aa86a6dd7c5c73d503209d54262107b8c28431522cc048320276dcd8b4426"} Dec 10 15:43:18 crc kubenswrapper[4755]: I1210 15:43:18.659960 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jxg9d" event={"ID":"62ecc4bf-c914-4fef-9ba0-099388953d74","Type":"ContainerStarted","Data":"61ec576673b119492ef55ae7157c1045cf64bccd293b94417ad3fc220afdb373"} Dec 10 15:43:18 crc kubenswrapper[4755]: I1210 15:43:18.683764 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"72a1cce7-93cb-4fe1-9d12-3d4e19692457","Type":"ContainerStarted","Data":"c1de1645a2b863c18a5d3877128ea1f38cb3e458705c62475270719fec2519c8"} Dec 10 15:43:18 crc kubenswrapper[4755]: I1210 15:43:18.684076 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"72a1cce7-93cb-4fe1-9d12-3d4e19692457","Type":"ContainerStarted","Data":"006c8e7dc3188403fe820d9dc77a3e9eff608fa7fece1efc6e1f641698cbccb8"} Dec 10 15:43:18 crc kubenswrapper[4755]: I1210 15:43:18.687403 4755 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-jxg9d" podStartSLOduration=2.687386109 podStartE2EDuration="2.687386109s" podCreationTimestamp="2025-12-10 15:43:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:43:18.679245938 +0000 UTC m=+1195.280129570" watchObservedRunningTime="2025-12-10 15:43:18.687386109 +0000 UTC m=+1195.288269741" Dec 10 15:43:18 crc kubenswrapper[4755]: I1210 15:43:18.693184 4755 generic.go:334] "Generic (PLEG): container finished" podID="f2d8956f-4220-4044-9620-4dd519d81777" containerID="219066a1da03b572c819a91535652266ace2247d8049c5875efb7bd8167900de" exitCode=0 Dec 10 15:43:18 crc kubenswrapper[4755]: I1210 15:43:18.693315 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9d85d47c-spfd2" event={"ID":"f2d8956f-4220-4044-9620-4dd519d81777","Type":"ContainerDied","Data":"219066a1da03b572c819a91535652266ace2247d8049c5875efb7bd8167900de"} Dec 10 15:43:18 crc kubenswrapper[4755]: I1210 15:43:18.693387 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9d85d47c-spfd2" event={"ID":"f2d8956f-4220-4044-9620-4dd519d81777","Type":"ContainerStarted","Data":"9db32d2c1a6e58cdd806acc8c51d16ccd565a5c6076066b581340ff0487a5238"} Dec 10 15:43:18 crc kubenswrapper[4755]: I1210 15:43:18.695658 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-jr6l4" event={"ID":"cbc4e627-8238-49b1-a0ac-48d07a29c23a","Type":"ContainerStarted","Data":"c08ece9847a439ede0169a8bee4c81ad7536b4ffcc9b7b12ea55f6e32538368f"} Dec 10 15:43:18 crc kubenswrapper[4755]: I1210 15:43:18.746362 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-9l6v9"] Dec 10 15:43:18 crc kubenswrapper[4755]: I1210 15:43:18.756359 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-zttv2"] Dec 10 15:43:18 crc kubenswrapper[4755]: I1210 15:43:18.768549 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6ffb94d8ff-7h7zf"] Dec 10 15:43:18 crc kubenswrapper[4755]: I1210 15:43:18.780508 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=37.975634086 podStartE2EDuration="51.780489986s" podCreationTimestamp="2025-12-10 15:42:27 +0000 UTC" firstStartedPulling="2025-12-10 15:43:01.798154026 +0000 UTC m=+1178.399037648" lastFinishedPulling="2025-12-10 15:43:15.603009916 +0000 UTC m=+1192.203893548" observedRunningTime="2025-12-10 15:43:18.743107442 +0000 UTC m=+1195.343991074" watchObservedRunningTime="2025-12-10 15:43:18.780489986 +0000 UTC m=+1195.381373618" Dec 10 15:43:18 crc kubenswrapper[4755]: W1210 15:43:18.802391 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0672e1d3_4c3f_405c_8ffc_362fb524209d.slice/crio-d11fc31beab57a5f47f0dca4fab0f5ddd023138da0026b1bb51b4b2890dfdac0 WatchSource:0}: Error finding container d11fc31beab57a5f47f0dca4fab0f5ddd023138da0026b1bb51b4b2890dfdac0: Status 404 returned error can't find the container with id d11fc31beab57a5f47f0dca4fab0f5ddd023138da0026b1bb51b4b2890dfdac0 Dec 10 15:43:19 crc kubenswrapper[4755]: I1210 15:43:19.005731 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-6fnmv"] Dec 10 15:43:19 crc kubenswrapper[4755]: W1210 15:43:19.022019 4755 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3821a978_80ec_4434_a871_ed026186f498.slice/crio-e6c3fbda279e21f480c71909873a5abec80c8a66d531a36d9ac01498c4abee44 WatchSource:0}: Error finding container e6c3fbda279e21f480c71909873a5abec80c8a66d531a36d9ac01498c4abee44: Status 404 returned error can't find the container with id e6c3fbda279e21f480c71909873a5abec80c8a66d531a36d9ac01498c4abee44 Dec 10 15:43:19 crc kubenswrapper[4755]: I1210 15:43:19.273929 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6ffb94d8ff-7h7zf"] Dec 10 15:43:19 crc kubenswrapper[4755]: I1210 15:43:19.287520 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-9gscg"] Dec 10 15:43:19 crc kubenswrapper[4755]: I1210 15:43:19.289107 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-9gscg" Dec 10 15:43:19 crc kubenswrapper[4755]: I1210 15:43:19.296428 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Dec 10 15:43:19 crc kubenswrapper[4755]: I1210 15:43:19.312221 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9d85d47c-spfd2" Dec 10 15:43:19 crc kubenswrapper[4755]: I1210 15:43:19.329354 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-9gscg"] Dec 10 15:43:19 crc kubenswrapper[4755]: I1210 15:43:19.431692 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2d8956f-4220-4044-9620-4dd519d81777-config\") pod \"f2d8956f-4220-4044-9620-4dd519d81777\" (UID: \"f2d8956f-4220-4044-9620-4dd519d81777\") " Dec 10 15:43:19 crc kubenswrapper[4755]: I1210 15:43:19.431838 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2d8956f-4220-4044-9620-4dd519d81777-ovsdbserver-sb\") pod \"f2d8956f-4220-4044-9620-4dd519d81777\" (UID: \"f2d8956f-4220-4044-9620-4dd519d81777\") " Dec 10 15:43:19 crc kubenswrapper[4755]: I1210 15:43:19.431902 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2d8956f-4220-4044-9620-4dd519d81777-dns-svc\") pod \"f2d8956f-4220-4044-9620-4dd519d81777\" (UID: \"f2d8956f-4220-4044-9620-4dd519d81777\") " Dec 10 15:43:19 crc kubenswrapper[4755]: I1210 15:43:19.431994 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96jnk\" (UniqueName: \"kubernetes.io/projected/f2d8956f-4220-4044-9620-4dd519d81777-kube-api-access-96jnk\") pod \"f2d8956f-4220-4044-9620-4dd519d81777\" (UID: \"f2d8956f-4220-4044-9620-4dd519d81777\") " Dec 10 15:43:19 crc kubenswrapper[4755]: I1210 15:43:19.432051 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2d8956f-4220-4044-9620-4dd519d81777-ovsdbserver-nb\") pod \"f2d8956f-4220-4044-9620-4dd519d81777\" (UID: \"f2d8956f-4220-4044-9620-4dd519d81777\") " Dec 10 15:43:19 crc kubenswrapper[4755]: I1210 15:43:19.432363 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b52de9e0-e981-4d39-addb-6c732611ea50-config\") pod \"dnsmasq-dns-cf78879c9-9gscg\" 
(UID: \"b52de9e0-e981-4d39-addb-6c732611ea50\") " pod="openstack/dnsmasq-dns-cf78879c9-9gscg" Dec 10 15:43:19 crc kubenswrapper[4755]: I1210 15:43:19.432387 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b52de9e0-e981-4d39-addb-6c732611ea50-ovsdbserver-sb\") pod \"dnsmasq-dns-cf78879c9-9gscg\" (UID: \"b52de9e0-e981-4d39-addb-6c732611ea50\") " pod="openstack/dnsmasq-dns-cf78879c9-9gscg" Dec 10 15:43:19 crc kubenswrapper[4755]: I1210 15:43:19.432426 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wf4ct\" (UniqueName: \"kubernetes.io/projected/b52de9e0-e981-4d39-addb-6c732611ea50-kube-api-access-wf4ct\") pod \"dnsmasq-dns-cf78879c9-9gscg\" (UID: \"b52de9e0-e981-4d39-addb-6c732611ea50\") " pod="openstack/dnsmasq-dns-cf78879c9-9gscg" Dec 10 15:43:19 crc kubenswrapper[4755]: I1210 15:43:19.432457 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b52de9e0-e981-4d39-addb-6c732611ea50-dns-swift-storage-0\") pod \"dnsmasq-dns-cf78879c9-9gscg\" (UID: \"b52de9e0-e981-4d39-addb-6c732611ea50\") " pod="openstack/dnsmasq-dns-cf78879c9-9gscg" Dec 10 15:43:19 crc kubenswrapper[4755]: I1210 15:43:19.432503 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b52de9e0-e981-4d39-addb-6c732611ea50-ovsdbserver-nb\") pod \"dnsmasq-dns-cf78879c9-9gscg\" (UID: \"b52de9e0-e981-4d39-addb-6c732611ea50\") " pod="openstack/dnsmasq-dns-cf78879c9-9gscg" Dec 10 15:43:19 crc kubenswrapper[4755]: I1210 15:43:19.432538 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b52de9e0-e981-4d39-addb-6c732611ea50-dns-svc\") pod \"dnsmasq-dns-cf78879c9-9gscg\" (UID: \"b52de9e0-e981-4d39-addb-6c732611ea50\") " pod="openstack/dnsmasq-dns-cf78879c9-9gscg" Dec 10 15:43:19 crc kubenswrapper[4755]: I1210 15:43:19.451628 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2d8956f-4220-4044-9620-4dd519d81777-kube-api-access-96jnk" (OuterVolumeSpecName: "kube-api-access-96jnk") pod "f2d8956f-4220-4044-9620-4dd519d81777" (UID: "f2d8956f-4220-4044-9620-4dd519d81777"). InnerVolumeSpecName "kube-api-access-96jnk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:43:19 crc kubenswrapper[4755]: I1210 15:43:19.464942 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2d8956f-4220-4044-9620-4dd519d81777-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f2d8956f-4220-4044-9620-4dd519d81777" (UID: "f2d8956f-4220-4044-9620-4dd519d81777"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:43:19 crc kubenswrapper[4755]: I1210 15:43:19.470364 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2d8956f-4220-4044-9620-4dd519d81777-config" (OuterVolumeSpecName: "config") pod "f2d8956f-4220-4044-9620-4dd519d81777" (UID: "f2d8956f-4220-4044-9620-4dd519d81777"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:43:19 crc kubenswrapper[4755]: I1210 15:43:19.514480 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2d8956f-4220-4044-9620-4dd519d81777-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f2d8956f-4220-4044-9620-4dd519d81777" (UID: "f2d8956f-4220-4044-9620-4dd519d81777"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:43:19 crc kubenswrapper[4755]: I1210 15:43:19.517015 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2d8956f-4220-4044-9620-4dd519d81777-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f2d8956f-4220-4044-9620-4dd519d81777" (UID: "f2d8956f-4220-4044-9620-4dd519d81777"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:43:19 crc kubenswrapper[4755]: I1210 15:43:19.534982 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b52de9e0-e981-4d39-addb-6c732611ea50-config\") pod \"dnsmasq-dns-cf78879c9-9gscg\" (UID: \"b52de9e0-e981-4d39-addb-6c732611ea50\") " pod="openstack/dnsmasq-dns-cf78879c9-9gscg" Dec 10 15:43:19 crc kubenswrapper[4755]: I1210 15:43:19.535026 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b52de9e0-e981-4d39-addb-6c732611ea50-ovsdbserver-sb\") pod \"dnsmasq-dns-cf78879c9-9gscg\" (UID: \"b52de9e0-e981-4d39-addb-6c732611ea50\") " pod="openstack/dnsmasq-dns-cf78879c9-9gscg" Dec 10 15:43:19 crc kubenswrapper[4755]: I1210 15:43:19.535067 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wf4ct\" (UniqueName: \"kubernetes.io/projected/b52de9e0-e981-4d39-addb-6c732611ea50-kube-api-access-wf4ct\") pod \"dnsmasq-dns-cf78879c9-9gscg\" (UID: \"b52de9e0-e981-4d39-addb-6c732611ea50\") " pod="openstack/dnsmasq-dns-cf78879c9-9gscg" Dec 10 15:43:19 crc kubenswrapper[4755]: I1210 15:43:19.535097 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b52de9e0-e981-4d39-addb-6c732611ea50-dns-swift-storage-0\") pod \"dnsmasq-dns-cf78879c9-9gscg\" (UID: \"b52de9e0-e981-4d39-addb-6c732611ea50\") " pod="openstack/dnsmasq-dns-cf78879c9-9gscg" Dec 10 15:43:19 crc kubenswrapper[4755]: I1210 15:43:19.535136 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b52de9e0-e981-4d39-addb-6c732611ea50-ovsdbserver-nb\") pod \"dnsmasq-dns-cf78879c9-9gscg\" (UID: \"b52de9e0-e981-4d39-addb-6c732611ea50\") " pod="openstack/dnsmasq-dns-cf78879c9-9gscg" Dec 10 15:43:19 crc kubenswrapper[4755]: I1210 15:43:19.535174 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b52de9e0-e981-4d39-addb-6c732611ea50-dns-svc\") pod \"dnsmasq-dns-cf78879c9-9gscg\" (UID: \"b52de9e0-e981-4d39-addb-6c732611ea50\") " pod="openstack/dnsmasq-dns-cf78879c9-9gscg" Dec 10 15:43:19 crc kubenswrapper[4755]: I1210 15:43:19.535291 4755 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2d8956f-4220-4044-9620-4dd519d81777-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 10 15:43:19 crc 
kubenswrapper[4755]: I1210 15:43:19.535305 4755 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2d8956f-4220-4044-9620-4dd519d81777-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:43:19 crc kubenswrapper[4755]: I1210 15:43:19.535313 4755 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2d8956f-4220-4044-9620-4dd519d81777-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 10 15:43:19 crc kubenswrapper[4755]: I1210 15:43:19.535321 4755 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2d8956f-4220-4044-9620-4dd519d81777-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 10 15:43:19 crc kubenswrapper[4755]: I1210 15:43:19.535329 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96jnk\" (UniqueName: \"kubernetes.io/projected/f2d8956f-4220-4044-9620-4dd519d81777-kube-api-access-96jnk\") on node \"crc\" DevicePath \"\"" Dec 10 15:43:19 crc kubenswrapper[4755]: I1210 15:43:19.538976 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b52de9e0-e981-4d39-addb-6c732611ea50-ovsdbserver-sb\") pod \"dnsmasq-dns-cf78879c9-9gscg\" (UID: \"b52de9e0-e981-4d39-addb-6c732611ea50\") " pod="openstack/dnsmasq-dns-cf78879c9-9gscg" Dec 10 15:43:19 crc kubenswrapper[4755]: I1210 15:43:19.539051 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b52de9e0-e981-4d39-addb-6c732611ea50-config\") pod \"dnsmasq-dns-cf78879c9-9gscg\" (UID: \"b52de9e0-e981-4d39-addb-6c732611ea50\") " pod="openstack/dnsmasq-dns-cf78879c9-9gscg" Dec 10 15:43:19 crc kubenswrapper[4755]: I1210 15:43:19.539379 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b52de9e0-e981-4d39-addb-6c732611ea50-ovsdbserver-nb\") pod \"dnsmasq-dns-cf78879c9-9gscg\" (UID: \"b52de9e0-e981-4d39-addb-6c732611ea50\") " pod="openstack/dnsmasq-dns-cf78879c9-9gscg" Dec 10 15:43:19 crc kubenswrapper[4755]: I1210 15:43:19.539684 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b52de9e0-e981-4d39-addb-6c732611ea50-dns-swift-storage-0\") pod \"dnsmasq-dns-cf78879c9-9gscg\" (UID: \"b52de9e0-e981-4d39-addb-6c732611ea50\") " pod="openstack/dnsmasq-dns-cf78879c9-9gscg" Dec 10 15:43:19 crc kubenswrapper[4755]: I1210 15:43:19.540202 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b52de9e0-e981-4d39-addb-6c732611ea50-dns-svc\") pod \"dnsmasq-dns-cf78879c9-9gscg\" (UID: \"b52de9e0-e981-4d39-addb-6c732611ea50\") " pod="openstack/dnsmasq-dns-cf78879c9-9gscg" Dec 10 15:43:19 crc kubenswrapper[4755]: I1210 15:43:19.554661 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wf4ct\" (UniqueName: \"kubernetes.io/projected/b52de9e0-e981-4d39-addb-6c732611ea50-kube-api-access-wf4ct\") pod \"dnsmasq-dns-cf78879c9-9gscg\" (UID: \"b52de9e0-e981-4d39-addb-6c732611ea50\") " pod="openstack/dnsmasq-dns-cf78879c9-9gscg" Dec 10 15:43:19 crc kubenswrapper[4755]: I1210 15:43:19.681176 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-9gscg" Dec 10 15:43:19 crc kubenswrapper[4755]: I1210 15:43:19.755877 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jxg9d" event={"ID":"62ecc4bf-c914-4fef-9ba0-099388953d74","Type":"ContainerStarted","Data":"f6f038b2833539a2083146e7c928b3f14dfe295137a5074a381347edec0d9d9c"} Dec 10 15:43:19 crc kubenswrapper[4755]: I1210 15:43:19.815703 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9d85d47c-spfd2" Dec 10 15:43:19 crc kubenswrapper[4755]: I1210 15:43:19.849030 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-zttv2" event={"ID":"07e5d790-41c9-4f66-87e4-6088fe8bbc8f","Type":"ContainerStarted","Data":"1f6ed92a3d6c24de97595bdc9d0c78344b496c1688609ab2fa71da3f448babf1"} Dec 10 15:43:19 crc kubenswrapper[4755]: I1210 15:43:19.849070 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9d85d47c-spfd2" event={"ID":"f2d8956f-4220-4044-9620-4dd519d81777","Type":"ContainerDied","Data":"9db32d2c1a6e58cdd806acc8c51d16ccd565a5c6076066b581340ff0487a5238"} Dec 10 15:43:19 crc kubenswrapper[4755]: I1210 15:43:19.849094 4755 scope.go:117] "RemoveContainer" containerID="219066a1da03b572c819a91535652266ace2247d8049c5875efb7bd8167900de" Dec 10 15:43:19 crc kubenswrapper[4755]: I1210 15:43:19.881523 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-9l6v9" event={"ID":"c7e81227-e01b-4851-ac1f-d4ff480c0993","Type":"ContainerStarted","Data":"b4742b126f04bd73fc89a611417ca7e607aa6268f96f09a20199c1a37ce9f841"} Dec 10 15:43:19 crc kubenswrapper[4755]: I1210 15:43:19.881575 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-9l6v9" event={"ID":"c7e81227-e01b-4851-ac1f-d4ff480c0993","Type":"ContainerStarted","Data":"50a69470631e39cc191d910a9becadead74499f0dcb7c9ea3449f7b35a258e52"} Dec 10 15:43:19 crc kubenswrapper[4755]: I1210 15:43:19.887885 4755 generic.go:334] "Generic (PLEG): container finished" podID="0672e1d3-4c3f-405c-8ffc-362fb524209d" containerID="18a39bebfc1cc0dae38807dc15328a224c4cc01d89e27bc3b13465b772429ab7" exitCode=0 Dec 10 15:43:19 crc kubenswrapper[4755]: I1210 15:43:19.887968 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ffb94d8ff-7h7zf" event={"ID":"0672e1d3-4c3f-405c-8ffc-362fb524209d","Type":"ContainerDied","Data":"18a39bebfc1cc0dae38807dc15328a224c4cc01d89e27bc3b13465b772429ab7"} Dec 10 15:43:19 crc kubenswrapper[4755]: I1210 15:43:19.887993 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ffb94d8ff-7h7zf" event={"ID":"0672e1d3-4c3f-405c-8ffc-362fb524209d","Type":"ContainerStarted","Data":"d11fc31beab57a5f47f0dca4fab0f5ddd023138da0026b1bb51b4b2890dfdac0"} Dec 10 15:43:19 crc kubenswrapper[4755]: I1210 15:43:19.929030 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-6fnmv" event={"ID":"3821a978-80ec-4434-a871-ed026186f498","Type":"ContainerStarted","Data":"e6c3fbda279e21f480c71909873a5abec80c8a66d531a36d9ac01498c4abee44"} Dec 10 15:43:19 crc kubenswrapper[4755]: I1210 15:43:19.954775 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-9l6v9" podStartSLOduration=3.954757851 podStartE2EDuration="3.954757851s" podCreationTimestamp="2025-12-10 15:43:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:43:19.949152398 +0000 UTC m=+1196.550036020" watchObservedRunningTime="2025-12-10 15:43:19.954757851 +0000 UTC m=+1196.555641473" Dec 10 15:43:20 crc kubenswrapper[4755]: I1210 15:43:20.086599 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 10 15:43:20 crc kubenswrapper[4755]: I1210 15:43:20.165661 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9d85d47c-spfd2"] Dec 10 15:43:20 crc kubenswrapper[4755]: I1210 15:43:20.195576 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9d85d47c-spfd2"] Dec 10 15:43:20 crc kubenswrapper[4755]: I1210 15:43:20.326578 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Dec 10 15:43:20 crc kubenswrapper[4755]: I1210 15:43:20.326617 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Dec 10 15:43:20 crc kubenswrapper[4755]: I1210 15:43:20.341904 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Dec 10 15:43:20 crc kubenswrapper[4755]: I1210 15:43:20.828648 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-9gscg"] Dec 10 15:43:20 crc kubenswrapper[4755]: W1210 15:43:20.878178 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb52de9e0_e981_4d39_addb_6c732611ea50.slice/crio-ae676adee217d0d01dac6ff12cad01968ca51415886da1b545f76b2674ce8bd6 WatchSource:0}: Error finding container ae676adee217d0d01dac6ff12cad01968ca51415886da1b545f76b2674ce8bd6: Status 404 returned error can't find the container with id ae676adee217d0d01dac6ff12cad01968ca51415886da1b545f76b2674ce8bd6 Dec 10 15:43:20 crc kubenswrapper[4755]: I1210 15:43:20.980973 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ffb94d8ff-7h7zf" event={"ID":"0672e1d3-4c3f-405c-8ffc-362fb524209d","Type":"ContainerStarted","Data":"7707b09d5ee71565b88438479df9b283f32841c70a42f411415f8a39402c3c54"} Dec 10 15:43:20 crc kubenswrapper[4755]: I1210 15:43:20.981145 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6ffb94d8ff-7h7zf" podUID="0672e1d3-4c3f-405c-8ffc-362fb524209d" containerName="dnsmasq-dns" containerID="cri-o://7707b09d5ee71565b88438479df9b283f32841c70a42f411415f8a39402c3c54" gracePeriod=10 Dec 10 15:43:20 crc kubenswrapper[4755]: I1210 15:43:20.981386 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6ffb94d8ff-7h7zf" Dec 10 15:43:20 crc kubenswrapper[4755]: I1210 15:43:20.989891 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-9gscg" event={"ID":"b52de9e0-e981-4d39-addb-6c732611ea50","Type":"ContainerStarted","Data":"ae676adee217d0d01dac6ff12cad01968ca51415886da1b545f76b2674ce8bd6"} Dec 10 15:43:21 crc kubenswrapper[4755]: I1210 15:43:21.008193 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6ffb94d8ff-7h7zf" podStartSLOduration=4.008173705 podStartE2EDuration="4.008173705s" podCreationTimestamp="2025-12-10 15:43:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 
15:43:21.00763798 +0000 UTC m=+1197.608521612" watchObservedRunningTime="2025-12-10 15:43:21.008173705 +0000 UTC m=+1197.609057337" Dec 10 15:43:21 crc kubenswrapper[4755]: I1210 15:43:21.012083 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Dec 10 15:43:21 crc kubenswrapper[4755]: I1210 15:43:21.798673 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2d8956f-4220-4044-9620-4dd519d81777" path="/var/lib/kubelet/pods/f2d8956f-4220-4044-9620-4dd519d81777/volumes" Dec 10 15:43:22 crc kubenswrapper[4755]: I1210 15:43:22.021218 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ffb94d8ff-7h7zf" Dec 10 15:43:22 crc kubenswrapper[4755]: I1210 15:43:22.063701 4755 generic.go:334] "Generic (PLEG): container finished" podID="b52de9e0-e981-4d39-addb-6c732611ea50" containerID="6eb54c8013ad59f9914cf37119f3febd41540b21f0dd34434e38ec9508aa118b" exitCode=0 Dec 10 15:43:22 crc kubenswrapper[4755]: I1210 15:43:22.063791 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-9gscg" event={"ID":"b52de9e0-e981-4d39-addb-6c732611ea50","Type":"ContainerDied","Data":"6eb54c8013ad59f9914cf37119f3febd41540b21f0dd34434e38ec9508aa118b"} Dec 10 15:43:22 crc kubenswrapper[4755]: I1210 15:43:22.092603 4755 generic.go:334] "Generic (PLEG): container finished" podID="0672e1d3-4c3f-405c-8ffc-362fb524209d" containerID="7707b09d5ee71565b88438479df9b283f32841c70a42f411415f8a39402c3c54" exitCode=0 Dec 10 15:43:22 crc kubenswrapper[4755]: I1210 15:43:22.092672 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ffb94d8ff-7h7zf" Dec 10 15:43:22 crc kubenswrapper[4755]: I1210 15:43:22.092715 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ffb94d8ff-7h7zf" event={"ID":"0672e1d3-4c3f-405c-8ffc-362fb524209d","Type":"ContainerDied","Data":"7707b09d5ee71565b88438479df9b283f32841c70a42f411415f8a39402c3c54"} Dec 10 15:43:22 crc kubenswrapper[4755]: I1210 15:43:22.092739 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ffb94d8ff-7h7zf" event={"ID":"0672e1d3-4c3f-405c-8ffc-362fb524209d","Type":"ContainerDied","Data":"d11fc31beab57a5f47f0dca4fab0f5ddd023138da0026b1bb51b4b2890dfdac0"} Dec 10 15:43:22 crc kubenswrapper[4755]: I1210 15:43:22.092753 4755 scope.go:117] "RemoveContainer" containerID="7707b09d5ee71565b88438479df9b283f32841c70a42f411415f8a39402c3c54" Dec 10 15:43:22 crc kubenswrapper[4755]: I1210 15:43:22.146654 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0672e1d3-4c3f-405c-8ffc-362fb524209d-ovsdbserver-sb\") pod \"0672e1d3-4c3f-405c-8ffc-362fb524209d\" (UID: \"0672e1d3-4c3f-405c-8ffc-362fb524209d\") " Dec 10 15:43:22 crc kubenswrapper[4755]: I1210 15:43:22.146734 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7s2ns\" (UniqueName: \"kubernetes.io/projected/0672e1d3-4c3f-405c-8ffc-362fb524209d-kube-api-access-7s2ns\") pod \"0672e1d3-4c3f-405c-8ffc-362fb524209d\" (UID: \"0672e1d3-4c3f-405c-8ffc-362fb524209d\") " Dec 10 15:43:22 crc kubenswrapper[4755]: I1210 15:43:22.146814 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0672e1d3-4c3f-405c-8ffc-362fb524209d-dns-svc\") pod 
\"0672e1d3-4c3f-405c-8ffc-362fb524209d\" (UID: \"0672e1d3-4c3f-405c-8ffc-362fb524209d\") " Dec 10 15:43:22 crc kubenswrapper[4755]: I1210 15:43:22.146918 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0672e1d3-4c3f-405c-8ffc-362fb524209d-config\") pod \"0672e1d3-4c3f-405c-8ffc-362fb524209d\" (UID: \"0672e1d3-4c3f-405c-8ffc-362fb524209d\") " Dec 10 15:43:22 crc kubenswrapper[4755]: I1210 15:43:22.146955 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0672e1d3-4c3f-405c-8ffc-362fb524209d-ovsdbserver-nb\") pod \"0672e1d3-4c3f-405c-8ffc-362fb524209d\" (UID: \"0672e1d3-4c3f-405c-8ffc-362fb524209d\") " Dec 10 15:43:22 crc kubenswrapper[4755]: I1210 15:43:22.155752 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0672e1d3-4c3f-405c-8ffc-362fb524209d-kube-api-access-7s2ns" (OuterVolumeSpecName: "kube-api-access-7s2ns") pod "0672e1d3-4c3f-405c-8ffc-362fb524209d" (UID: "0672e1d3-4c3f-405c-8ffc-362fb524209d"). InnerVolumeSpecName "kube-api-access-7s2ns". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:43:22 crc kubenswrapper[4755]: I1210 15:43:22.222752 4755 scope.go:117] "RemoveContainer" containerID="18a39bebfc1cc0dae38807dc15328a224c4cc01d89e27bc3b13465b772429ab7" Dec 10 15:43:22 crc kubenswrapper[4755]: I1210 15:43:22.241927 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0672e1d3-4c3f-405c-8ffc-362fb524209d-config" (OuterVolumeSpecName: "config") pod "0672e1d3-4c3f-405c-8ffc-362fb524209d" (UID: "0672e1d3-4c3f-405c-8ffc-362fb524209d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:43:22 crc kubenswrapper[4755]: I1210 15:43:22.250073 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7s2ns\" (UniqueName: \"kubernetes.io/projected/0672e1d3-4c3f-405c-8ffc-362fb524209d-kube-api-access-7s2ns\") on node \"crc\" DevicePath \"\"" Dec 10 15:43:22 crc kubenswrapper[4755]: I1210 15:43:22.250661 4755 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0672e1d3-4c3f-405c-8ffc-362fb524209d-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:43:22 crc kubenswrapper[4755]: I1210 15:43:22.264858 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0672e1d3-4c3f-405c-8ffc-362fb524209d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0672e1d3-4c3f-405c-8ffc-362fb524209d" (UID: "0672e1d3-4c3f-405c-8ffc-362fb524209d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:43:22 crc kubenswrapper[4755]: I1210 15:43:22.272320 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0672e1d3-4c3f-405c-8ffc-362fb524209d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0672e1d3-4c3f-405c-8ffc-362fb524209d" (UID: "0672e1d3-4c3f-405c-8ffc-362fb524209d"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:43:22 crc kubenswrapper[4755]: I1210 15:43:22.274412 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0672e1d3-4c3f-405c-8ffc-362fb524209d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0672e1d3-4c3f-405c-8ffc-362fb524209d" (UID: "0672e1d3-4c3f-405c-8ffc-362fb524209d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:43:22 crc kubenswrapper[4755]: I1210 15:43:22.304896 4755 scope.go:117] "RemoveContainer" containerID="7707b09d5ee71565b88438479df9b283f32841c70a42f411415f8a39402c3c54" Dec 10 15:43:22 crc kubenswrapper[4755]: E1210 15:43:22.305219 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7707b09d5ee71565b88438479df9b283f32841c70a42f411415f8a39402c3c54\": container with ID starting with 7707b09d5ee71565b88438479df9b283f32841c70a42f411415f8a39402c3c54 not found: ID does not exist" containerID="7707b09d5ee71565b88438479df9b283f32841c70a42f411415f8a39402c3c54" Dec 10 15:43:22 crc kubenswrapper[4755]: I1210 15:43:22.305242 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7707b09d5ee71565b88438479df9b283f32841c70a42f411415f8a39402c3c54"} err="failed to get container status \"7707b09d5ee71565b88438479df9b283f32841c70a42f411415f8a39402c3c54\": rpc error: code = NotFound desc = could not find container \"7707b09d5ee71565b88438479df9b283f32841c70a42f411415f8a39402c3c54\": container with ID starting with 7707b09d5ee71565b88438479df9b283f32841c70a42f411415f8a39402c3c54 not found: ID does not exist" Dec 10 15:43:22 crc kubenswrapper[4755]: I1210 15:43:22.305264 4755 scope.go:117] "RemoveContainer" containerID="18a39bebfc1cc0dae38807dc15328a224c4cc01d89e27bc3b13465b772429ab7" Dec 10 15:43:22 crc kubenswrapper[4755]: E1210 15:43:22.305441 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18a39bebfc1cc0dae38807dc15328a224c4cc01d89e27bc3b13465b772429ab7\": container with ID starting with 18a39bebfc1cc0dae38807dc15328a224c4cc01d89e27bc3b13465b772429ab7 not found: ID does not exist" containerID="18a39bebfc1cc0dae38807dc15328a224c4cc01d89e27bc3b13465b772429ab7" Dec 10 15:43:22 crc kubenswrapper[4755]: I1210 15:43:22.305457 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18a39bebfc1cc0dae38807dc15328a224c4cc01d89e27bc3b13465b772429ab7"} err="failed to get container status \"18a39bebfc1cc0dae38807dc15328a224c4cc01d89e27bc3b13465b772429ab7\": rpc error: code = NotFound desc = could not find container \"18a39bebfc1cc0dae38807dc15328a224c4cc01d89e27bc3b13465b772429ab7\": container with ID starting with 18a39bebfc1cc0dae38807dc15328a224c4cc01d89e27bc3b13465b772429ab7 not found: ID does not exist" Dec 10 15:43:22 crc kubenswrapper[4755]: I1210 15:43:22.352330 4755 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0672e1d3-4c3f-405c-8ffc-362fb524209d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 10 15:43:22 crc kubenswrapper[4755]: I1210 15:43:22.352360 4755 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0672e1d3-4c3f-405c-8ffc-362fb524209d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 10 15:43:22 crc kubenswrapper[4755]: I1210 
15:43:22.352369 4755 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0672e1d3-4c3f-405c-8ffc-362fb524209d-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 10 15:43:22 crc kubenswrapper[4755]: I1210 15:43:22.445447 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6ffb94d8ff-7h7zf"] Dec 10 15:43:22 crc kubenswrapper[4755]: I1210 15:43:22.455345 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6ffb94d8ff-7h7zf"] Dec 10 15:43:23 crc kubenswrapper[4755]: I1210 15:43:23.107690 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-9gscg" event={"ID":"b52de9e0-e981-4d39-addb-6c732611ea50","Type":"ContainerStarted","Data":"8e8277ff2cd2c02b7e758c2788d5e790e80a676a4daf40aff4669c6c26ffa148"} Dec 10 15:43:23 crc kubenswrapper[4755]: I1210 15:43:23.108913 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cf78879c9-9gscg" Dec 10 15:43:23 crc kubenswrapper[4755]: I1210 15:43:23.139422 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cf78879c9-9gscg" podStartSLOduration=4.139399245 podStartE2EDuration="4.139399245s" podCreationTimestamp="2025-12-10 15:43:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:43:23.124316596 +0000 UTC m=+1199.725200258" watchObservedRunningTime="2025-12-10 15:43:23.139399245 +0000 UTC m=+1199.740282877" Dec 10 15:43:23 crc kubenswrapper[4755]: I1210 15:43:23.833504 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0672e1d3-4c3f-405c-8ffc-362fb524209d" path="/var/lib/kubelet/pods/0672e1d3-4c3f-405c-8ffc-362fb524209d/volumes" Dec 10 15:43:24 crc kubenswrapper[4755]: I1210 15:43:24.125928 4755 generic.go:334] "Generic (PLEG): container finished" podID="62ecc4bf-c914-4fef-9ba0-099388953d74" containerID="f6f038b2833539a2083146e7c928b3f14dfe295137a5074a381347edec0d9d9c" exitCode=0 Dec 10 15:43:24 crc kubenswrapper[4755]: I1210 15:43:24.126454 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jxg9d" event={"ID":"62ecc4bf-c914-4fef-9ba0-099388953d74","Type":"ContainerDied","Data":"f6f038b2833539a2083146e7c928b3f14dfe295137a5074a381347edec0d9d9c"} Dec 10 15:43:26 crc kubenswrapper[4755]: I1210 15:43:26.398351 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-jxg9d" Dec 10 15:43:26 crc kubenswrapper[4755]: I1210 15:43:26.450421 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xg6s8\" (UniqueName: \"kubernetes.io/projected/62ecc4bf-c914-4fef-9ba0-099388953d74-kube-api-access-xg6s8\") pod \"62ecc4bf-c914-4fef-9ba0-099388953d74\" (UID: \"62ecc4bf-c914-4fef-9ba0-099388953d74\") " Dec 10 15:43:26 crc kubenswrapper[4755]: I1210 15:43:26.450540 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/62ecc4bf-c914-4fef-9ba0-099388953d74-fernet-keys\") pod \"62ecc4bf-c914-4fef-9ba0-099388953d74\" (UID: \"62ecc4bf-c914-4fef-9ba0-099388953d74\") " Dec 10 15:43:26 crc kubenswrapper[4755]: I1210 15:43:26.450615 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62ecc4bf-c914-4fef-9ba0-099388953d74-combined-ca-bundle\") pod \"62ecc4bf-c914-4fef-9ba0-099388953d74\" (UID: \"62ecc4bf-c914-4fef-9ba0-099388953d74\") " Dec 10 15:43:26 crc kubenswrapper[4755]: I1210 15:43:26.450662 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62ecc4bf-c914-4fef-9ba0-099388953d74-config-data\") pod \"62ecc4bf-c914-4fef-9ba0-099388953d74\" (UID: \"62ecc4bf-c914-4fef-9ba0-099388953d74\") " Dec 10 15:43:26 crc kubenswrapper[4755]: I1210 15:43:26.450710 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62ecc4bf-c914-4fef-9ba0-099388953d74-scripts\") pod \"62ecc4bf-c914-4fef-9ba0-099388953d74\" (UID: \"62ecc4bf-c914-4fef-9ba0-099388953d74\") " Dec 10 15:43:26 crc kubenswrapper[4755]: I1210 15:43:26.450760 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/62ecc4bf-c914-4fef-9ba0-099388953d74-credential-keys\") pod \"62ecc4bf-c914-4fef-9ba0-099388953d74\" (UID: \"62ecc4bf-c914-4fef-9ba0-099388953d74\") " Dec 10 15:43:26 crc kubenswrapper[4755]: I1210 15:43:26.458386 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62ecc4bf-c914-4fef-9ba0-099388953d74-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "62ecc4bf-c914-4fef-9ba0-099388953d74" (UID: "62ecc4bf-c914-4fef-9ba0-099388953d74"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:43:26 crc kubenswrapper[4755]: I1210 15:43:26.458639 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62ecc4bf-c914-4fef-9ba0-099388953d74-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "62ecc4bf-c914-4fef-9ba0-099388953d74" (UID: "62ecc4bf-c914-4fef-9ba0-099388953d74"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:43:26 crc kubenswrapper[4755]: I1210 15:43:26.458905 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62ecc4bf-c914-4fef-9ba0-099388953d74-kube-api-access-xg6s8" (OuterVolumeSpecName: "kube-api-access-xg6s8") pod "62ecc4bf-c914-4fef-9ba0-099388953d74" (UID: "62ecc4bf-c914-4fef-9ba0-099388953d74"). InnerVolumeSpecName "kube-api-access-xg6s8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:43:26 crc kubenswrapper[4755]: I1210 15:43:26.462007 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62ecc4bf-c914-4fef-9ba0-099388953d74-scripts" (OuterVolumeSpecName: "scripts") pod "62ecc4bf-c914-4fef-9ba0-099388953d74" (UID: "62ecc4bf-c914-4fef-9ba0-099388953d74"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:43:26 crc kubenswrapper[4755]: I1210 15:43:26.488082 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62ecc4bf-c914-4fef-9ba0-099388953d74-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "62ecc4bf-c914-4fef-9ba0-099388953d74" (UID: "62ecc4bf-c914-4fef-9ba0-099388953d74"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:43:26 crc kubenswrapper[4755]: I1210 15:43:26.497442 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62ecc4bf-c914-4fef-9ba0-099388953d74-config-data" (OuterVolumeSpecName: "config-data") pod "62ecc4bf-c914-4fef-9ba0-099388953d74" (UID: "62ecc4bf-c914-4fef-9ba0-099388953d74"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:43:26 crc kubenswrapper[4755]: I1210 15:43:26.553543 4755 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/62ecc4bf-c914-4fef-9ba0-099388953d74-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 10 15:43:26 crc kubenswrapper[4755]: I1210 15:43:26.553606 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xg6s8\" (UniqueName: \"kubernetes.io/projected/62ecc4bf-c914-4fef-9ba0-099388953d74-kube-api-access-xg6s8\") on node \"crc\" DevicePath \"\"" Dec 10 15:43:26 crc kubenswrapper[4755]: I1210 15:43:26.553619 4755 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/62ecc4bf-c914-4fef-9ba0-099388953d74-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 10 15:43:26 crc kubenswrapper[4755]: I1210 15:43:26.553629 4755 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62ecc4bf-c914-4fef-9ba0-099388953d74-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:43:26 crc kubenswrapper[4755]: I1210 15:43:26.553638 4755 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62ecc4bf-c914-4fef-9ba0-099388953d74-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:43:26 crc kubenswrapper[4755]: I1210 15:43:26.553646 4755 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62ecc4bf-c914-4fef-9ba0-099388953d74-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 15:43:27 crc kubenswrapper[4755]: I1210 15:43:27.157757 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jxg9d" event={"ID":"62ecc4bf-c914-4fef-9ba0-099388953d74","Type":"ContainerDied","Data":"61ec576673b119492ef55ae7157c1045cf64bccd293b94417ad3fc220afdb373"} Dec 10 15:43:27 crc kubenswrapper[4755]: I1210 15:43:27.158127 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61ec576673b119492ef55ae7157c1045cf64bccd293b94417ad3fc220afdb373" Dec 10 15:43:27 crc kubenswrapper[4755]: I1210 15:43:27.157824 4755 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-jxg9d" Dec 10 15:43:27 crc kubenswrapper[4755]: I1210 15:43:27.485582 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-jxg9d"] Dec 10 15:43:27 crc kubenswrapper[4755]: I1210 15:43:27.494192 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-jxg9d"] Dec 10 15:43:27 crc kubenswrapper[4755]: I1210 15:43:27.577590 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-wstp4"] Dec 10 15:43:27 crc kubenswrapper[4755]: E1210 15:43:27.577960 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2d8956f-4220-4044-9620-4dd519d81777" containerName="init" Dec 10 15:43:27 crc kubenswrapper[4755]: I1210 15:43:27.577974 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2d8956f-4220-4044-9620-4dd519d81777" containerName="init" Dec 10 15:43:27 crc kubenswrapper[4755]: E1210 15:43:27.577995 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0672e1d3-4c3f-405c-8ffc-362fb524209d" containerName="dnsmasq-dns" Dec 10 15:43:27 crc kubenswrapper[4755]: I1210 15:43:27.578001 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="0672e1d3-4c3f-405c-8ffc-362fb524209d" containerName="dnsmasq-dns" Dec 10 15:43:27 crc kubenswrapper[4755]: E1210 15:43:27.578018 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0672e1d3-4c3f-405c-8ffc-362fb524209d" containerName="init" Dec 10 15:43:27 crc kubenswrapper[4755]: I1210 15:43:27.578023 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="0672e1d3-4c3f-405c-8ffc-362fb524209d" containerName="init" Dec 10 15:43:27 crc kubenswrapper[4755]: E1210 15:43:27.578038 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62ecc4bf-c914-4fef-9ba0-099388953d74" containerName="keystone-bootstrap" Dec 10 15:43:27 crc kubenswrapper[4755]: I1210 15:43:27.578044 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="62ecc4bf-c914-4fef-9ba0-099388953d74" containerName="keystone-bootstrap" Dec 10 15:43:27 crc kubenswrapper[4755]: I1210 15:43:27.578208 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="62ecc4bf-c914-4fef-9ba0-099388953d74" containerName="keystone-bootstrap" Dec 10 15:43:27 crc kubenswrapper[4755]: I1210 15:43:27.578217 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="0672e1d3-4c3f-405c-8ffc-362fb524209d" containerName="dnsmasq-dns" Dec 10 15:43:27 crc kubenswrapper[4755]: I1210 15:43:27.578227 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2d8956f-4220-4044-9620-4dd519d81777" containerName="init" Dec 10 15:43:27 crc kubenswrapper[4755]: I1210 15:43:27.578910 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-wstp4" Dec 10 15:43:27 crc kubenswrapper[4755]: I1210 15:43:27.580760 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 10 15:43:27 crc kubenswrapper[4755]: I1210 15:43:27.581299 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 10 15:43:27 crc kubenswrapper[4755]: I1210 15:43:27.581338 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 10 15:43:27 crc kubenswrapper[4755]: I1210 15:43:27.581371 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-tp2mw" Dec 10 15:43:27 crc kubenswrapper[4755]: I1210 15:43:27.592992 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-wstp4"] Dec 10 15:43:27 crc kubenswrapper[4755]: I1210 15:43:27.681680 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nn665\" (UniqueName: \"kubernetes.io/projected/0009273d-a6d2-43da-99f9-993f2aba0e3a-kube-api-access-nn665\") pod \"keystone-bootstrap-wstp4\" (UID: \"0009273d-a6d2-43da-99f9-993f2aba0e3a\") " pod="openstack/keystone-bootstrap-wstp4" Dec 10 15:43:27 crc kubenswrapper[4755]: I1210 15:43:27.682013 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0009273d-a6d2-43da-99f9-993f2aba0e3a-config-data\") pod \"keystone-bootstrap-wstp4\" (UID: \"0009273d-a6d2-43da-99f9-993f2aba0e3a\") " pod="openstack/keystone-bootstrap-wstp4" Dec 10 15:43:27 crc kubenswrapper[4755]: I1210 15:43:27.682161 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0009273d-a6d2-43da-99f9-993f2aba0e3a-scripts\") pod \"keystone-bootstrap-wstp4\" (UID: \"0009273d-a6d2-43da-99f9-993f2aba0e3a\") " pod="openstack/keystone-bootstrap-wstp4" Dec 10 15:43:27 crc kubenswrapper[4755]: I1210 15:43:27.682288 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0009273d-a6d2-43da-99f9-993f2aba0e3a-fernet-keys\") pod \"keystone-bootstrap-wstp4\" (UID: \"0009273d-a6d2-43da-99f9-993f2aba0e3a\") " pod="openstack/keystone-bootstrap-wstp4" Dec 10 15:43:27 crc kubenswrapper[4755]: I1210 15:43:27.682479 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0009273d-a6d2-43da-99f9-993f2aba0e3a-credential-keys\") pod \"keystone-bootstrap-wstp4\" (UID: \"0009273d-a6d2-43da-99f9-993f2aba0e3a\") " pod="openstack/keystone-bootstrap-wstp4" Dec 10 15:43:27 crc kubenswrapper[4755]: I1210 15:43:27.682677 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0009273d-a6d2-43da-99f9-993f2aba0e3a-combined-ca-bundle\") pod \"keystone-bootstrap-wstp4\" (UID: \"0009273d-a6d2-43da-99f9-993f2aba0e3a\") " pod="openstack/keystone-bootstrap-wstp4" Dec 10 15:43:27 crc kubenswrapper[4755]: I1210 15:43:27.783984 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0009273d-a6d2-43da-99f9-993f2aba0e3a-combined-ca-bundle\") pod 
\"keystone-bootstrap-wstp4\" (UID: \"0009273d-a6d2-43da-99f9-993f2aba0e3a\") " pod="openstack/keystone-bootstrap-wstp4" Dec 10 15:43:27 crc kubenswrapper[4755]: I1210 15:43:27.784070 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nn665\" (UniqueName: \"kubernetes.io/projected/0009273d-a6d2-43da-99f9-993f2aba0e3a-kube-api-access-nn665\") pod \"keystone-bootstrap-wstp4\" (UID: \"0009273d-a6d2-43da-99f9-993f2aba0e3a\") " pod="openstack/keystone-bootstrap-wstp4" Dec 10 15:43:27 crc kubenswrapper[4755]: I1210 15:43:27.784109 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0009273d-a6d2-43da-99f9-993f2aba0e3a-config-data\") pod \"keystone-bootstrap-wstp4\" (UID: \"0009273d-a6d2-43da-99f9-993f2aba0e3a\") " pod="openstack/keystone-bootstrap-wstp4" Dec 10 15:43:27 crc kubenswrapper[4755]: I1210 15:43:27.784132 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0009273d-a6d2-43da-99f9-993f2aba0e3a-scripts\") pod \"keystone-bootstrap-wstp4\" (UID: \"0009273d-a6d2-43da-99f9-993f2aba0e3a\") " pod="openstack/keystone-bootstrap-wstp4" Dec 10 15:43:27 crc kubenswrapper[4755]: I1210 15:43:27.784149 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0009273d-a6d2-43da-99f9-993f2aba0e3a-fernet-keys\") pod \"keystone-bootstrap-wstp4\" (UID: \"0009273d-a6d2-43da-99f9-993f2aba0e3a\") " pod="openstack/keystone-bootstrap-wstp4" Dec 10 15:43:27 crc kubenswrapper[4755]: I1210 15:43:27.784200 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0009273d-a6d2-43da-99f9-993f2aba0e3a-credential-keys\") pod \"keystone-bootstrap-wstp4\" (UID: \"0009273d-a6d2-43da-99f9-993f2aba0e3a\") " pod="openstack/keystone-bootstrap-wstp4" Dec 10 15:43:27 crc kubenswrapper[4755]: I1210 15:43:27.815500 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62ecc4bf-c914-4fef-9ba0-099388953d74" path="/var/lib/kubelet/pods/62ecc4bf-c914-4fef-9ba0-099388953d74/volumes" Dec 10 15:43:27 crc kubenswrapper[4755]: I1210 15:43:27.821943 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0009273d-a6d2-43da-99f9-993f2aba0e3a-scripts\") pod \"keystone-bootstrap-wstp4\" (UID: \"0009273d-a6d2-43da-99f9-993f2aba0e3a\") " pod="openstack/keystone-bootstrap-wstp4" Dec 10 15:43:27 crc kubenswrapper[4755]: I1210 15:43:27.822334 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0009273d-a6d2-43da-99f9-993f2aba0e3a-fernet-keys\") pod \"keystone-bootstrap-wstp4\" (UID: \"0009273d-a6d2-43da-99f9-993f2aba0e3a\") " pod="openstack/keystone-bootstrap-wstp4" Dec 10 15:43:27 crc kubenswrapper[4755]: I1210 15:43:27.824179 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0009273d-a6d2-43da-99f9-993f2aba0e3a-combined-ca-bundle\") pod \"keystone-bootstrap-wstp4\" (UID: \"0009273d-a6d2-43da-99f9-993f2aba0e3a\") " pod="openstack/keystone-bootstrap-wstp4" Dec 10 15:43:27 crc kubenswrapper[4755]: I1210 15:43:27.824284 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/0009273d-a6d2-43da-99f9-993f2aba0e3a-credential-keys\") pod \"keystone-bootstrap-wstp4\" (UID: \"0009273d-a6d2-43da-99f9-993f2aba0e3a\") " pod="openstack/keystone-bootstrap-wstp4" Dec 10 15:43:27 crc kubenswrapper[4755]: I1210 15:43:27.824498 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nn665\" (UniqueName: \"kubernetes.io/projected/0009273d-a6d2-43da-99f9-993f2aba0e3a-kube-api-access-nn665\") pod \"keystone-bootstrap-wstp4\" (UID: \"0009273d-a6d2-43da-99f9-993f2aba0e3a\") " pod="openstack/keystone-bootstrap-wstp4" Dec 10 15:43:27 crc kubenswrapper[4755]: I1210 15:43:27.828328 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0009273d-a6d2-43da-99f9-993f2aba0e3a-config-data\") pod \"keystone-bootstrap-wstp4\" (UID: \"0009273d-a6d2-43da-99f9-993f2aba0e3a\") " pod="openstack/keystone-bootstrap-wstp4" Dec 10 15:43:28 crc kubenswrapper[4755]: I1210 15:43:28.116079 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-wstp4" Dec 10 15:43:29 crc kubenswrapper[4755]: I1210 15:43:29.683636 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cf78879c9-9gscg" Dec 10 15:43:29 crc kubenswrapper[4755]: I1210 15:43:29.741338 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-hpccw"] Dec 10 15:43:29 crc kubenswrapper[4755]: I1210 15:43:29.741624 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-hpccw" podUID="ef27f966-f4d9-4959-bb7e-4d8422fbb1dd" containerName="dnsmasq-dns" containerID="cri-o://aa0a2cb370ce1a30e6c19ae5c02d01baa9f926faa9f215f37831d4152d9fc2e0" gracePeriod=10 Dec 10 15:43:30 crc kubenswrapper[4755]: I1210 15:43:30.186717 4755 generic.go:334] "Generic (PLEG): container finished" podID="ef27f966-f4d9-4959-bb7e-4d8422fbb1dd" containerID="aa0a2cb370ce1a30e6c19ae5c02d01baa9f926faa9f215f37831d4152d9fc2e0" exitCode=0 Dec 10 15:43:30 crc kubenswrapper[4755]: I1210 15:43:30.186781 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-hpccw" event={"ID":"ef27f966-f4d9-4959-bb7e-4d8422fbb1dd","Type":"ContainerDied","Data":"aa0a2cb370ce1a30e6c19ae5c02d01baa9f926faa9f215f37831d4152d9fc2e0"} Dec 10 15:43:31 crc kubenswrapper[4755]: I1210 15:43:31.198160 4755 generic.go:334] "Generic (PLEG): container finished" podID="5d949f1d-5cb7-49e0-aa0b-52a615dfe4b5" containerID="bb7e8108b253bf2aa90140899f0db1509d53c3030162ec47375974cdca5a60c2" exitCode=0 Dec 10 15:43:31 crc kubenswrapper[4755]: I1210 15:43:31.198206 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-pth9b" event={"ID":"5d949f1d-5cb7-49e0-aa0b-52a615dfe4b5","Type":"ContainerDied","Data":"bb7e8108b253bf2aa90140899f0db1509d53c3030162ec47375974cdca5a60c2"} Dec 10 15:43:32 crc kubenswrapper[4755]: I1210 15:43:32.792505 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-hpccw" podUID="ef27f966-f4d9-4959-bb7e-4d8422fbb1dd" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.131:5353: connect: connection refused" Dec 10 15:43:37 crc kubenswrapper[4755]: I1210 15:43:37.792660 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-hpccw" podUID="ef27f966-f4d9-4959-bb7e-4d8422fbb1dd" containerName="dnsmasq-dns" probeResult="failure" 
output="dial tcp 10.217.0.131:5353: connect: connection refused" Dec 10 15:43:37 crc kubenswrapper[4755]: E1210 15:43:37.964653 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Dec 10 15:43:37 crc kubenswrapper[4755]: E1210 15:43:37.964803 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6t25v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-6fnmv_openstack(3821a978-80ec-4434-a871-ed026186f498): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 15:43:37 crc kubenswrapper[4755]: E1210 15:43:37.966649 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-6fnmv" podUID="3821a978-80ec-4434-a871-ed026186f498" Dec 10 15:43:38 crc kubenswrapper[4755]: E1210 15:43:38.267592 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-6fnmv" podUID="3821a978-80ec-4434-a871-ed026186f498" Dec 10 15:43:42 crc kubenswrapper[4755]: I1210 15:43:42.792060 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-hpccw" podUID="ef27f966-f4d9-4959-bb7e-4d8422fbb1dd" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.131:5353: connect: connection refused" Dec 10 15:43:42 crc kubenswrapper[4755]: I1210 15:43:42.792747 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-hpccw" Dec 10 
15:43:43 crc kubenswrapper[4755]: E1210 15:43:43.025648 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Dec 10 15:43:43 crc kubenswrapper[4755]: E1210 15:43:43.026572 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n685h565hddhd7hd8h689hf5h66bh599h5d5h55ch64h65h5b6h66h5dh9h697h689h6dhch68bh66fh76h6hfh575h5f7h584h5c4h87h54cq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sllbg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(0ca4e52f-2a99-42bb-abb3-20a9ee8594b5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 15:43:47 crc kubenswrapper[4755]: I1210 15:43:47.792922 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-hpccw" podUID="ef27f966-f4d9-4959-bb7e-4d8422fbb1dd" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.131:5353: connect: connection refused" Dec 10 15:43:52 crc kubenswrapper[4755]: I1210 15:43:52.761255 4755 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 10 15:43:52 crc kubenswrapper[4755]: I1210 15:43:52.791944 4755 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/dnsmasq-dns-b8fbc5445-hpccw" podUID="ef27f966-f4d9-4959-bb7e-4d8422fbb1dd" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.131:5353: connect: connection refused" Dec 10 15:43:52 crc kubenswrapper[4755]: I1210 15:43:52.908975 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-pth9b" Dec 10 15:43:53 crc kubenswrapper[4755]: I1210 15:43:53.015343 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d949f1d-5cb7-49e0-aa0b-52a615dfe4b5-config-data\") pod \"5d949f1d-5cb7-49e0-aa0b-52a615dfe4b5\" (UID: \"5d949f1d-5cb7-49e0-aa0b-52a615dfe4b5\") " Dec 10 15:43:53 crc kubenswrapper[4755]: I1210 15:43:53.015919 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8q87\" (UniqueName: \"kubernetes.io/projected/5d949f1d-5cb7-49e0-aa0b-52a615dfe4b5-kube-api-access-k8q87\") pod \"5d949f1d-5cb7-49e0-aa0b-52a615dfe4b5\" (UID: \"5d949f1d-5cb7-49e0-aa0b-52a615dfe4b5\") " Dec 10 15:43:53 crc kubenswrapper[4755]: I1210 15:43:53.016017 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d949f1d-5cb7-49e0-aa0b-52a615dfe4b5-combined-ca-bundle\") pod \"5d949f1d-5cb7-49e0-aa0b-52a615dfe4b5\" (UID: \"5d949f1d-5cb7-49e0-aa0b-52a615dfe4b5\") " Dec 10 15:43:53 crc kubenswrapper[4755]: I1210 15:43:53.016140 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5d949f1d-5cb7-49e0-aa0b-52a615dfe4b5-db-sync-config-data\") pod \"5d949f1d-5cb7-49e0-aa0b-52a615dfe4b5\" (UID: \"5d949f1d-5cb7-49e0-aa0b-52a615dfe4b5\") " Dec 10 15:43:53 crc kubenswrapper[4755]: I1210 15:43:53.024817 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d949f1d-5cb7-49e0-aa0b-52a615dfe4b5-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "5d949f1d-5cb7-49e0-aa0b-52a615dfe4b5" (UID: "5d949f1d-5cb7-49e0-aa0b-52a615dfe4b5"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:43:53 crc kubenswrapper[4755]: I1210 15:43:53.026635 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d949f1d-5cb7-49e0-aa0b-52a615dfe4b5-kube-api-access-k8q87" (OuterVolumeSpecName: "kube-api-access-k8q87") pod "5d949f1d-5cb7-49e0-aa0b-52a615dfe4b5" (UID: "5d949f1d-5cb7-49e0-aa0b-52a615dfe4b5"). InnerVolumeSpecName "kube-api-access-k8q87". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:43:53 crc kubenswrapper[4755]: I1210 15:43:53.050408 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d949f1d-5cb7-49e0-aa0b-52a615dfe4b5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5d949f1d-5cb7-49e0-aa0b-52a615dfe4b5" (UID: "5d949f1d-5cb7-49e0-aa0b-52a615dfe4b5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:43:53 crc kubenswrapper[4755]: I1210 15:43:53.076649 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d949f1d-5cb7-49e0-aa0b-52a615dfe4b5-config-data" (OuterVolumeSpecName: "config-data") pod "5d949f1d-5cb7-49e0-aa0b-52a615dfe4b5" (UID: "5d949f1d-5cb7-49e0-aa0b-52a615dfe4b5"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:43:53 crc kubenswrapper[4755]: I1210 15:43:53.119597 4755 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d949f1d-5cb7-49e0-aa0b-52a615dfe4b5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:43:53 crc kubenswrapper[4755]: I1210 15:43:53.119630 4755 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5d949f1d-5cb7-49e0-aa0b-52a615dfe4b5-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:43:53 crc kubenswrapper[4755]: I1210 15:43:53.119639 4755 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d949f1d-5cb7-49e0-aa0b-52a615dfe4b5-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:43:53 crc kubenswrapper[4755]: I1210 15:43:53.119649 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8q87\" (UniqueName: \"kubernetes.io/projected/5d949f1d-5cb7-49e0-aa0b-52a615dfe4b5-kube-api-access-k8q87\") on node \"crc\" DevicePath \"\"" Dec 10 15:43:53 crc kubenswrapper[4755]: I1210 15:43:53.414558 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-pth9b" event={"ID":"5d949f1d-5cb7-49e0-aa0b-52a615dfe4b5","Type":"ContainerDied","Data":"e0422f0aece4d26e8cbeb6db3ee7d4ec40360a305d934f5a72a67474784dfd1d"} Dec 10 15:43:53 crc kubenswrapper[4755]: I1210 15:43:53.414860 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0422f0aece4d26e8cbeb6db3ee7d4ec40360a305d934f5a72a67474784dfd1d" Dec 10 15:43:53 crc kubenswrapper[4755]: I1210 15:43:53.414614 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-pth9b" Dec 10 15:43:54 crc kubenswrapper[4755]: I1210 15:43:54.313035 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-st762"] Dec 10 15:43:54 crc kubenswrapper[4755]: E1210 15:43:54.315629 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d949f1d-5cb7-49e0-aa0b-52a615dfe4b5" containerName="glance-db-sync" Dec 10 15:43:54 crc kubenswrapper[4755]: I1210 15:43:54.315835 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d949f1d-5cb7-49e0-aa0b-52a615dfe4b5" containerName="glance-db-sync" Dec 10 15:43:54 crc kubenswrapper[4755]: I1210 15:43:54.316245 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d949f1d-5cb7-49e0-aa0b-52a615dfe4b5" containerName="glance-db-sync" Dec 10 15:43:54 crc kubenswrapper[4755]: I1210 15:43:54.317767 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-st762" Dec 10 15:43:54 crc kubenswrapper[4755]: I1210 15:43:54.328890 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-st762"] Dec 10 15:43:54 crc kubenswrapper[4755]: I1210 15:43:54.348190 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de75203a-0006-4039-9ed4-505cbe5852d2-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-st762\" (UID: \"de75203a-0006-4039-9ed4-505cbe5852d2\") " pod="openstack/dnsmasq-dns-56df8fb6b7-st762" Dec 10 15:43:54 crc kubenswrapper[4755]: I1210 15:43:54.348257 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/de75203a-0006-4039-9ed4-505cbe5852d2-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-st762\" (UID: \"de75203a-0006-4039-9ed4-505cbe5852d2\") " pod="openstack/dnsmasq-dns-56df8fb6b7-st762" Dec 10 15:43:54 crc kubenswrapper[4755]: I1210 15:43:54.348303 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de75203a-0006-4039-9ed4-505cbe5852d2-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-st762\" (UID: \"de75203a-0006-4039-9ed4-505cbe5852d2\") " pod="openstack/dnsmasq-dns-56df8fb6b7-st762" Dec 10 15:43:54 crc kubenswrapper[4755]: I1210 15:43:54.348349 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmd4q\" (UniqueName: \"kubernetes.io/projected/de75203a-0006-4039-9ed4-505cbe5852d2-kube-api-access-pmd4q\") pod \"dnsmasq-dns-56df8fb6b7-st762\" (UID: \"de75203a-0006-4039-9ed4-505cbe5852d2\") " pod="openstack/dnsmasq-dns-56df8fb6b7-st762" Dec 10 15:43:54 crc kubenswrapper[4755]: I1210 15:43:54.348372 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de75203a-0006-4039-9ed4-505cbe5852d2-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-st762\" (UID: \"de75203a-0006-4039-9ed4-505cbe5852d2\") " pod="openstack/dnsmasq-dns-56df8fb6b7-st762" Dec 10 15:43:54 crc kubenswrapper[4755]: I1210 15:43:54.348415 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de75203a-0006-4039-9ed4-505cbe5852d2-config\") pod \"dnsmasq-dns-56df8fb6b7-st762\" (UID: \"de75203a-0006-4039-9ed4-505cbe5852d2\") " pod="openstack/dnsmasq-dns-56df8fb6b7-st762" Dec 10 15:43:54 crc kubenswrapper[4755]: I1210 15:43:54.450041 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de75203a-0006-4039-9ed4-505cbe5852d2-config\") pod \"dnsmasq-dns-56df8fb6b7-st762\" (UID: \"de75203a-0006-4039-9ed4-505cbe5852d2\") " pod="openstack/dnsmasq-dns-56df8fb6b7-st762" Dec 10 15:43:54 crc kubenswrapper[4755]: I1210 15:43:54.450179 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de75203a-0006-4039-9ed4-505cbe5852d2-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-st762\" (UID: \"de75203a-0006-4039-9ed4-505cbe5852d2\") " pod="openstack/dnsmasq-dns-56df8fb6b7-st762" Dec 10 15:43:54 crc kubenswrapper[4755]: I1210 15:43:54.450219 4755 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/de75203a-0006-4039-9ed4-505cbe5852d2-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-st762\" (UID: \"de75203a-0006-4039-9ed4-505cbe5852d2\") " pod="openstack/dnsmasq-dns-56df8fb6b7-st762" Dec 10 15:43:54 crc kubenswrapper[4755]: I1210 15:43:54.450263 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de75203a-0006-4039-9ed4-505cbe5852d2-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-st762\" (UID: \"de75203a-0006-4039-9ed4-505cbe5852d2\") " pod="openstack/dnsmasq-dns-56df8fb6b7-st762" Dec 10 15:43:54 crc kubenswrapper[4755]: I1210 15:43:54.450294 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmd4q\" (UniqueName: \"kubernetes.io/projected/de75203a-0006-4039-9ed4-505cbe5852d2-kube-api-access-pmd4q\") pod \"dnsmasq-dns-56df8fb6b7-st762\" (UID: \"de75203a-0006-4039-9ed4-505cbe5852d2\") " pod="openstack/dnsmasq-dns-56df8fb6b7-st762" Dec 10 15:43:54 crc kubenswrapper[4755]: I1210 15:43:54.450318 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de75203a-0006-4039-9ed4-505cbe5852d2-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-st762\" (UID: \"de75203a-0006-4039-9ed4-505cbe5852d2\") " pod="openstack/dnsmasq-dns-56df8fb6b7-st762" Dec 10 15:43:54 crc kubenswrapper[4755]: I1210 15:43:54.451184 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de75203a-0006-4039-9ed4-505cbe5852d2-config\") pod \"dnsmasq-dns-56df8fb6b7-st762\" (UID: \"de75203a-0006-4039-9ed4-505cbe5852d2\") " pod="openstack/dnsmasq-dns-56df8fb6b7-st762" Dec 10 15:43:54 crc kubenswrapper[4755]: I1210 15:43:54.451268 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de75203a-0006-4039-9ed4-505cbe5852d2-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-st762\" (UID: \"de75203a-0006-4039-9ed4-505cbe5852d2\") " pod="openstack/dnsmasq-dns-56df8fb6b7-st762" Dec 10 15:43:54 crc kubenswrapper[4755]: I1210 15:43:54.451459 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/de75203a-0006-4039-9ed4-505cbe5852d2-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-st762\" (UID: \"de75203a-0006-4039-9ed4-505cbe5852d2\") " pod="openstack/dnsmasq-dns-56df8fb6b7-st762" Dec 10 15:43:54 crc kubenswrapper[4755]: I1210 15:43:54.451704 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de75203a-0006-4039-9ed4-505cbe5852d2-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-st762\" (UID: \"de75203a-0006-4039-9ed4-505cbe5852d2\") " pod="openstack/dnsmasq-dns-56df8fb6b7-st762" Dec 10 15:43:54 crc kubenswrapper[4755]: I1210 15:43:54.452080 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de75203a-0006-4039-9ed4-505cbe5852d2-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-st762\" (UID: \"de75203a-0006-4039-9ed4-505cbe5852d2\") " pod="openstack/dnsmasq-dns-56df8fb6b7-st762" Dec 10 15:43:54 crc kubenswrapper[4755]: I1210 15:43:54.474522 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmd4q\" (UniqueName: 
\"kubernetes.io/projected/de75203a-0006-4039-9ed4-505cbe5852d2-kube-api-access-pmd4q\") pod \"dnsmasq-dns-56df8fb6b7-st762\" (UID: \"de75203a-0006-4039-9ed4-505cbe5852d2\") " pod="openstack/dnsmasq-dns-56df8fb6b7-st762" Dec 10 15:43:54 crc kubenswrapper[4755]: I1210 15:43:54.668264 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-st762" Dec 10 15:43:55 crc kubenswrapper[4755]: E1210 15:43:55.111509 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Dec 10 15:43:55 crc kubenswrapper[4755]: E1210 15:43:55.111971 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ff59l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-cwjsz_openstack(9b9ab1e5-2daa-4057-84e3-50bef68bbaca): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 15:43:55 crc kubenswrapper[4755]: E1210 15:43:55.113171 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with 
ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-cwjsz" podUID="9b9ab1e5-2daa-4057-84e3-50bef68bbaca" Dec 10 15:43:55 crc kubenswrapper[4755]: I1210 15:43:55.125829 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-hpccw" Dec 10 15:43:55 crc kubenswrapper[4755]: I1210 15:43:55.224115 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 10 15:43:55 crc kubenswrapper[4755]: E1210 15:43:55.224493 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef27f966-f4d9-4959-bb7e-4d8422fbb1dd" containerName="dnsmasq-dns" Dec 10 15:43:55 crc kubenswrapper[4755]: I1210 15:43:55.224508 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef27f966-f4d9-4959-bb7e-4d8422fbb1dd" containerName="dnsmasq-dns" Dec 10 15:43:55 crc kubenswrapper[4755]: E1210 15:43:55.224520 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef27f966-f4d9-4959-bb7e-4d8422fbb1dd" containerName="init" Dec 10 15:43:55 crc kubenswrapper[4755]: I1210 15:43:55.224526 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef27f966-f4d9-4959-bb7e-4d8422fbb1dd" containerName="init" Dec 10 15:43:55 crc kubenswrapper[4755]: I1210 15:43:55.224695 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef27f966-f4d9-4959-bb7e-4d8422fbb1dd" containerName="dnsmasq-dns" Dec 10 15:43:55 crc kubenswrapper[4755]: I1210 15:43:55.225648 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 10 15:43:55 crc kubenswrapper[4755]: I1210 15:43:55.227636 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 10 15:43:55 crc kubenswrapper[4755]: I1210 15:43:55.232812 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-dm8gw" Dec 10 15:43:55 crc kubenswrapper[4755]: I1210 15:43:55.237362 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Dec 10 15:43:55 crc kubenswrapper[4755]: I1210 15:43:55.259475 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 10 15:43:55 crc kubenswrapper[4755]: I1210 15:43:55.316913 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef27f966-f4d9-4959-bb7e-4d8422fbb1dd-ovsdbserver-sb\") pod \"ef27f966-f4d9-4959-bb7e-4d8422fbb1dd\" (UID: \"ef27f966-f4d9-4959-bb7e-4d8422fbb1dd\") " Dec 10 15:43:55 crc kubenswrapper[4755]: I1210 15:43:55.317021 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef27f966-f4d9-4959-bb7e-4d8422fbb1dd-ovsdbserver-nb\") pod \"ef27f966-f4d9-4959-bb7e-4d8422fbb1dd\" (UID: \"ef27f966-f4d9-4959-bb7e-4d8422fbb1dd\") " Dec 10 15:43:55 crc kubenswrapper[4755]: I1210 15:43:55.317053 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t49zj\" (UniqueName: \"kubernetes.io/projected/ef27f966-f4d9-4959-bb7e-4d8422fbb1dd-kube-api-access-t49zj\") pod \"ef27f966-f4d9-4959-bb7e-4d8422fbb1dd\" (UID: \"ef27f966-f4d9-4959-bb7e-4d8422fbb1dd\") " Dec 10 15:43:55 crc kubenswrapper[4755]: I1210 15:43:55.317118 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef27f966-f4d9-4959-bb7e-4d8422fbb1dd-config\") pod \"ef27f966-f4d9-4959-bb7e-4d8422fbb1dd\" (UID: \"ef27f966-f4d9-4959-bb7e-4d8422fbb1dd\") " Dec 10 15:43:55 crc kubenswrapper[4755]: I1210 15:43:55.317132 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef27f966-f4d9-4959-bb7e-4d8422fbb1dd-dns-svc\") pod \"ef27f966-f4d9-4959-bb7e-4d8422fbb1dd\" (UID: \"ef27f966-f4d9-4959-bb7e-4d8422fbb1dd\") " Dec 10 15:43:55 crc kubenswrapper[4755]: I1210 15:43:55.331923 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef27f966-f4d9-4959-bb7e-4d8422fbb1dd-kube-api-access-t49zj" (OuterVolumeSpecName: "kube-api-access-t49zj") pod "ef27f966-f4d9-4959-bb7e-4d8422fbb1dd" (UID: "ef27f966-f4d9-4959-bb7e-4d8422fbb1dd"). InnerVolumeSpecName "kube-api-access-t49zj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:43:55 crc kubenswrapper[4755]: I1210 15:43:55.418813 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-fad4194d-6d90-49f1-a017-ae4167f764c9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fad4194d-6d90-49f1-a017-ae4167f764c9\") pod \"glance-default-external-api-0\" (UID: \"d79c58e8-f9d1-4fde-945d-a6e175ec8fee\") " pod="openstack/glance-default-external-api-0" Dec 10 15:43:55 crc kubenswrapper[4755]: I1210 15:43:55.419432 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d79c58e8-f9d1-4fde-945d-a6e175ec8fee-logs\") pod \"glance-default-external-api-0\" (UID: \"d79c58e8-f9d1-4fde-945d-a6e175ec8fee\") " pod="openstack/glance-default-external-api-0" Dec 10 15:43:55 crc kubenswrapper[4755]: I1210 15:43:55.419616 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d79c58e8-f9d1-4fde-945d-a6e175ec8fee-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d79c58e8-f9d1-4fde-945d-a6e175ec8fee\") " pod="openstack/glance-default-external-api-0" Dec 10 15:43:55 crc kubenswrapper[4755]: I1210 15:43:55.419733 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnw8q\" (UniqueName: \"kubernetes.io/projected/d79c58e8-f9d1-4fde-945d-a6e175ec8fee-kube-api-access-tnw8q\") pod \"glance-default-external-api-0\" (UID: \"d79c58e8-f9d1-4fde-945d-a6e175ec8fee\") " pod="openstack/glance-default-external-api-0" Dec 10 15:43:55 crc kubenswrapper[4755]: I1210 15:43:55.419844 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d79c58e8-f9d1-4fde-945d-a6e175ec8fee-scripts\") pod \"glance-default-external-api-0\" (UID: \"d79c58e8-f9d1-4fde-945d-a6e175ec8fee\") " pod="openstack/glance-default-external-api-0" Dec 10 15:43:55 crc kubenswrapper[4755]: I1210 15:43:55.419934 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d79c58e8-f9d1-4fde-945d-a6e175ec8fee-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d79c58e8-f9d1-4fde-945d-a6e175ec8fee\") " pod="openstack/glance-default-external-api-0" Dec 10 15:43:55 crc kubenswrapper[4755]: I1210 15:43:55.420585 
4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d79c58e8-f9d1-4fde-945d-a6e175ec8fee-config-data\") pod \"glance-default-external-api-0\" (UID: \"d79c58e8-f9d1-4fde-945d-a6e175ec8fee\") " pod="openstack/glance-default-external-api-0" Dec 10 15:43:55 crc kubenswrapper[4755]: I1210 15:43:55.420719 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t49zj\" (UniqueName: \"kubernetes.io/projected/ef27f966-f4d9-4959-bb7e-4d8422fbb1dd-kube-api-access-t49zj\") on node \"crc\" DevicePath \"\"" Dec 10 15:43:55 crc kubenswrapper[4755]: I1210 15:43:55.454215 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef27f966-f4d9-4959-bb7e-4d8422fbb1dd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ef27f966-f4d9-4959-bb7e-4d8422fbb1dd" (UID: "ef27f966-f4d9-4959-bb7e-4d8422fbb1dd"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:43:55 crc kubenswrapper[4755]: I1210 15:43:55.457001 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-hpccw" Dec 10 15:43:55 crc kubenswrapper[4755]: I1210 15:43:55.457559 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-hpccw" event={"ID":"ef27f966-f4d9-4959-bb7e-4d8422fbb1dd","Type":"ContainerDied","Data":"2035681ab4645f31ea731dd886a0fcaa5236cb6bfa6944dcac646dea4dfdd351"} Dec 10 15:43:55 crc kubenswrapper[4755]: I1210 15:43:55.457654 4755 scope.go:117] "RemoveContainer" containerID="aa0a2cb370ce1a30e6c19ae5c02d01baa9f926faa9f215f37831d4152d9fc2e0" Dec 10 15:43:55 crc kubenswrapper[4755]: E1210 15:43:55.460378 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-cwjsz" podUID="9b9ab1e5-2daa-4057-84e3-50bef68bbaca" Dec 10 15:43:55 crc kubenswrapper[4755]: I1210 15:43:55.471961 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef27f966-f4d9-4959-bb7e-4d8422fbb1dd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ef27f966-f4d9-4959-bb7e-4d8422fbb1dd" (UID: "ef27f966-f4d9-4959-bb7e-4d8422fbb1dd"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:43:55 crc kubenswrapper[4755]: I1210 15:43:55.479073 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef27f966-f4d9-4959-bb7e-4d8422fbb1dd-config" (OuterVolumeSpecName: "config") pod "ef27f966-f4d9-4959-bb7e-4d8422fbb1dd" (UID: "ef27f966-f4d9-4959-bb7e-4d8422fbb1dd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:43:55 crc kubenswrapper[4755]: I1210 15:43:55.494822 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef27f966-f4d9-4959-bb7e-4d8422fbb1dd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ef27f966-f4d9-4959-bb7e-4d8422fbb1dd" (UID: "ef27f966-f4d9-4959-bb7e-4d8422fbb1dd"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:43:55 crc kubenswrapper[4755]: I1210 15:43:55.526661 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d79c58e8-f9d1-4fde-945d-a6e175ec8fee-scripts\") pod \"glance-default-external-api-0\" (UID: \"d79c58e8-f9d1-4fde-945d-a6e175ec8fee\") " pod="openstack/glance-default-external-api-0" Dec 10 15:43:55 crc kubenswrapper[4755]: I1210 15:43:55.526730 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d79c58e8-f9d1-4fde-945d-a6e175ec8fee-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d79c58e8-f9d1-4fde-945d-a6e175ec8fee\") " pod="openstack/glance-default-external-api-0" Dec 10 15:43:55 crc kubenswrapper[4755]: I1210 15:43:55.527036 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d79c58e8-f9d1-4fde-945d-a6e175ec8fee-config-data\") pod \"glance-default-external-api-0\" (UID: \"d79c58e8-f9d1-4fde-945d-a6e175ec8fee\") " pod="openstack/glance-default-external-api-0" Dec 10 15:43:55 crc kubenswrapper[4755]: I1210 15:43:55.527632 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-fad4194d-6d90-49f1-a017-ae4167f764c9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fad4194d-6d90-49f1-a017-ae4167f764c9\") pod \"glance-default-external-api-0\" (UID: \"d79c58e8-f9d1-4fde-945d-a6e175ec8fee\") " pod="openstack/glance-default-external-api-0" Dec 10 15:43:55 crc kubenswrapper[4755]: I1210 15:43:55.527675 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d79c58e8-f9d1-4fde-945d-a6e175ec8fee-logs\") pod \"glance-default-external-api-0\" (UID: \"d79c58e8-f9d1-4fde-945d-a6e175ec8fee\") " pod="openstack/glance-default-external-api-0" Dec 10 15:43:55 crc kubenswrapper[4755]: I1210 15:43:55.527699 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d79c58e8-f9d1-4fde-945d-a6e175ec8fee-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d79c58e8-f9d1-4fde-945d-a6e175ec8fee\") " pod="openstack/glance-default-external-api-0" Dec 10 15:43:55 crc kubenswrapper[4755]: I1210 15:43:55.527735 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnw8q\" (UniqueName: \"kubernetes.io/projected/d79c58e8-f9d1-4fde-945d-a6e175ec8fee-kube-api-access-tnw8q\") pod \"glance-default-external-api-0\" (UID: \"d79c58e8-f9d1-4fde-945d-a6e175ec8fee\") " pod="openstack/glance-default-external-api-0" Dec 10 15:43:55 crc kubenswrapper[4755]: I1210 15:43:55.528025 4755 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef27f966-f4d9-4959-bb7e-4d8422fbb1dd-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:43:55 crc kubenswrapper[4755]: I1210 15:43:55.528038 4755 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef27f966-f4d9-4959-bb7e-4d8422fbb1dd-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 10 15:43:55 crc kubenswrapper[4755]: I1210 15:43:55.528046 4755 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef27f966-f4d9-4959-bb7e-4d8422fbb1dd-ovsdbserver-sb\") on node \"crc\" 
DevicePath \"\"" Dec 10 15:43:55 crc kubenswrapper[4755]: I1210 15:43:55.528056 4755 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef27f966-f4d9-4959-bb7e-4d8422fbb1dd-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 10 15:43:55 crc kubenswrapper[4755]: I1210 15:43:55.529114 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d79c58e8-f9d1-4fde-945d-a6e175ec8fee-logs\") pod \"glance-default-external-api-0\" (UID: \"d79c58e8-f9d1-4fde-945d-a6e175ec8fee\") " pod="openstack/glance-default-external-api-0" Dec 10 15:43:55 crc kubenswrapper[4755]: I1210 15:43:55.529749 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d79c58e8-f9d1-4fde-945d-a6e175ec8fee-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d79c58e8-f9d1-4fde-945d-a6e175ec8fee\") " pod="openstack/glance-default-external-api-0" Dec 10 15:43:55 crc kubenswrapper[4755]: I1210 15:43:55.532245 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d79c58e8-f9d1-4fde-945d-a6e175ec8fee-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d79c58e8-f9d1-4fde-945d-a6e175ec8fee\") " pod="openstack/glance-default-external-api-0" Dec 10 15:43:55 crc kubenswrapper[4755]: I1210 15:43:55.533857 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d79c58e8-f9d1-4fde-945d-a6e175ec8fee-config-data\") pod \"glance-default-external-api-0\" (UID: \"d79c58e8-f9d1-4fde-945d-a6e175ec8fee\") " pod="openstack/glance-default-external-api-0" Dec 10 15:43:55 crc kubenswrapper[4755]: I1210 15:43:55.535423 4755 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 10 15:43:55 crc kubenswrapper[4755]: I1210 15:43:55.535461 4755 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-fad4194d-6d90-49f1-a017-ae4167f764c9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fad4194d-6d90-49f1-a017-ae4167f764c9\") pod \"glance-default-external-api-0\" (UID: \"d79c58e8-f9d1-4fde-945d-a6e175ec8fee\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2c29aae4dccb9080bd5d2f8d1cce721d31204eaed02b3364c97d3b8bf6504cd5/globalmount\"" pod="openstack/glance-default-external-api-0" Dec 10 15:43:55 crc kubenswrapper[4755]: I1210 15:43:55.537992 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d79c58e8-f9d1-4fde-945d-a6e175ec8fee-scripts\") pod \"glance-default-external-api-0\" (UID: \"d79c58e8-f9d1-4fde-945d-a6e175ec8fee\") " pod="openstack/glance-default-external-api-0" Dec 10 15:43:55 crc kubenswrapper[4755]: I1210 15:43:55.546226 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 10 15:43:55 crc kubenswrapper[4755]: I1210 15:43:55.548223 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 10 15:43:55 crc kubenswrapper[4755]: I1210 15:43:55.550565 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 10 15:43:55 crc kubenswrapper[4755]: I1210 15:43:55.554733 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnw8q\" (UniqueName: \"kubernetes.io/projected/d79c58e8-f9d1-4fde-945d-a6e175ec8fee-kube-api-access-tnw8q\") pod \"glance-default-external-api-0\" (UID: \"d79c58e8-f9d1-4fde-945d-a6e175ec8fee\") " pod="openstack/glance-default-external-api-0" Dec 10 15:43:55 crc kubenswrapper[4755]: I1210 15:43:55.572958 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 10 15:43:55 crc kubenswrapper[4755]: I1210 15:43:55.593315 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-fad4194d-6d90-49f1-a017-ae4167f764c9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fad4194d-6d90-49f1-a017-ae4167f764c9\") pod \"glance-default-external-api-0\" (UID: \"d79c58e8-f9d1-4fde-945d-a6e175ec8fee\") " pod="openstack/glance-default-external-api-0" Dec 10 15:43:55 crc kubenswrapper[4755]: I1210 15:43:55.731590 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbc85971-1a23-47af-bae0-708919198aee-config-data\") pod \"glance-default-internal-api-0\" (UID: \"dbc85971-1a23-47af-bae0-708919198aee\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:43:55 crc kubenswrapper[4755]: I1210 15:43:55.731655 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbc85971-1a23-47af-bae0-708919198aee-scripts\") pod \"glance-default-internal-api-0\" (UID: \"dbc85971-1a23-47af-bae0-708919198aee\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:43:55 crc kubenswrapper[4755]: I1210 15:43:55.731692 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbc85971-1a23-47af-bae0-708919198aee-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"dbc85971-1a23-47af-bae0-708919198aee\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:43:55 crc kubenswrapper[4755]: I1210 15:43:55.731757 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dbc85971-1a23-47af-bae0-708919198aee-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"dbc85971-1a23-47af-bae0-708919198aee\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:43:55 crc kubenswrapper[4755]: I1210 15:43:55.731843 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dbc85971-1a23-47af-bae0-708919198aee-logs\") pod \"glance-default-internal-api-0\" (UID: \"dbc85971-1a23-47af-bae0-708919198aee\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:43:55 crc kubenswrapper[4755]: I1210 15:43:55.731879 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4rx8\" (UniqueName: \"kubernetes.io/projected/dbc85971-1a23-47af-bae0-708919198aee-kube-api-access-s4rx8\") pod 
\"glance-default-internal-api-0\" (UID: \"dbc85971-1a23-47af-bae0-708919198aee\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:43:55 crc kubenswrapper[4755]: I1210 15:43:55.731924 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3c1ecacc-2c1b-4e07-9def-36c303767d2a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3c1ecacc-2c1b-4e07-9def-36c303767d2a\") pod \"glance-default-internal-api-0\" (UID: \"dbc85971-1a23-47af-bae0-708919198aee\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:43:55 crc kubenswrapper[4755]: I1210 15:43:55.802289 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-hpccw"] Dec 10 15:43:55 crc kubenswrapper[4755]: I1210 15:43:55.811443 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-hpccw"] Dec 10 15:43:55 crc kubenswrapper[4755]: I1210 15:43:55.833205 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dbc85971-1a23-47af-bae0-708919198aee-logs\") pod \"glance-default-internal-api-0\" (UID: \"dbc85971-1a23-47af-bae0-708919198aee\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:43:55 crc kubenswrapper[4755]: I1210 15:43:55.833270 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4rx8\" (UniqueName: \"kubernetes.io/projected/dbc85971-1a23-47af-bae0-708919198aee-kube-api-access-s4rx8\") pod \"glance-default-internal-api-0\" (UID: \"dbc85971-1a23-47af-bae0-708919198aee\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:43:55 crc kubenswrapper[4755]: I1210 15:43:55.833318 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3c1ecacc-2c1b-4e07-9def-36c303767d2a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3c1ecacc-2c1b-4e07-9def-36c303767d2a\") pod \"glance-default-internal-api-0\" (UID: \"dbc85971-1a23-47af-bae0-708919198aee\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:43:55 crc kubenswrapper[4755]: I1210 15:43:55.833379 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbc85971-1a23-47af-bae0-708919198aee-config-data\") pod \"glance-default-internal-api-0\" (UID: \"dbc85971-1a23-47af-bae0-708919198aee\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:43:55 crc kubenswrapper[4755]: I1210 15:43:55.833415 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbc85971-1a23-47af-bae0-708919198aee-scripts\") pod \"glance-default-internal-api-0\" (UID: \"dbc85971-1a23-47af-bae0-708919198aee\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:43:55 crc kubenswrapper[4755]: I1210 15:43:55.833445 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbc85971-1a23-47af-bae0-708919198aee-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"dbc85971-1a23-47af-bae0-708919198aee\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:43:55 crc kubenswrapper[4755]: I1210 15:43:55.833527 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dbc85971-1a23-47af-bae0-708919198aee-httpd-run\") pod \"glance-default-internal-api-0\" 
(UID: \"dbc85971-1a23-47af-bae0-708919198aee\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:43:55 crc kubenswrapper[4755]: I1210 15:43:55.833815 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dbc85971-1a23-47af-bae0-708919198aee-logs\") pod \"glance-default-internal-api-0\" (UID: \"dbc85971-1a23-47af-bae0-708919198aee\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:43:55 crc kubenswrapper[4755]: I1210 15:43:55.833917 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dbc85971-1a23-47af-bae0-708919198aee-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"dbc85971-1a23-47af-bae0-708919198aee\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:43:55 crc kubenswrapper[4755]: I1210 15:43:55.838442 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbc85971-1a23-47af-bae0-708919198aee-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"dbc85971-1a23-47af-bae0-708919198aee\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:43:55 crc kubenswrapper[4755]: I1210 15:43:55.840085 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbc85971-1a23-47af-bae0-708919198aee-config-data\") pod \"glance-default-internal-api-0\" (UID: \"dbc85971-1a23-47af-bae0-708919198aee\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:43:55 crc kubenswrapper[4755]: I1210 15:43:55.843892 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbc85971-1a23-47af-bae0-708919198aee-scripts\") pod \"glance-default-internal-api-0\" (UID: \"dbc85971-1a23-47af-bae0-708919198aee\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:43:55 crc kubenswrapper[4755]: I1210 15:43:55.844548 4755 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 10 15:43:55 crc kubenswrapper[4755]: I1210 15:43:55.845226 4755 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3c1ecacc-2c1b-4e07-9def-36c303767d2a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3c1ecacc-2c1b-4e07-9def-36c303767d2a\") pod \"glance-default-internal-api-0\" (UID: \"dbc85971-1a23-47af-bae0-708919198aee\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e7209b79faa29a25e04bc03f8c7f38aa826c0ab7d3d63e0c6698575f30077871/globalmount\"" pod="openstack/glance-default-internal-api-0" Dec 10 15:43:55 crc kubenswrapper[4755]: I1210 15:43:55.848845 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 10 15:43:55 crc kubenswrapper[4755]: I1210 15:43:55.859815 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4rx8\" (UniqueName: \"kubernetes.io/projected/dbc85971-1a23-47af-bae0-708919198aee-kube-api-access-s4rx8\") pod \"glance-default-internal-api-0\" (UID: \"dbc85971-1a23-47af-bae0-708919198aee\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:43:55 crc kubenswrapper[4755]: I1210 15:43:55.887750 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3c1ecacc-2c1b-4e07-9def-36c303767d2a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3c1ecacc-2c1b-4e07-9def-36c303767d2a\") pod \"glance-default-internal-api-0\" (UID: \"dbc85971-1a23-47af-bae0-708919198aee\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:43:56 crc kubenswrapper[4755]: I1210 15:43:56.175327 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 10 15:43:56 crc kubenswrapper[4755]: I1210 15:43:56.853871 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 10 15:43:56 crc kubenswrapper[4755]: I1210 15:43:56.928103 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 10 15:43:57 crc kubenswrapper[4755]: I1210 15:43:57.776137 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef27f966-f4d9-4959-bb7e-4d8422fbb1dd" path="/var/lib/kubelet/pods/ef27f966-f4d9-4959-bb7e-4d8422fbb1dd/volumes" Dec 10 15:43:58 crc kubenswrapper[4755]: I1210 15:43:58.826854 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-wstp4"] Dec 10 15:44:01 crc kubenswrapper[4755]: I1210 15:44:01.357514 4755 scope.go:117] "RemoveContainer" containerID="1a341cef385c275218b0a1b44e553ef0b7278e1a25aa6038cd9423f35211e121" Dec 10 15:44:01 crc kubenswrapper[4755]: I1210 15:44:01.513154 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wstp4" event={"ID":"0009273d-a6d2-43da-99f9-993f2aba0e3a","Type":"ContainerStarted","Data":"99a1791e52ecebec4e10a812e72cbcb997ec7edc2a805e605a9b7503601af8dc"} Dec 10 15:44:01 crc kubenswrapper[4755]: E1210 15:44:01.886798 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Dec 10 15:44:01 crc kubenswrapper[4755]: E1210 15:44:01.886837 4755 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Dec 10 15:44:01 crc kubenswrapper[4755]: E1210 15:44:01.886949 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7hvvl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-jr6l4_openstack(cbc4e627-8238-49b1-a0ac-48d07a29c23a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 15:44:01 crc kubenswrapper[4755]: E1210 15:44:01.888325 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cloudkitty-db-sync-jr6l4" podUID="cbc4e627-8238-49b1-a0ac-48d07a29c23a" Dec 10 15:44:02 crc kubenswrapper[4755]: I1210 15:44:02.404172 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-st762"] Dec 10 15:44:02 crc kubenswrapper[4755]: W1210 15:44:02.450825 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podde75203a_0006_4039_9ed4_505cbe5852d2.slice/crio-8621e32821c25177343bb04aec9fec90126f8a3b6126bc128d2964a92d6405c1 WatchSource:0}: Error finding container 8621e32821c25177343bb04aec9fec90126f8a3b6126bc128d2964a92d6405c1: Status 404 returned error can't find the container with id 8621e32821c25177343bb04aec9fec90126f8a3b6126bc128d2964a92d6405c1 Dec 10 15:44:02 crc kubenswrapper[4755]: I1210 15:44:02.536200 4755 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-st762" event={"ID":"de75203a-0006-4039-9ed4-505cbe5852d2","Type":"ContainerStarted","Data":"8621e32821c25177343bb04aec9fec90126f8a3b6126bc128d2964a92d6405c1"} Dec 10 15:44:02 crc kubenswrapper[4755]: I1210 15:44:02.537804 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-zttv2" event={"ID":"07e5d790-41c9-4f66-87e4-6088fe8bbc8f","Type":"ContainerStarted","Data":"815cb3dcfec50a0281d6b4e6b81c6ac12f56545437ec542718836fab236e026d"} Dec 10 15:44:02 crc kubenswrapper[4755]: I1210 15:44:02.550938 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wstp4" event={"ID":"0009273d-a6d2-43da-99f9-993f2aba0e3a","Type":"ContainerStarted","Data":"998291edebca64a51c3c48de3b26f728fbd5dc52fab841d39b6d5bc065326347"} Dec 10 15:44:02 crc kubenswrapper[4755]: I1210 15:44:02.560538 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-zttv2" podStartSLOduration=11.002513161 podStartE2EDuration="45.560521298s" podCreationTimestamp="2025-12-10 15:43:17 +0000 UTC" firstStartedPulling="2025-12-10 15:43:18.810887002 +0000 UTC m=+1195.411770634" lastFinishedPulling="2025-12-10 15:43:53.368895109 +0000 UTC m=+1229.969778771" observedRunningTime="2025-12-10 15:44:02.557405214 +0000 UTC m=+1239.158288846" watchObservedRunningTime="2025-12-10 15:44:02.560521298 +0000 UTC m=+1239.161404930" Dec 10 15:44:02 crc kubenswrapper[4755]: I1210 15:44:02.562243 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ca4e52f-2a99-42bb-abb3-20a9ee8594b5","Type":"ContainerStarted","Data":"2eb738bec907df392d198953ddb82b679095fed813616a75999ee2ee4466451c"} Dec 10 15:44:02 crc kubenswrapper[4755]: I1210 15:44:02.565095 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-6fnmv" event={"ID":"3821a978-80ec-4434-a871-ed026186f498","Type":"ContainerStarted","Data":"85cfc7d93b2a05f6174a89957198e855ab5e29e641e81a0927b2d05109cf98e8"} Dec 10 15:44:02 crc kubenswrapper[4755]: E1210 15:44:02.565978 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-jr6l4" podUID="cbc4e627-8238-49b1-a0ac-48d07a29c23a" Dec 10 15:44:02 crc kubenswrapper[4755]: I1210 15:44:02.576038 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-wstp4" podStartSLOduration=35.576022668 podStartE2EDuration="35.576022668s" podCreationTimestamp="2025-12-10 15:43:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:44:02.574041915 +0000 UTC m=+1239.174925547" watchObservedRunningTime="2025-12-10 15:44:02.576022668 +0000 UTC m=+1239.176906300" Dec 10 15:44:02 crc kubenswrapper[4755]: I1210 15:44:02.621182 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-6fnmv" podStartSLOduration=2.621461425 podStartE2EDuration="45.621165875s" podCreationTimestamp="2025-12-10 15:43:17 +0000 UTC" firstStartedPulling="2025-12-10 15:43:19.028961081 +0000 UTC m=+1195.629844713" lastFinishedPulling="2025-12-10 15:44:02.028665511 +0000 UTC m=+1238.629549163" observedRunningTime="2025-12-10 15:44:02.614955575 
+0000 UTC m=+1239.215839207" watchObservedRunningTime="2025-12-10 15:44:02.621165875 +0000 UTC m=+1239.222049507" Dec 10 15:44:02 crc kubenswrapper[4755]: I1210 15:44:02.656605 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 10 15:44:02 crc kubenswrapper[4755]: I1210 15:44:02.720600 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 10 15:44:02 crc kubenswrapper[4755]: E1210 15:44:02.895997 4755 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podde75203a_0006_4039_9ed4_505cbe5852d2.slice/crio-conmon-ed481a9c85ba279d104e8abe5efe9f4512782a658d813cc3f59f4b19fca0e58d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podde75203a_0006_4039_9ed4_505cbe5852d2.slice/crio-ed481a9c85ba279d104e8abe5efe9f4512782a658d813cc3f59f4b19fca0e58d.scope\": RecentStats: unable to find data in memory cache]" Dec 10 15:44:03 crc kubenswrapper[4755]: I1210 15:44:03.578733 4755 generic.go:334] "Generic (PLEG): container finished" podID="c7e81227-e01b-4851-ac1f-d4ff480c0993" containerID="b4742b126f04bd73fc89a611417ca7e607aa6268f96f09a20199c1a37ce9f841" exitCode=0 Dec 10 15:44:03 crc kubenswrapper[4755]: I1210 15:44:03.578830 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-9l6v9" event={"ID":"c7e81227-e01b-4851-ac1f-d4ff480c0993","Type":"ContainerDied","Data":"b4742b126f04bd73fc89a611417ca7e607aa6268f96f09a20199c1a37ce9f841"} Dec 10 15:44:03 crc kubenswrapper[4755]: I1210 15:44:03.584699 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"dbc85971-1a23-47af-bae0-708919198aee","Type":"ContainerStarted","Data":"37de2f26e55b5b74c2d1119d9b29aac8c99afc5a6cea8be7bcc1b2f5ac5a5c36"} Dec 10 15:44:03 crc kubenswrapper[4755]: I1210 15:44:03.587223 4755 generic.go:334] "Generic (PLEG): container finished" podID="de75203a-0006-4039-9ed4-505cbe5852d2" containerID="ed481a9c85ba279d104e8abe5efe9f4512782a658d813cc3f59f4b19fca0e58d" exitCode=0 Dec 10 15:44:03 crc kubenswrapper[4755]: I1210 15:44:03.587312 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-st762" event={"ID":"de75203a-0006-4039-9ed4-505cbe5852d2","Type":"ContainerDied","Data":"ed481a9c85ba279d104e8abe5efe9f4512782a658d813cc3f59f4b19fca0e58d"} Dec 10 15:44:03 crc kubenswrapper[4755]: I1210 15:44:03.588298 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d79c58e8-f9d1-4fde-945d-a6e175ec8fee","Type":"ContainerStarted","Data":"9e3d119905d748344192b7e68c06b6f2e6657fda6a97a6d7cef5fe75b564376b"} Dec 10 15:44:05 crc kubenswrapper[4755]: I1210 15:44:05.074632 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-9l6v9" Dec 10 15:44:05 crc kubenswrapper[4755]: I1210 15:44:05.133618 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62cbt\" (UniqueName: \"kubernetes.io/projected/c7e81227-e01b-4851-ac1f-d4ff480c0993-kube-api-access-62cbt\") pod \"c7e81227-e01b-4851-ac1f-d4ff480c0993\" (UID: \"c7e81227-e01b-4851-ac1f-d4ff480c0993\") " Dec 10 15:44:05 crc kubenswrapper[4755]: I1210 15:44:05.133685 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c7e81227-e01b-4851-ac1f-d4ff480c0993-config\") pod \"c7e81227-e01b-4851-ac1f-d4ff480c0993\" (UID: \"c7e81227-e01b-4851-ac1f-d4ff480c0993\") " Dec 10 15:44:05 crc kubenswrapper[4755]: I1210 15:44:05.133856 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7e81227-e01b-4851-ac1f-d4ff480c0993-combined-ca-bundle\") pod \"c7e81227-e01b-4851-ac1f-d4ff480c0993\" (UID: \"c7e81227-e01b-4851-ac1f-d4ff480c0993\") " Dec 10 15:44:05 crc kubenswrapper[4755]: I1210 15:44:05.154431 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7e81227-e01b-4851-ac1f-d4ff480c0993-kube-api-access-62cbt" (OuterVolumeSpecName: "kube-api-access-62cbt") pod "c7e81227-e01b-4851-ac1f-d4ff480c0993" (UID: "c7e81227-e01b-4851-ac1f-d4ff480c0993"). InnerVolumeSpecName "kube-api-access-62cbt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:44:05 crc kubenswrapper[4755]: I1210 15:44:05.188712 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7e81227-e01b-4851-ac1f-d4ff480c0993-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c7e81227-e01b-4851-ac1f-d4ff480c0993" (UID: "c7e81227-e01b-4851-ac1f-d4ff480c0993"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:44:05 crc kubenswrapper[4755]: I1210 15:44:05.214659 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7e81227-e01b-4851-ac1f-d4ff480c0993-config" (OuterVolumeSpecName: "config") pod "c7e81227-e01b-4851-ac1f-d4ff480c0993" (UID: "c7e81227-e01b-4851-ac1f-d4ff480c0993"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:44:05 crc kubenswrapper[4755]: I1210 15:44:05.235776 4755 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7e81227-e01b-4851-ac1f-d4ff480c0993-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:05 crc kubenswrapper[4755]: I1210 15:44:05.235814 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62cbt\" (UniqueName: \"kubernetes.io/projected/c7e81227-e01b-4851-ac1f-d4ff480c0993-kube-api-access-62cbt\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:05 crc kubenswrapper[4755]: I1210 15:44:05.235826 4755 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/c7e81227-e01b-4851-ac1f-d4ff480c0993-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:05 crc kubenswrapper[4755]: I1210 15:44:05.622162 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-st762" event={"ID":"de75203a-0006-4039-9ed4-505cbe5852d2","Type":"ContainerStarted","Data":"4fd733f1375886ae38c46880d3e1c49497ede0c574ddfa1dc6eead2b124373e8"} Dec 10 15:44:05 crc kubenswrapper[4755]: I1210 15:44:05.622330 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56df8fb6b7-st762" Dec 10 15:44:05 crc kubenswrapper[4755]: I1210 15:44:05.628157 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d79c58e8-f9d1-4fde-945d-a6e175ec8fee","Type":"ContainerStarted","Data":"484644dc9f04c386c26fc0cd7b6116e4354c3540756dbc20ae2747f2de24c422"} Dec 10 15:44:05 crc kubenswrapper[4755]: I1210 15:44:05.630784 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-9l6v9" Dec 10 15:44:05 crc kubenswrapper[4755]: I1210 15:44:05.630785 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-9l6v9" event={"ID":"c7e81227-e01b-4851-ac1f-d4ff480c0993","Type":"ContainerDied","Data":"50a69470631e39cc191d910a9becadead74499f0dcb7c9ea3449f7b35a258e52"} Dec 10 15:44:05 crc kubenswrapper[4755]: I1210 15:44:05.630945 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50a69470631e39cc191d910a9becadead74499f0dcb7c9ea3449f7b35a258e52" Dec 10 15:44:05 crc kubenswrapper[4755]: I1210 15:44:05.640274 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"dbc85971-1a23-47af-bae0-708919198aee","Type":"ContainerStarted","Data":"eee6d33760bd3448c482537ce51f449b5f55274b2eb0c62de4ca45a51b43ab7c"} Dec 10 15:44:05 crc kubenswrapper[4755]: I1210 15:44:05.657918 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56df8fb6b7-st762" podStartSLOduration=11.657900324 podStartE2EDuration="11.657900324s" podCreationTimestamp="2025-12-10 15:43:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:44:05.640767569 +0000 UTC m=+1242.241651221" watchObservedRunningTime="2025-12-10 15:44:05.657900324 +0000 UTC m=+1242.258783956" Dec 10 15:44:05 crc kubenswrapper[4755]: I1210 15:44:05.934933 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-st762"] Dec 10 15:44:05 crc kubenswrapper[4755]: I1210 15:44:05.982019 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-gg57w"] Dec 10 15:44:05 crc kubenswrapper[4755]: E1210 15:44:05.984535 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7e81227-e01b-4851-ac1f-d4ff480c0993" containerName="neutron-db-sync" Dec 10 15:44:05 crc kubenswrapper[4755]: I1210 15:44:05.984677 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7e81227-e01b-4851-ac1f-d4ff480c0993" containerName="neutron-db-sync" Dec 10 15:44:05 crc kubenswrapper[4755]: I1210 15:44:05.985163 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7e81227-e01b-4851-ac1f-d4ff480c0993" containerName="neutron-db-sync" Dec 10 15:44:06 crc kubenswrapper[4755]: I1210 15:44:06.012031 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-gg57w" Dec 10 15:44:06 crc kubenswrapper[4755]: I1210 15:44:06.015326 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-gg57w"] Dec 10 15:44:06 crc kubenswrapper[4755]: I1210 15:44:06.039114 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6d5449c6d4-5rw2f"] Dec 10 15:44:06 crc kubenswrapper[4755]: I1210 15:44:06.041660 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6d5449c6d4-5rw2f" Dec 10 15:44:06 crc kubenswrapper[4755]: I1210 15:44:06.049454 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 10 15:44:06 crc kubenswrapper[4755]: I1210 15:44:06.051956 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 10 15:44:06 crc kubenswrapper[4755]: I1210 15:44:06.053622 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Dec 10 15:44:06 crc kubenswrapper[4755]: I1210 15:44:06.053762 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-fqzhg" Dec 10 15:44:06 crc kubenswrapper[4755]: I1210 15:44:06.063767 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d03214c3-1a52-44fb-aa0d-45de3de8ff44-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-gg57w\" (UID: \"d03214c3-1a52-44fb-aa0d-45de3de8ff44\") " pod="openstack/dnsmasq-dns-6b7b667979-gg57w" Dec 10 15:44:06 crc kubenswrapper[4755]: I1210 15:44:06.063825 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2jsb\" (UniqueName: \"kubernetes.io/projected/d03214c3-1a52-44fb-aa0d-45de3de8ff44-kube-api-access-m2jsb\") pod \"dnsmasq-dns-6b7b667979-gg57w\" (UID: \"d03214c3-1a52-44fb-aa0d-45de3de8ff44\") " pod="openstack/dnsmasq-dns-6b7b667979-gg57w" Dec 10 15:44:06 crc kubenswrapper[4755]: I1210 15:44:06.063850 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d03214c3-1a52-44fb-aa0d-45de3de8ff44-config\") pod \"dnsmasq-dns-6b7b667979-gg57w\" (UID: \"d03214c3-1a52-44fb-aa0d-45de3de8ff44\") " pod="openstack/dnsmasq-dns-6b7b667979-gg57w" Dec 10 15:44:06 crc kubenswrapper[4755]: I1210 15:44:06.063865 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d03214c3-1a52-44fb-aa0d-45de3de8ff44-dns-svc\") pod \"dnsmasq-dns-6b7b667979-gg57w\" (UID: \"d03214c3-1a52-44fb-aa0d-45de3de8ff44\") " pod="openstack/dnsmasq-dns-6b7b667979-gg57w" Dec 10 15:44:06 crc kubenswrapper[4755]: I1210 15:44:06.063898 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d03214c3-1a52-44fb-aa0d-45de3de8ff44-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-gg57w\" (UID: \"d03214c3-1a52-44fb-aa0d-45de3de8ff44\") " pod="openstack/dnsmasq-dns-6b7b667979-gg57w" Dec 10 15:44:06 crc kubenswrapper[4755]: I1210 15:44:06.063941 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d03214c3-1a52-44fb-aa0d-45de3de8ff44-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-gg57w\" (UID: \"d03214c3-1a52-44fb-aa0d-45de3de8ff44\") " pod="openstack/dnsmasq-dns-6b7b667979-gg57w" Dec 10 15:44:06 crc kubenswrapper[4755]: I1210 15:44:06.080960 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6d5449c6d4-5rw2f"] Dec 10 15:44:06 crc kubenswrapper[4755]: I1210 15:44:06.166134 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4r8t\" (UniqueName: 
\"kubernetes.io/projected/7d152fd0-fbbf-4c7b-874a-169860ee9075-kube-api-access-z4r8t\") pod \"neutron-6d5449c6d4-5rw2f\" (UID: \"7d152fd0-fbbf-4c7b-874a-169860ee9075\") " pod="openstack/neutron-6d5449c6d4-5rw2f" Dec 10 15:44:06 crc kubenswrapper[4755]: I1210 15:44:06.166238 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7d152fd0-fbbf-4c7b-874a-169860ee9075-httpd-config\") pod \"neutron-6d5449c6d4-5rw2f\" (UID: \"7d152fd0-fbbf-4c7b-874a-169860ee9075\") " pod="openstack/neutron-6d5449c6d4-5rw2f" Dec 10 15:44:06 crc kubenswrapper[4755]: I1210 15:44:06.166269 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d152fd0-fbbf-4c7b-874a-169860ee9075-combined-ca-bundle\") pod \"neutron-6d5449c6d4-5rw2f\" (UID: \"7d152fd0-fbbf-4c7b-874a-169860ee9075\") " pod="openstack/neutron-6d5449c6d4-5rw2f" Dec 10 15:44:06 crc kubenswrapper[4755]: I1210 15:44:06.166300 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d03214c3-1a52-44fb-aa0d-45de3de8ff44-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-gg57w\" (UID: \"d03214c3-1a52-44fb-aa0d-45de3de8ff44\") " pod="openstack/dnsmasq-dns-6b7b667979-gg57w" Dec 10 15:44:06 crc kubenswrapper[4755]: I1210 15:44:06.166333 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2jsb\" (UniqueName: \"kubernetes.io/projected/d03214c3-1a52-44fb-aa0d-45de3de8ff44-kube-api-access-m2jsb\") pod \"dnsmasq-dns-6b7b667979-gg57w\" (UID: \"d03214c3-1a52-44fb-aa0d-45de3de8ff44\") " pod="openstack/dnsmasq-dns-6b7b667979-gg57w" Dec 10 15:44:06 crc kubenswrapper[4755]: I1210 15:44:06.166356 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d03214c3-1a52-44fb-aa0d-45de3de8ff44-config\") pod \"dnsmasq-dns-6b7b667979-gg57w\" (UID: \"d03214c3-1a52-44fb-aa0d-45de3de8ff44\") " pod="openstack/dnsmasq-dns-6b7b667979-gg57w" Dec 10 15:44:06 crc kubenswrapper[4755]: I1210 15:44:06.166370 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d03214c3-1a52-44fb-aa0d-45de3de8ff44-dns-svc\") pod \"dnsmasq-dns-6b7b667979-gg57w\" (UID: \"d03214c3-1a52-44fb-aa0d-45de3de8ff44\") " pod="openstack/dnsmasq-dns-6b7b667979-gg57w" Dec 10 15:44:06 crc kubenswrapper[4755]: I1210 15:44:06.166398 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7d152fd0-fbbf-4c7b-874a-169860ee9075-config\") pod \"neutron-6d5449c6d4-5rw2f\" (UID: \"7d152fd0-fbbf-4c7b-874a-169860ee9075\") " pod="openstack/neutron-6d5449c6d4-5rw2f" Dec 10 15:44:06 crc kubenswrapper[4755]: I1210 15:44:06.166415 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d152fd0-fbbf-4c7b-874a-169860ee9075-ovndb-tls-certs\") pod \"neutron-6d5449c6d4-5rw2f\" (UID: \"7d152fd0-fbbf-4c7b-874a-169860ee9075\") " pod="openstack/neutron-6d5449c6d4-5rw2f" Dec 10 15:44:06 crc kubenswrapper[4755]: I1210 15:44:06.166433 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/d03214c3-1a52-44fb-aa0d-45de3de8ff44-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-gg57w\" (UID: \"d03214c3-1a52-44fb-aa0d-45de3de8ff44\") " pod="openstack/dnsmasq-dns-6b7b667979-gg57w" Dec 10 15:44:06 crc kubenswrapper[4755]: I1210 15:44:06.166518 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d03214c3-1a52-44fb-aa0d-45de3de8ff44-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-gg57w\" (UID: \"d03214c3-1a52-44fb-aa0d-45de3de8ff44\") " pod="openstack/dnsmasq-dns-6b7b667979-gg57w" Dec 10 15:44:06 crc kubenswrapper[4755]: I1210 15:44:06.167291 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d03214c3-1a52-44fb-aa0d-45de3de8ff44-dns-svc\") pod \"dnsmasq-dns-6b7b667979-gg57w\" (UID: \"d03214c3-1a52-44fb-aa0d-45de3de8ff44\") " pod="openstack/dnsmasq-dns-6b7b667979-gg57w" Dec 10 15:44:06 crc kubenswrapper[4755]: I1210 15:44:06.167349 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d03214c3-1a52-44fb-aa0d-45de3de8ff44-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-gg57w\" (UID: \"d03214c3-1a52-44fb-aa0d-45de3de8ff44\") " pod="openstack/dnsmasq-dns-6b7b667979-gg57w" Dec 10 15:44:06 crc kubenswrapper[4755]: I1210 15:44:06.167771 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d03214c3-1a52-44fb-aa0d-45de3de8ff44-config\") pod \"dnsmasq-dns-6b7b667979-gg57w\" (UID: \"d03214c3-1a52-44fb-aa0d-45de3de8ff44\") " pod="openstack/dnsmasq-dns-6b7b667979-gg57w" Dec 10 15:44:06 crc kubenswrapper[4755]: I1210 15:44:06.167788 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d03214c3-1a52-44fb-aa0d-45de3de8ff44-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-gg57w\" (UID: \"d03214c3-1a52-44fb-aa0d-45de3de8ff44\") " pod="openstack/dnsmasq-dns-6b7b667979-gg57w" Dec 10 15:44:06 crc kubenswrapper[4755]: I1210 15:44:06.168173 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d03214c3-1a52-44fb-aa0d-45de3de8ff44-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-gg57w\" (UID: \"d03214c3-1a52-44fb-aa0d-45de3de8ff44\") " pod="openstack/dnsmasq-dns-6b7b667979-gg57w" Dec 10 15:44:06 crc kubenswrapper[4755]: I1210 15:44:06.185256 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2jsb\" (UniqueName: \"kubernetes.io/projected/d03214c3-1a52-44fb-aa0d-45de3de8ff44-kube-api-access-m2jsb\") pod \"dnsmasq-dns-6b7b667979-gg57w\" (UID: \"d03214c3-1a52-44fb-aa0d-45de3de8ff44\") " pod="openstack/dnsmasq-dns-6b7b667979-gg57w" Dec 10 15:44:06 crc kubenswrapper[4755]: I1210 15:44:06.268510 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7d152fd0-fbbf-4c7b-874a-169860ee9075-config\") pod \"neutron-6d5449c6d4-5rw2f\" (UID: \"7d152fd0-fbbf-4c7b-874a-169860ee9075\") " pod="openstack/neutron-6d5449c6d4-5rw2f" Dec 10 15:44:06 crc kubenswrapper[4755]: I1210 15:44:06.268558 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d152fd0-fbbf-4c7b-874a-169860ee9075-ovndb-tls-certs\") pod \"neutron-6d5449c6d4-5rw2f\" (UID: 
\"7d152fd0-fbbf-4c7b-874a-169860ee9075\") " pod="openstack/neutron-6d5449c6d4-5rw2f" Dec 10 15:44:06 crc kubenswrapper[4755]: I1210 15:44:06.269149 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4r8t\" (UniqueName: \"kubernetes.io/projected/7d152fd0-fbbf-4c7b-874a-169860ee9075-kube-api-access-z4r8t\") pod \"neutron-6d5449c6d4-5rw2f\" (UID: \"7d152fd0-fbbf-4c7b-874a-169860ee9075\") " pod="openstack/neutron-6d5449c6d4-5rw2f" Dec 10 15:44:06 crc kubenswrapper[4755]: I1210 15:44:06.269199 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7d152fd0-fbbf-4c7b-874a-169860ee9075-httpd-config\") pod \"neutron-6d5449c6d4-5rw2f\" (UID: \"7d152fd0-fbbf-4c7b-874a-169860ee9075\") " pod="openstack/neutron-6d5449c6d4-5rw2f" Dec 10 15:44:06 crc kubenswrapper[4755]: I1210 15:44:06.269224 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d152fd0-fbbf-4c7b-874a-169860ee9075-combined-ca-bundle\") pod \"neutron-6d5449c6d4-5rw2f\" (UID: \"7d152fd0-fbbf-4c7b-874a-169860ee9075\") " pod="openstack/neutron-6d5449c6d4-5rw2f" Dec 10 15:44:06 crc kubenswrapper[4755]: I1210 15:44:06.284479 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d152fd0-fbbf-4c7b-874a-169860ee9075-ovndb-tls-certs\") pod \"neutron-6d5449c6d4-5rw2f\" (UID: \"7d152fd0-fbbf-4c7b-874a-169860ee9075\") " pod="openstack/neutron-6d5449c6d4-5rw2f" Dec 10 15:44:06 crc kubenswrapper[4755]: I1210 15:44:06.284707 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d152fd0-fbbf-4c7b-874a-169860ee9075-combined-ca-bundle\") pod \"neutron-6d5449c6d4-5rw2f\" (UID: \"7d152fd0-fbbf-4c7b-874a-169860ee9075\") " pod="openstack/neutron-6d5449c6d4-5rw2f" Dec 10 15:44:06 crc kubenswrapper[4755]: I1210 15:44:06.288737 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4r8t\" (UniqueName: \"kubernetes.io/projected/7d152fd0-fbbf-4c7b-874a-169860ee9075-kube-api-access-z4r8t\") pod \"neutron-6d5449c6d4-5rw2f\" (UID: \"7d152fd0-fbbf-4c7b-874a-169860ee9075\") " pod="openstack/neutron-6d5449c6d4-5rw2f" Dec 10 15:44:06 crc kubenswrapper[4755]: I1210 15:44:06.289137 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/7d152fd0-fbbf-4c7b-874a-169860ee9075-config\") pod \"neutron-6d5449c6d4-5rw2f\" (UID: \"7d152fd0-fbbf-4c7b-874a-169860ee9075\") " pod="openstack/neutron-6d5449c6d4-5rw2f" Dec 10 15:44:06 crc kubenswrapper[4755]: I1210 15:44:06.291337 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7d152fd0-fbbf-4c7b-874a-169860ee9075-httpd-config\") pod \"neutron-6d5449c6d4-5rw2f\" (UID: \"7d152fd0-fbbf-4c7b-874a-169860ee9075\") " pod="openstack/neutron-6d5449c6d4-5rw2f" Dec 10 15:44:06 crc kubenswrapper[4755]: I1210 15:44:06.353962 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-gg57w" Dec 10 15:44:06 crc kubenswrapper[4755]: I1210 15:44:06.373430 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6d5449c6d4-5rw2f" Dec 10 15:44:07 crc kubenswrapper[4755]: I1210 15:44:07.659754 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56df8fb6b7-st762" podUID="de75203a-0006-4039-9ed4-505cbe5852d2" containerName="dnsmasq-dns" containerID="cri-o://4fd733f1375886ae38c46880d3e1c49497ede0c574ddfa1dc6eead2b124373e8" gracePeriod=10 Dec 10 15:44:08 crc kubenswrapper[4755]: I1210 15:44:08.434341 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5f46cf586c-brqwd"] Dec 10 15:44:08 crc kubenswrapper[4755]: I1210 15:44:08.438528 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5f46cf586c-brqwd" Dec 10 15:44:08 crc kubenswrapper[4755]: I1210 15:44:08.445024 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Dec 10 15:44:08 crc kubenswrapper[4755]: I1210 15:44:08.445088 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Dec 10 15:44:08 crc kubenswrapper[4755]: I1210 15:44:08.448593 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5f46cf586c-brqwd"] Dec 10 15:44:08 crc kubenswrapper[4755]: I1210 15:44:08.516604 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/efbad6ea-87b6-40ec-b2a6-542e31d18e69-public-tls-certs\") pod \"neutron-5f46cf586c-brqwd\" (UID: \"efbad6ea-87b6-40ec-b2a6-542e31d18e69\") " pod="openstack/neutron-5f46cf586c-brqwd" Dec 10 15:44:08 crc kubenswrapper[4755]: I1210 15:44:08.516663 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efbad6ea-87b6-40ec-b2a6-542e31d18e69-combined-ca-bundle\") pod \"neutron-5f46cf586c-brqwd\" (UID: \"efbad6ea-87b6-40ec-b2a6-542e31d18e69\") " pod="openstack/neutron-5f46cf586c-brqwd" Dec 10 15:44:08 crc kubenswrapper[4755]: I1210 15:44:08.516741 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm6ff\" (UniqueName: \"kubernetes.io/projected/efbad6ea-87b6-40ec-b2a6-542e31d18e69-kube-api-access-nm6ff\") pod \"neutron-5f46cf586c-brqwd\" (UID: \"efbad6ea-87b6-40ec-b2a6-542e31d18e69\") " pod="openstack/neutron-5f46cf586c-brqwd" Dec 10 15:44:08 crc kubenswrapper[4755]: I1210 15:44:08.516795 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/efbad6ea-87b6-40ec-b2a6-542e31d18e69-ovndb-tls-certs\") pod \"neutron-5f46cf586c-brqwd\" (UID: \"efbad6ea-87b6-40ec-b2a6-542e31d18e69\") " pod="openstack/neutron-5f46cf586c-brqwd" Dec 10 15:44:08 crc kubenswrapper[4755]: I1210 15:44:08.516842 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/efbad6ea-87b6-40ec-b2a6-542e31d18e69-config\") pod \"neutron-5f46cf586c-brqwd\" (UID: \"efbad6ea-87b6-40ec-b2a6-542e31d18e69\") " pod="openstack/neutron-5f46cf586c-brqwd" Dec 10 15:44:08 crc kubenswrapper[4755]: I1210 15:44:08.516906 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/efbad6ea-87b6-40ec-b2a6-542e31d18e69-internal-tls-certs\") pod 
\"neutron-5f46cf586c-brqwd\" (UID: \"efbad6ea-87b6-40ec-b2a6-542e31d18e69\") " pod="openstack/neutron-5f46cf586c-brqwd" Dec 10 15:44:08 crc kubenswrapper[4755]: I1210 15:44:08.516945 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/efbad6ea-87b6-40ec-b2a6-542e31d18e69-httpd-config\") pod \"neutron-5f46cf586c-brqwd\" (UID: \"efbad6ea-87b6-40ec-b2a6-542e31d18e69\") " pod="openstack/neutron-5f46cf586c-brqwd" Dec 10 15:44:08 crc kubenswrapper[4755]: I1210 15:44:08.618420 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/efbad6ea-87b6-40ec-b2a6-542e31d18e69-ovndb-tls-certs\") pod \"neutron-5f46cf586c-brqwd\" (UID: \"efbad6ea-87b6-40ec-b2a6-542e31d18e69\") " pod="openstack/neutron-5f46cf586c-brqwd" Dec 10 15:44:08 crc kubenswrapper[4755]: I1210 15:44:08.618779 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/efbad6ea-87b6-40ec-b2a6-542e31d18e69-config\") pod \"neutron-5f46cf586c-brqwd\" (UID: \"efbad6ea-87b6-40ec-b2a6-542e31d18e69\") " pod="openstack/neutron-5f46cf586c-brqwd" Dec 10 15:44:08 crc kubenswrapper[4755]: I1210 15:44:08.618820 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/efbad6ea-87b6-40ec-b2a6-542e31d18e69-internal-tls-certs\") pod \"neutron-5f46cf586c-brqwd\" (UID: \"efbad6ea-87b6-40ec-b2a6-542e31d18e69\") " pod="openstack/neutron-5f46cf586c-brqwd" Dec 10 15:44:08 crc kubenswrapper[4755]: I1210 15:44:08.618855 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/efbad6ea-87b6-40ec-b2a6-542e31d18e69-httpd-config\") pod \"neutron-5f46cf586c-brqwd\" (UID: \"efbad6ea-87b6-40ec-b2a6-542e31d18e69\") " pod="openstack/neutron-5f46cf586c-brqwd" Dec 10 15:44:08 crc kubenswrapper[4755]: I1210 15:44:08.618932 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/efbad6ea-87b6-40ec-b2a6-542e31d18e69-public-tls-certs\") pod \"neutron-5f46cf586c-brqwd\" (UID: \"efbad6ea-87b6-40ec-b2a6-542e31d18e69\") " pod="openstack/neutron-5f46cf586c-brqwd" Dec 10 15:44:08 crc kubenswrapper[4755]: I1210 15:44:08.618960 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efbad6ea-87b6-40ec-b2a6-542e31d18e69-combined-ca-bundle\") pod \"neutron-5f46cf586c-brqwd\" (UID: \"efbad6ea-87b6-40ec-b2a6-542e31d18e69\") " pod="openstack/neutron-5f46cf586c-brqwd" Dec 10 15:44:08 crc kubenswrapper[4755]: I1210 15:44:08.618994 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nm6ff\" (UniqueName: \"kubernetes.io/projected/efbad6ea-87b6-40ec-b2a6-542e31d18e69-kube-api-access-nm6ff\") pod \"neutron-5f46cf586c-brqwd\" (UID: \"efbad6ea-87b6-40ec-b2a6-542e31d18e69\") " pod="openstack/neutron-5f46cf586c-brqwd" Dec 10 15:44:08 crc kubenswrapper[4755]: I1210 15:44:08.624385 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/efbad6ea-87b6-40ec-b2a6-542e31d18e69-internal-tls-certs\") pod \"neutron-5f46cf586c-brqwd\" (UID: \"efbad6ea-87b6-40ec-b2a6-542e31d18e69\") " 
pod="openstack/neutron-5f46cf586c-brqwd" Dec 10 15:44:08 crc kubenswrapper[4755]: I1210 15:44:08.624878 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/efbad6ea-87b6-40ec-b2a6-542e31d18e69-config\") pod \"neutron-5f46cf586c-brqwd\" (UID: \"efbad6ea-87b6-40ec-b2a6-542e31d18e69\") " pod="openstack/neutron-5f46cf586c-brqwd" Dec 10 15:44:08 crc kubenswrapper[4755]: I1210 15:44:08.626673 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/efbad6ea-87b6-40ec-b2a6-542e31d18e69-public-tls-certs\") pod \"neutron-5f46cf586c-brqwd\" (UID: \"efbad6ea-87b6-40ec-b2a6-542e31d18e69\") " pod="openstack/neutron-5f46cf586c-brqwd" Dec 10 15:44:08 crc kubenswrapper[4755]: I1210 15:44:08.627049 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/efbad6ea-87b6-40ec-b2a6-542e31d18e69-ovndb-tls-certs\") pod \"neutron-5f46cf586c-brqwd\" (UID: \"efbad6ea-87b6-40ec-b2a6-542e31d18e69\") " pod="openstack/neutron-5f46cf586c-brqwd" Dec 10 15:44:08 crc kubenswrapper[4755]: I1210 15:44:08.627252 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efbad6ea-87b6-40ec-b2a6-542e31d18e69-combined-ca-bundle\") pod \"neutron-5f46cf586c-brqwd\" (UID: \"efbad6ea-87b6-40ec-b2a6-542e31d18e69\") " pod="openstack/neutron-5f46cf586c-brqwd" Dec 10 15:44:08 crc kubenswrapper[4755]: I1210 15:44:08.627668 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/efbad6ea-87b6-40ec-b2a6-542e31d18e69-httpd-config\") pod \"neutron-5f46cf586c-brqwd\" (UID: \"efbad6ea-87b6-40ec-b2a6-542e31d18e69\") " pod="openstack/neutron-5f46cf586c-brqwd" Dec 10 15:44:08 crc kubenswrapper[4755]: I1210 15:44:08.642229 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nm6ff\" (UniqueName: \"kubernetes.io/projected/efbad6ea-87b6-40ec-b2a6-542e31d18e69-kube-api-access-nm6ff\") pod \"neutron-5f46cf586c-brqwd\" (UID: \"efbad6ea-87b6-40ec-b2a6-542e31d18e69\") " pod="openstack/neutron-5f46cf586c-brqwd" Dec 10 15:44:08 crc kubenswrapper[4755]: I1210 15:44:08.674786 4755 generic.go:334] "Generic (PLEG): container finished" podID="07e5d790-41c9-4f66-87e4-6088fe8bbc8f" containerID="815cb3dcfec50a0281d6b4e6b81c6ac12f56545437ec542718836fab236e026d" exitCode=0 Dec 10 15:44:08 crc kubenswrapper[4755]: I1210 15:44:08.674860 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-zttv2" event={"ID":"07e5d790-41c9-4f66-87e4-6088fe8bbc8f","Type":"ContainerDied","Data":"815cb3dcfec50a0281d6b4e6b81c6ac12f56545437ec542718836fab236e026d"} Dec 10 15:44:08 crc kubenswrapper[4755]: I1210 15:44:08.679994 4755 generic.go:334] "Generic (PLEG): container finished" podID="0009273d-a6d2-43da-99f9-993f2aba0e3a" containerID="998291edebca64a51c3c48de3b26f728fbd5dc52fab841d39b6d5bc065326347" exitCode=0 Dec 10 15:44:08 crc kubenswrapper[4755]: I1210 15:44:08.680076 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wstp4" event={"ID":"0009273d-a6d2-43da-99f9-993f2aba0e3a","Type":"ContainerDied","Data":"998291edebca64a51c3c48de3b26f728fbd5dc52fab841d39b6d5bc065326347"} Dec 10 15:44:08 crc kubenswrapper[4755]: I1210 15:44:08.682493 4755 generic.go:334] "Generic (PLEG): container finished" 
podID="de75203a-0006-4039-9ed4-505cbe5852d2" containerID="4fd733f1375886ae38c46880d3e1c49497ede0c574ddfa1dc6eead2b124373e8" exitCode=0 Dec 10 15:44:08 crc kubenswrapper[4755]: I1210 15:44:08.682525 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-st762" event={"ID":"de75203a-0006-4039-9ed4-505cbe5852d2","Type":"ContainerDied","Data":"4fd733f1375886ae38c46880d3e1c49497ede0c574ddfa1dc6eead2b124373e8"} Dec 10 15:44:08 crc kubenswrapper[4755]: I1210 15:44:08.773039 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5f46cf586c-brqwd" Dec 10 15:44:09 crc kubenswrapper[4755]: I1210 15:44:09.731621 4755 generic.go:334] "Generic (PLEG): container finished" podID="3821a978-80ec-4434-a871-ed026186f498" containerID="85cfc7d93b2a05f6174a89957198e855ab5e29e641e81a0927b2d05109cf98e8" exitCode=0 Dec 10 15:44:09 crc kubenswrapper[4755]: I1210 15:44:09.732051 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-6fnmv" event={"ID":"3821a978-80ec-4434-a871-ed026186f498","Type":"ContainerDied","Data":"85cfc7d93b2a05f6174a89957198e855ab5e29e641e81a0927b2d05109cf98e8"} Dec 10 15:44:09 crc kubenswrapper[4755]: I1210 15:44:09.979878 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-gg57w"] Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.191127 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-st762" Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.322449 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de75203a-0006-4039-9ed4-505cbe5852d2-ovsdbserver-sb\") pod \"de75203a-0006-4039-9ed4-505cbe5852d2\" (UID: \"de75203a-0006-4039-9ed4-505cbe5852d2\") " Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.322514 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de75203a-0006-4039-9ed4-505cbe5852d2-ovsdbserver-nb\") pod \"de75203a-0006-4039-9ed4-505cbe5852d2\" (UID: \"de75203a-0006-4039-9ed4-505cbe5852d2\") " Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.322569 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de75203a-0006-4039-9ed4-505cbe5852d2-dns-svc\") pod \"de75203a-0006-4039-9ed4-505cbe5852d2\" (UID: \"de75203a-0006-4039-9ed4-505cbe5852d2\") " Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.322732 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmd4q\" (UniqueName: \"kubernetes.io/projected/de75203a-0006-4039-9ed4-505cbe5852d2-kube-api-access-pmd4q\") pod \"de75203a-0006-4039-9ed4-505cbe5852d2\" (UID: \"de75203a-0006-4039-9ed4-505cbe5852d2\") " Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.322760 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/de75203a-0006-4039-9ed4-505cbe5852d2-dns-swift-storage-0\") pod \"de75203a-0006-4039-9ed4-505cbe5852d2\" (UID: \"de75203a-0006-4039-9ed4-505cbe5852d2\") " Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.322778 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/de75203a-0006-4039-9ed4-505cbe5852d2-config\") pod \"de75203a-0006-4039-9ed4-505cbe5852d2\" (UID: \"de75203a-0006-4039-9ed4-505cbe5852d2\") " Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.344250 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de75203a-0006-4039-9ed4-505cbe5852d2-kube-api-access-pmd4q" (OuterVolumeSpecName: "kube-api-access-pmd4q") pod "de75203a-0006-4039-9ed4-505cbe5852d2" (UID: "de75203a-0006-4039-9ed4-505cbe5852d2"). InnerVolumeSpecName "kube-api-access-pmd4q". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.386941 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6d5449c6d4-5rw2f"] Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.394775 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de75203a-0006-4039-9ed4-505cbe5852d2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "de75203a-0006-4039-9ed4-505cbe5852d2" (UID: "de75203a-0006-4039-9ed4-505cbe5852d2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.431997 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmd4q\" (UniqueName: \"kubernetes.io/projected/de75203a-0006-4039-9ed4-505cbe5852d2-kube-api-access-pmd4q\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.432044 4755 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de75203a-0006-4039-9ed4-505cbe5852d2-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.441524 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5f46cf586c-brqwd"] Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.450431 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de75203a-0006-4039-9ed4-505cbe5852d2-config" (OuterVolumeSpecName: "config") pod "de75203a-0006-4039-9ed4-505cbe5852d2" (UID: "de75203a-0006-4039-9ed4-505cbe5852d2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.451625 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de75203a-0006-4039-9ed4-505cbe5852d2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "de75203a-0006-4039-9ed4-505cbe5852d2" (UID: "de75203a-0006-4039-9ed4-505cbe5852d2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.452931 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de75203a-0006-4039-9ed4-505cbe5852d2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "de75203a-0006-4039-9ed4-505cbe5852d2" (UID: "de75203a-0006-4039-9ed4-505cbe5852d2"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.470310 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de75203a-0006-4039-9ed4-505cbe5852d2-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "de75203a-0006-4039-9ed4-505cbe5852d2" (UID: "de75203a-0006-4039-9ed4-505cbe5852d2"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:44:10 crc kubenswrapper[4755]: W1210 15:44:10.475544 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podefbad6ea_87b6_40ec_b2a6_542e31d18e69.slice/crio-2f1d4cb1d1868aa665bff477eb585e92b8e3310c5a847c56c908a3ec1b8b83c5 WatchSource:0}: Error finding container 2f1d4cb1d1868aa665bff477eb585e92b8e3310c5a847c56c908a3ec1b8b83c5: Status 404 returned error can't find the container with id 2f1d4cb1d1868aa665bff477eb585e92b8e3310c5a847c56c908a3ec1b8b83c5 Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.483778 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-zttv2" Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.485809 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-wstp4" Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.533862 4755 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/de75203a-0006-4039-9ed4-505cbe5852d2-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.533902 4755 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de75203a-0006-4039-9ed4-505cbe5852d2-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.533916 4755 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de75203a-0006-4039-9ed4-505cbe5852d2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.533926 4755 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de75203a-0006-4039-9ed4-505cbe5852d2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.635178 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0009273d-a6d2-43da-99f9-993f2aba0e3a-config-data\") pod \"0009273d-a6d2-43da-99f9-993f2aba0e3a\" (UID: \"0009273d-a6d2-43da-99f9-993f2aba0e3a\") " Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.635259 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07e5d790-41c9-4f66-87e4-6088fe8bbc8f-scripts\") pod \"07e5d790-41c9-4f66-87e4-6088fe8bbc8f\" (UID: \"07e5d790-41c9-4f66-87e4-6088fe8bbc8f\") " Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.635284 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0009273d-a6d2-43da-99f9-993f2aba0e3a-fernet-keys\") pod \"0009273d-a6d2-43da-99f9-993f2aba0e3a\" (UID: \"0009273d-a6d2-43da-99f9-993f2aba0e3a\") " Dec 10 15:44:10 crc 
kubenswrapper[4755]: I1210 15:44:10.635451 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0009273d-a6d2-43da-99f9-993f2aba0e3a-credential-keys\") pod \"0009273d-a6d2-43da-99f9-993f2aba0e3a\" (UID: \"0009273d-a6d2-43da-99f9-993f2aba0e3a\") " Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.635514 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0009273d-a6d2-43da-99f9-993f2aba0e3a-scripts\") pod \"0009273d-a6d2-43da-99f9-993f2aba0e3a\" (UID: \"0009273d-a6d2-43da-99f9-993f2aba0e3a\") " Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.635551 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0009273d-a6d2-43da-99f9-993f2aba0e3a-combined-ca-bundle\") pod \"0009273d-a6d2-43da-99f9-993f2aba0e3a\" (UID: \"0009273d-a6d2-43da-99f9-993f2aba0e3a\") " Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.635650 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07e5d790-41c9-4f66-87e4-6088fe8bbc8f-combined-ca-bundle\") pod \"07e5d790-41c9-4f66-87e4-6088fe8bbc8f\" (UID: \"07e5d790-41c9-4f66-87e4-6088fe8bbc8f\") " Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.635717 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07e5d790-41c9-4f66-87e4-6088fe8bbc8f-logs\") pod \"07e5d790-41c9-4f66-87e4-6088fe8bbc8f\" (UID: \"07e5d790-41c9-4f66-87e4-6088fe8bbc8f\") " Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.635757 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sk854\" (UniqueName: \"kubernetes.io/projected/07e5d790-41c9-4f66-87e4-6088fe8bbc8f-kube-api-access-sk854\") pod \"07e5d790-41c9-4f66-87e4-6088fe8bbc8f\" (UID: \"07e5d790-41c9-4f66-87e4-6088fe8bbc8f\") " Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.635803 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07e5d790-41c9-4f66-87e4-6088fe8bbc8f-config-data\") pod \"07e5d790-41c9-4f66-87e4-6088fe8bbc8f\" (UID: \"07e5d790-41c9-4f66-87e4-6088fe8bbc8f\") " Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.635843 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nn665\" (UniqueName: \"kubernetes.io/projected/0009273d-a6d2-43da-99f9-993f2aba0e3a-kube-api-access-nn665\") pod \"0009273d-a6d2-43da-99f9-993f2aba0e3a\" (UID: \"0009273d-a6d2-43da-99f9-993f2aba0e3a\") " Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.636833 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07e5d790-41c9-4f66-87e4-6088fe8bbc8f-logs" (OuterVolumeSpecName: "logs") pod "07e5d790-41c9-4f66-87e4-6088fe8bbc8f" (UID: "07e5d790-41c9-4f66-87e4-6088fe8bbc8f"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.642736 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07e5d790-41c9-4f66-87e4-6088fe8bbc8f-kube-api-access-sk854" (OuterVolumeSpecName: "kube-api-access-sk854") pod "07e5d790-41c9-4f66-87e4-6088fe8bbc8f" (UID: "07e5d790-41c9-4f66-87e4-6088fe8bbc8f"). InnerVolumeSpecName "kube-api-access-sk854". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.658614 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0009273d-a6d2-43da-99f9-993f2aba0e3a-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "0009273d-a6d2-43da-99f9-993f2aba0e3a" (UID: "0009273d-a6d2-43da-99f9-993f2aba0e3a"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.662075 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07e5d790-41c9-4f66-87e4-6088fe8bbc8f-scripts" (OuterVolumeSpecName: "scripts") pod "07e5d790-41c9-4f66-87e4-6088fe8bbc8f" (UID: "07e5d790-41c9-4f66-87e4-6088fe8bbc8f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.669148 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0009273d-a6d2-43da-99f9-993f2aba0e3a-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "0009273d-a6d2-43da-99f9-993f2aba0e3a" (UID: "0009273d-a6d2-43da-99f9-993f2aba0e3a"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.673288 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0009273d-a6d2-43da-99f9-993f2aba0e3a-kube-api-access-nn665" (OuterVolumeSpecName: "kube-api-access-nn665") pod "0009273d-a6d2-43da-99f9-993f2aba0e3a" (UID: "0009273d-a6d2-43da-99f9-993f2aba0e3a"). InnerVolumeSpecName "kube-api-access-nn665". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.674417 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0009273d-a6d2-43da-99f9-993f2aba0e3a-scripts" (OuterVolumeSpecName: "scripts") pod "0009273d-a6d2-43da-99f9-993f2aba0e3a" (UID: "0009273d-a6d2-43da-99f9-993f2aba0e3a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.680222 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0009273d-a6d2-43da-99f9-993f2aba0e3a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0009273d-a6d2-43da-99f9-993f2aba0e3a" (UID: "0009273d-a6d2-43da-99f9-993f2aba0e3a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.680286 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0009273d-a6d2-43da-99f9-993f2aba0e3a-config-data" (OuterVolumeSpecName: "config-data") pod "0009273d-a6d2-43da-99f9-993f2aba0e3a" (UID: "0009273d-a6d2-43da-99f9-993f2aba0e3a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.680562 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07e5d790-41c9-4f66-87e4-6088fe8bbc8f-config-data" (OuterVolumeSpecName: "config-data") pod "07e5d790-41c9-4f66-87e4-6088fe8bbc8f" (UID: "07e5d790-41c9-4f66-87e4-6088fe8bbc8f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.686767 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07e5d790-41c9-4f66-87e4-6088fe8bbc8f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "07e5d790-41c9-4f66-87e4-6088fe8bbc8f" (UID: "07e5d790-41c9-4f66-87e4-6088fe8bbc8f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.743762 4755 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0009273d-a6d2-43da-99f9-993f2aba0e3a-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.743798 4755 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0009273d-a6d2-43da-99f9-993f2aba0e3a-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.743813 4755 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0009273d-a6d2-43da-99f9-993f2aba0e3a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.743827 4755 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07e5d790-41c9-4f66-87e4-6088fe8bbc8f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.743841 4755 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07e5d790-41c9-4f66-87e4-6088fe8bbc8f-logs\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.743854 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sk854\" (UniqueName: \"kubernetes.io/projected/07e5d790-41c9-4f66-87e4-6088fe8bbc8f-kube-api-access-sk854\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.743870 4755 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07e5d790-41c9-4f66-87e4-6088fe8bbc8f-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.743883 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nn665\" (UniqueName: \"kubernetes.io/projected/0009273d-a6d2-43da-99f9-993f2aba0e3a-kube-api-access-nn665\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.743907 4755 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0009273d-a6d2-43da-99f9-993f2aba0e3a-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.743919 4755 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07e5d790-41c9-4f66-87e4-6088fe8bbc8f-scripts\") on node 
\"crc\" DevicePath \"\"" Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.743931 4755 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0009273d-a6d2-43da-99f9-993f2aba0e3a-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.785640 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"dbc85971-1a23-47af-bae0-708919198aee","Type":"ContainerStarted","Data":"31264b3af3717e9e0d6dfd275c96394d4cedfe9e708d617c9826b6c28e75dd5e"} Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.785784 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="dbc85971-1a23-47af-bae0-708919198aee" containerName="glance-log" containerID="cri-o://eee6d33760bd3448c482537ce51f449b5f55274b2eb0c62de4ca45a51b43ab7c" gracePeriod=30 Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.786057 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="dbc85971-1a23-47af-bae0-708919198aee" containerName="glance-httpd" containerID="cri-o://31264b3af3717e9e0d6dfd275c96394d4cedfe9e708d617c9826b6c28e75dd5e" gracePeriod=30 Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.800345 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ca4e52f-2a99-42bb-abb3-20a9ee8594b5","Type":"ContainerStarted","Data":"aabe60e3adf7bc0269c08a71e4fe1d986b4d7a53ed2892180259e68165aa5cb7"} Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.823456 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6d5449c6d4-5rw2f" event={"ID":"7d152fd0-fbbf-4c7b-874a-169860ee9075","Type":"ContainerStarted","Data":"61de7b2c15e59d0bd635c02e1e4985b8bfd2694a7ded31026d0d4a36cc326f17"} Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.823507 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6d5449c6d4-5rw2f" event={"ID":"7d152fd0-fbbf-4c7b-874a-169860ee9075","Type":"ContainerStarted","Data":"31d1afa3cb59c5ad381b33b84d5f4d3650540c48cfb95df7de723424328cbe05"} Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.842925 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-st762" event={"ID":"de75203a-0006-4039-9ed4-505cbe5852d2","Type":"ContainerDied","Data":"8621e32821c25177343bb04aec9fec90126f8a3b6126bc128d2964a92d6405c1"} Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.842951 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-st762" Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.842972 4755 scope.go:117] "RemoveContainer" containerID="4fd733f1375886ae38c46880d3e1c49497ede0c574ddfa1dc6eead2b124373e8" Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.849559 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-75b8ff9576-fcxhh"] Dec 10 15:44:10 crc kubenswrapper[4755]: E1210 15:44:10.849977 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07e5d790-41c9-4f66-87e4-6088fe8bbc8f" containerName="placement-db-sync" Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.849994 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="07e5d790-41c9-4f66-87e4-6088fe8bbc8f" containerName="placement-db-sync" Dec 10 15:44:10 crc kubenswrapper[4755]: E1210 15:44:10.850007 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0009273d-a6d2-43da-99f9-993f2aba0e3a" containerName="keystone-bootstrap" Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.850013 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="0009273d-a6d2-43da-99f9-993f2aba0e3a" containerName="keystone-bootstrap" Dec 10 15:44:10 crc kubenswrapper[4755]: E1210 15:44:10.850027 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de75203a-0006-4039-9ed4-505cbe5852d2" containerName="init" Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.850033 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="de75203a-0006-4039-9ed4-505cbe5852d2" containerName="init" Dec 10 15:44:10 crc kubenswrapper[4755]: E1210 15:44:10.850057 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de75203a-0006-4039-9ed4-505cbe5852d2" containerName="dnsmasq-dns" Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.850065 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="de75203a-0006-4039-9ed4-505cbe5852d2" containerName="dnsmasq-dns" Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.850239 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="de75203a-0006-4039-9ed4-505cbe5852d2" containerName="dnsmasq-dns" Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.850260 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="0009273d-a6d2-43da-99f9-993f2aba0e3a" containerName="keystone-bootstrap" Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.850273 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="07e5d790-41c9-4f66-87e4-6088fe8bbc8f" containerName="placement-db-sync" Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.851642 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-75b8ff9576-fcxhh" Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.860511 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=16.860491495 podStartE2EDuration="16.860491495s" podCreationTimestamp="2025-12-10 15:43:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:44:10.812999725 +0000 UTC m=+1247.413883357" watchObservedRunningTime="2025-12-10 15:44:10.860491495 +0000 UTC m=+1247.461375137" Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.862620 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.863000 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.863185 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-75b8ff9576-fcxhh"] Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.864709 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d79c58e8-f9d1-4fde-945d-a6e175ec8fee","Type":"ContainerStarted","Data":"bbb53cf639bf3e40527ad702f68813c89df04d8f991131a2956a2546fffdc70c"} Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.864848 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="d79c58e8-f9d1-4fde-945d-a6e175ec8fee" containerName="glance-log" containerID="cri-o://484644dc9f04c386c26fc0cd7b6116e4354c3540756dbc20ae2747f2de24c422" gracePeriod=30 Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.865918 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="d79c58e8-f9d1-4fde-945d-a6e175ec8fee" containerName="glance-httpd" containerID="cri-o://bbb53cf639bf3e40527ad702f68813c89df04d8f991131a2956a2546fffdc70c" gracePeriod=30 Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.872742 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-zttv2" event={"ID":"07e5d790-41c9-4f66-87e4-6088fe8bbc8f","Type":"ContainerDied","Data":"1f6ed92a3d6c24de97595bdc9d0c78344b496c1688609ab2fa71da3f448babf1"} Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.872778 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f6ed92a3d6c24de97595bdc9d0c78344b496c1688609ab2fa71da3f448babf1" Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.872836 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-zttv2" Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.882777 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5f46cf586c-brqwd" event={"ID":"efbad6ea-87b6-40ec-b2a6-542e31d18e69","Type":"ContainerStarted","Data":"2f1d4cb1d1868aa665bff477eb585e92b8e3310c5a847c56c908a3ec1b8b83c5"} Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.890926 4755 generic.go:334] "Generic (PLEG): container finished" podID="d03214c3-1a52-44fb-aa0d-45de3de8ff44" containerID="82828f2c1d8e3dfb9e1893b99609480c4b19ff32d368f166bc9ad6821a41c584" exitCode=0 Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.890986 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-gg57w" event={"ID":"d03214c3-1a52-44fb-aa0d-45de3de8ff44","Type":"ContainerDied","Data":"82828f2c1d8e3dfb9e1893b99609480c4b19ff32d368f166bc9ad6821a41c584"} Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.891011 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-gg57w" event={"ID":"d03214c3-1a52-44fb-aa0d-45de3de8ff44","Type":"ContainerStarted","Data":"eeefa94bf87fe574554df62b1927047a98d82b0d8a2500feceebeca95396bc6f"} Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.897015 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-wstp4" Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.902865 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wstp4" event={"ID":"0009273d-a6d2-43da-99f9-993f2aba0e3a","Type":"ContainerDied","Data":"99a1791e52ecebec4e10a812e72cbcb997ec7edc2a805e605a9b7503601af8dc"} Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.902901 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99a1791e52ecebec4e10a812e72cbcb997ec7edc2a805e605a9b7503601af8dc" Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.921378 4755 scope.go:117] "RemoveContainer" containerID="ed481a9c85ba279d104e8abe5efe9f4512782a658d813cc3f59f4b19fca0e58d" Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.922822 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-st762"] Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.948438 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-st762"] Dec 10 15:44:10 crc kubenswrapper[4755]: I1210 15:44:10.951330 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=16.951315529 podStartE2EDuration="16.951315529s" podCreationTimestamp="2025-12-10 15:43:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:44:10.942434159 +0000 UTC m=+1247.543317791" watchObservedRunningTime="2025-12-10 15:44:10.951315529 +0000 UTC m=+1247.552199161" Dec 10 15:44:11 crc kubenswrapper[4755]: I1210 15:44:11.049371 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3c11b73-3e0e-4c7e-ac2f-943e44b99d92-public-tls-certs\") pod \"placement-75b8ff9576-fcxhh\" (UID: \"c3c11b73-3e0e-4c7e-ac2f-943e44b99d92\") " pod="openstack/placement-75b8ff9576-fcxhh" Dec 10 15:44:11 crc kubenswrapper[4755]: I1210 15:44:11.049440 4755 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3c11b73-3e0e-4c7e-ac2f-943e44b99d92-combined-ca-bundle\") pod \"placement-75b8ff9576-fcxhh\" (UID: \"c3c11b73-3e0e-4c7e-ac2f-943e44b99d92\") " pod="openstack/placement-75b8ff9576-fcxhh" Dec 10 15:44:11 crc kubenswrapper[4755]: I1210 15:44:11.049497 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3c11b73-3e0e-4c7e-ac2f-943e44b99d92-config-data\") pod \"placement-75b8ff9576-fcxhh\" (UID: \"c3c11b73-3e0e-4c7e-ac2f-943e44b99d92\") " pod="openstack/placement-75b8ff9576-fcxhh" Dec 10 15:44:11 crc kubenswrapper[4755]: I1210 15:44:11.049603 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3c11b73-3e0e-4c7e-ac2f-943e44b99d92-scripts\") pod \"placement-75b8ff9576-fcxhh\" (UID: \"c3c11b73-3e0e-4c7e-ac2f-943e44b99d92\") " pod="openstack/placement-75b8ff9576-fcxhh" Dec 10 15:44:11 crc kubenswrapper[4755]: I1210 15:44:11.049618 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6krq\" (UniqueName: \"kubernetes.io/projected/c3c11b73-3e0e-4c7e-ac2f-943e44b99d92-kube-api-access-b6krq\") pod \"placement-75b8ff9576-fcxhh\" (UID: \"c3c11b73-3e0e-4c7e-ac2f-943e44b99d92\") " pod="openstack/placement-75b8ff9576-fcxhh" Dec 10 15:44:11 crc kubenswrapper[4755]: I1210 15:44:11.049670 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3c11b73-3e0e-4c7e-ac2f-943e44b99d92-logs\") pod \"placement-75b8ff9576-fcxhh\" (UID: \"c3c11b73-3e0e-4c7e-ac2f-943e44b99d92\") " pod="openstack/placement-75b8ff9576-fcxhh" Dec 10 15:44:11 crc kubenswrapper[4755]: I1210 15:44:11.049743 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3c11b73-3e0e-4c7e-ac2f-943e44b99d92-internal-tls-certs\") pod \"placement-75b8ff9576-fcxhh\" (UID: \"c3c11b73-3e0e-4c7e-ac2f-943e44b99d92\") " pod="openstack/placement-75b8ff9576-fcxhh" Dec 10 15:44:11 crc kubenswrapper[4755]: I1210 15:44:11.161504 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3c11b73-3e0e-4c7e-ac2f-943e44b99d92-logs\") pod \"placement-75b8ff9576-fcxhh\" (UID: \"c3c11b73-3e0e-4c7e-ac2f-943e44b99d92\") " pod="openstack/placement-75b8ff9576-fcxhh" Dec 10 15:44:11 crc kubenswrapper[4755]: I1210 15:44:11.169663 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3c11b73-3e0e-4c7e-ac2f-943e44b99d92-logs\") pod \"placement-75b8ff9576-fcxhh\" (UID: \"c3c11b73-3e0e-4c7e-ac2f-943e44b99d92\") " pod="openstack/placement-75b8ff9576-fcxhh" Dec 10 15:44:11 crc kubenswrapper[4755]: I1210 15:44:11.173154 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3c11b73-3e0e-4c7e-ac2f-943e44b99d92-internal-tls-certs\") pod \"placement-75b8ff9576-fcxhh\" (UID: \"c3c11b73-3e0e-4c7e-ac2f-943e44b99d92\") " pod="openstack/placement-75b8ff9576-fcxhh" Dec 10 15:44:11 crc kubenswrapper[4755]: I1210 15:44:11.173203 4755 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3c11b73-3e0e-4c7e-ac2f-943e44b99d92-public-tls-certs\") pod \"placement-75b8ff9576-fcxhh\" (UID: \"c3c11b73-3e0e-4c7e-ac2f-943e44b99d92\") " pod="openstack/placement-75b8ff9576-fcxhh" Dec 10 15:44:11 crc kubenswrapper[4755]: I1210 15:44:11.173293 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3c11b73-3e0e-4c7e-ac2f-943e44b99d92-combined-ca-bundle\") pod \"placement-75b8ff9576-fcxhh\" (UID: \"c3c11b73-3e0e-4c7e-ac2f-943e44b99d92\") " pod="openstack/placement-75b8ff9576-fcxhh" Dec 10 15:44:11 crc kubenswrapper[4755]: I1210 15:44:11.173340 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3c11b73-3e0e-4c7e-ac2f-943e44b99d92-config-data\") pod \"placement-75b8ff9576-fcxhh\" (UID: \"c3c11b73-3e0e-4c7e-ac2f-943e44b99d92\") " pod="openstack/placement-75b8ff9576-fcxhh" Dec 10 15:44:11 crc kubenswrapper[4755]: I1210 15:44:11.173598 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3c11b73-3e0e-4c7e-ac2f-943e44b99d92-scripts\") pod \"placement-75b8ff9576-fcxhh\" (UID: \"c3c11b73-3e0e-4c7e-ac2f-943e44b99d92\") " pod="openstack/placement-75b8ff9576-fcxhh" Dec 10 15:44:11 crc kubenswrapper[4755]: I1210 15:44:11.173616 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6krq\" (UniqueName: \"kubernetes.io/projected/c3c11b73-3e0e-4c7e-ac2f-943e44b99d92-kube-api-access-b6krq\") pod \"placement-75b8ff9576-fcxhh\" (UID: \"c3c11b73-3e0e-4c7e-ac2f-943e44b99d92\") " pod="openstack/placement-75b8ff9576-fcxhh" Dec 10 15:44:11 crc kubenswrapper[4755]: I1210 15:44:11.187684 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3c11b73-3e0e-4c7e-ac2f-943e44b99d92-config-data\") pod \"placement-75b8ff9576-fcxhh\" (UID: \"c3c11b73-3e0e-4c7e-ac2f-943e44b99d92\") " pod="openstack/placement-75b8ff9576-fcxhh" Dec 10 15:44:11 crc kubenswrapper[4755]: I1210 15:44:11.187937 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3c11b73-3e0e-4c7e-ac2f-943e44b99d92-internal-tls-certs\") pod \"placement-75b8ff9576-fcxhh\" (UID: \"c3c11b73-3e0e-4c7e-ac2f-943e44b99d92\") " pod="openstack/placement-75b8ff9576-fcxhh" Dec 10 15:44:11 crc kubenswrapper[4755]: I1210 15:44:11.188595 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-764c845c6f-lbzq2"] Dec 10 15:44:11 crc kubenswrapper[4755]: I1210 15:44:11.189790 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-764c845c6f-lbzq2" Dec 10 15:44:11 crc kubenswrapper[4755]: I1210 15:44:11.199841 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3c11b73-3e0e-4c7e-ac2f-943e44b99d92-scripts\") pod \"placement-75b8ff9576-fcxhh\" (UID: \"c3c11b73-3e0e-4c7e-ac2f-943e44b99d92\") " pod="openstack/placement-75b8ff9576-fcxhh" Dec 10 15:44:11 crc kubenswrapper[4755]: I1210 15:44:11.202728 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-tp2mw" Dec 10 15:44:11 crc kubenswrapper[4755]: I1210 15:44:11.203059 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Dec 10 15:44:11 crc kubenswrapper[4755]: I1210 15:44:11.203359 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3c11b73-3e0e-4c7e-ac2f-943e44b99d92-public-tls-certs\") pod \"placement-75b8ff9576-fcxhh\" (UID: \"c3c11b73-3e0e-4c7e-ac2f-943e44b99d92\") " pod="openstack/placement-75b8ff9576-fcxhh" Dec 10 15:44:11 crc kubenswrapper[4755]: I1210 15:44:11.211412 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-764c845c6f-lbzq2"] Dec 10 15:44:11 crc kubenswrapper[4755]: I1210 15:44:11.219649 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Dec 10 15:44:11 crc kubenswrapper[4755]: I1210 15:44:11.219998 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 10 15:44:11 crc kubenswrapper[4755]: I1210 15:44:11.220137 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 10 15:44:11 crc kubenswrapper[4755]: I1210 15:44:11.220265 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 10 15:44:11 crc kubenswrapper[4755]: I1210 15:44:11.222880 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3c11b73-3e0e-4c7e-ac2f-943e44b99d92-combined-ca-bundle\") pod \"placement-75b8ff9576-fcxhh\" (UID: \"c3c11b73-3e0e-4c7e-ac2f-943e44b99d92\") " pod="openstack/placement-75b8ff9576-fcxhh" Dec 10 15:44:11 crc kubenswrapper[4755]: I1210 15:44:11.237064 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6krq\" (UniqueName: \"kubernetes.io/projected/c3c11b73-3e0e-4c7e-ac2f-943e44b99d92-kube-api-access-b6krq\") pod \"placement-75b8ff9576-fcxhh\" (UID: \"c3c11b73-3e0e-4c7e-ac2f-943e44b99d92\") " pod="openstack/placement-75b8ff9576-fcxhh" Dec 10 15:44:11 crc kubenswrapper[4755]: I1210 15:44:11.289420 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73f09975-30b1-46a8-a34e-ccb4683adf6c-scripts\") pod \"keystone-764c845c6f-lbzq2\" (UID: \"73f09975-30b1-46a8-a34e-ccb4683adf6c\") " pod="openstack/keystone-764c845c6f-lbzq2" Dec 10 15:44:11 crc kubenswrapper[4755]: I1210 15:44:11.289504 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/73f09975-30b1-46a8-a34e-ccb4683adf6c-public-tls-certs\") pod \"keystone-764c845c6f-lbzq2\" (UID: \"73f09975-30b1-46a8-a34e-ccb4683adf6c\") " pod="openstack/keystone-764c845c6f-lbzq2" Dec 10 15:44:11 crc 
kubenswrapper[4755]: I1210 15:44:11.289537 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73f09975-30b1-46a8-a34e-ccb4683adf6c-combined-ca-bundle\") pod \"keystone-764c845c6f-lbzq2\" (UID: \"73f09975-30b1-46a8-a34e-ccb4683adf6c\") " pod="openstack/keystone-764c845c6f-lbzq2" Dec 10 15:44:11 crc kubenswrapper[4755]: I1210 15:44:11.289555 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/73f09975-30b1-46a8-a34e-ccb4683adf6c-internal-tls-certs\") pod \"keystone-764c845c6f-lbzq2\" (UID: \"73f09975-30b1-46a8-a34e-ccb4683adf6c\") " pod="openstack/keystone-764c845c6f-lbzq2" Dec 10 15:44:11 crc kubenswrapper[4755]: I1210 15:44:11.289581 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/73f09975-30b1-46a8-a34e-ccb4683adf6c-credential-keys\") pod \"keystone-764c845c6f-lbzq2\" (UID: \"73f09975-30b1-46a8-a34e-ccb4683adf6c\") " pod="openstack/keystone-764c845c6f-lbzq2" Dec 10 15:44:11 crc kubenswrapper[4755]: I1210 15:44:11.289603 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/73f09975-30b1-46a8-a34e-ccb4683adf6c-fernet-keys\") pod \"keystone-764c845c6f-lbzq2\" (UID: \"73f09975-30b1-46a8-a34e-ccb4683adf6c\") " pod="openstack/keystone-764c845c6f-lbzq2" Dec 10 15:44:11 crc kubenswrapper[4755]: I1210 15:44:11.289642 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73f09975-30b1-46a8-a34e-ccb4683adf6c-config-data\") pod \"keystone-764c845c6f-lbzq2\" (UID: \"73f09975-30b1-46a8-a34e-ccb4683adf6c\") " pod="openstack/keystone-764c845c6f-lbzq2" Dec 10 15:44:11 crc kubenswrapper[4755]: I1210 15:44:11.289674 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gg7f\" (UniqueName: \"kubernetes.io/projected/73f09975-30b1-46a8-a34e-ccb4683adf6c-kube-api-access-4gg7f\") pod \"keystone-764c845c6f-lbzq2\" (UID: \"73f09975-30b1-46a8-a34e-ccb4683adf6c\") " pod="openstack/keystone-764c845c6f-lbzq2" Dec 10 15:44:11 crc kubenswrapper[4755]: I1210 15:44:11.346032 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-6fnmv" Dec 10 15:44:11 crc kubenswrapper[4755]: I1210 15:44:11.403026 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3821a978-80ec-4434-a871-ed026186f498-db-sync-config-data\") pod \"3821a978-80ec-4434-a871-ed026186f498\" (UID: \"3821a978-80ec-4434-a871-ed026186f498\") " Dec 10 15:44:11 crc kubenswrapper[4755]: I1210 15:44:11.403089 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6t25v\" (UniqueName: \"kubernetes.io/projected/3821a978-80ec-4434-a871-ed026186f498-kube-api-access-6t25v\") pod \"3821a978-80ec-4434-a871-ed026186f498\" (UID: \"3821a978-80ec-4434-a871-ed026186f498\") " Dec 10 15:44:11 crc kubenswrapper[4755]: I1210 15:44:11.403276 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3821a978-80ec-4434-a871-ed026186f498-combined-ca-bundle\") pod \"3821a978-80ec-4434-a871-ed026186f498\" (UID: \"3821a978-80ec-4434-a871-ed026186f498\") " Dec 10 15:44:11 crc kubenswrapper[4755]: I1210 15:44:11.403764 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/73f09975-30b1-46a8-a34e-ccb4683adf6c-public-tls-certs\") pod \"keystone-764c845c6f-lbzq2\" (UID: \"73f09975-30b1-46a8-a34e-ccb4683adf6c\") " pod="openstack/keystone-764c845c6f-lbzq2" Dec 10 15:44:11 crc kubenswrapper[4755]: I1210 15:44:11.403809 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73f09975-30b1-46a8-a34e-ccb4683adf6c-combined-ca-bundle\") pod \"keystone-764c845c6f-lbzq2\" (UID: \"73f09975-30b1-46a8-a34e-ccb4683adf6c\") " pod="openstack/keystone-764c845c6f-lbzq2" Dec 10 15:44:11 crc kubenswrapper[4755]: I1210 15:44:11.403831 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/73f09975-30b1-46a8-a34e-ccb4683adf6c-internal-tls-certs\") pod \"keystone-764c845c6f-lbzq2\" (UID: \"73f09975-30b1-46a8-a34e-ccb4683adf6c\") " pod="openstack/keystone-764c845c6f-lbzq2" Dec 10 15:44:11 crc kubenswrapper[4755]: I1210 15:44:11.403858 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/73f09975-30b1-46a8-a34e-ccb4683adf6c-credential-keys\") pod \"keystone-764c845c6f-lbzq2\" (UID: \"73f09975-30b1-46a8-a34e-ccb4683adf6c\") " pod="openstack/keystone-764c845c6f-lbzq2" Dec 10 15:44:11 crc kubenswrapper[4755]: I1210 15:44:11.403883 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/73f09975-30b1-46a8-a34e-ccb4683adf6c-fernet-keys\") pod \"keystone-764c845c6f-lbzq2\" (UID: \"73f09975-30b1-46a8-a34e-ccb4683adf6c\") " pod="openstack/keystone-764c845c6f-lbzq2" Dec 10 15:44:11 crc kubenswrapper[4755]: I1210 15:44:11.403930 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73f09975-30b1-46a8-a34e-ccb4683adf6c-config-data\") pod \"keystone-764c845c6f-lbzq2\" (UID: \"73f09975-30b1-46a8-a34e-ccb4683adf6c\") " pod="openstack/keystone-764c845c6f-lbzq2" Dec 10 15:44:11 crc kubenswrapper[4755]: I1210 15:44:11.403970 4755 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-4gg7f\" (UniqueName: \"kubernetes.io/projected/73f09975-30b1-46a8-a34e-ccb4683adf6c-kube-api-access-4gg7f\") pod \"keystone-764c845c6f-lbzq2\" (UID: \"73f09975-30b1-46a8-a34e-ccb4683adf6c\") " pod="openstack/keystone-764c845c6f-lbzq2" Dec 10 15:44:11 crc kubenswrapper[4755]: I1210 15:44:11.404059 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73f09975-30b1-46a8-a34e-ccb4683adf6c-scripts\") pod \"keystone-764c845c6f-lbzq2\" (UID: \"73f09975-30b1-46a8-a34e-ccb4683adf6c\") " pod="openstack/keystone-764c845c6f-lbzq2" Dec 10 15:44:11 crc kubenswrapper[4755]: I1210 15:44:11.411406 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/73f09975-30b1-46a8-a34e-ccb4683adf6c-internal-tls-certs\") pod \"keystone-764c845c6f-lbzq2\" (UID: \"73f09975-30b1-46a8-a34e-ccb4683adf6c\") " pod="openstack/keystone-764c845c6f-lbzq2" Dec 10 15:44:11 crc kubenswrapper[4755]: I1210 15:44:11.418458 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/73f09975-30b1-46a8-a34e-ccb4683adf6c-fernet-keys\") pod \"keystone-764c845c6f-lbzq2\" (UID: \"73f09975-30b1-46a8-a34e-ccb4683adf6c\") " pod="openstack/keystone-764c845c6f-lbzq2" Dec 10 15:44:11 crc kubenswrapper[4755]: I1210 15:44:11.426180 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73f09975-30b1-46a8-a34e-ccb4683adf6c-scripts\") pod \"keystone-764c845c6f-lbzq2\" (UID: \"73f09975-30b1-46a8-a34e-ccb4683adf6c\") " pod="openstack/keystone-764c845c6f-lbzq2" Dec 10 15:44:11 crc kubenswrapper[4755]: I1210 15:44:11.429404 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3821a978-80ec-4434-a871-ed026186f498-kube-api-access-6t25v" (OuterVolumeSpecName: "kube-api-access-6t25v") pod "3821a978-80ec-4434-a871-ed026186f498" (UID: "3821a978-80ec-4434-a871-ed026186f498"). InnerVolumeSpecName "kube-api-access-6t25v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:44:11 crc kubenswrapper[4755]: I1210 15:44:11.429437 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3821a978-80ec-4434-a871-ed026186f498-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "3821a978-80ec-4434-a871-ed026186f498" (UID: "3821a978-80ec-4434-a871-ed026186f498"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:44:11 crc kubenswrapper[4755]: I1210 15:44:11.429955 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/73f09975-30b1-46a8-a34e-ccb4683adf6c-public-tls-certs\") pod \"keystone-764c845c6f-lbzq2\" (UID: \"73f09975-30b1-46a8-a34e-ccb4683adf6c\") " pod="openstack/keystone-764c845c6f-lbzq2" Dec 10 15:44:11 crc kubenswrapper[4755]: I1210 15:44:11.430829 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73f09975-30b1-46a8-a34e-ccb4683adf6c-combined-ca-bundle\") pod \"keystone-764c845c6f-lbzq2\" (UID: \"73f09975-30b1-46a8-a34e-ccb4683adf6c\") " pod="openstack/keystone-764c845c6f-lbzq2" Dec 10 15:44:11 crc kubenswrapper[4755]: I1210 15:44:11.431282 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73f09975-30b1-46a8-a34e-ccb4683adf6c-config-data\") pod \"keystone-764c845c6f-lbzq2\" (UID: \"73f09975-30b1-46a8-a34e-ccb4683adf6c\") " pod="openstack/keystone-764c845c6f-lbzq2" Dec 10 15:44:11 crc kubenswrapper[4755]: I1210 15:44:11.435436 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/73f09975-30b1-46a8-a34e-ccb4683adf6c-credential-keys\") pod \"keystone-764c845c6f-lbzq2\" (UID: \"73f09975-30b1-46a8-a34e-ccb4683adf6c\") " pod="openstack/keystone-764c845c6f-lbzq2" Dec 10 15:44:11 crc kubenswrapper[4755]: I1210 15:44:11.443886 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gg7f\" (UniqueName: \"kubernetes.io/projected/73f09975-30b1-46a8-a34e-ccb4683adf6c-kube-api-access-4gg7f\") pod \"keystone-764c845c6f-lbzq2\" (UID: \"73f09975-30b1-46a8-a34e-ccb4683adf6c\") " pod="openstack/keystone-764c845c6f-lbzq2" Dec 10 15:44:11 crc kubenswrapper[4755]: I1210 15:44:11.474248 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-75b8ff9576-fcxhh" Dec 10 15:44:11 crc kubenswrapper[4755]: I1210 15:44:11.484694 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3821a978-80ec-4434-a871-ed026186f498-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3821a978-80ec-4434-a871-ed026186f498" (UID: "3821a978-80ec-4434-a871-ed026186f498"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:44:11 crc kubenswrapper[4755]: I1210 15:44:11.506093 4755 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3821a978-80ec-4434-a871-ed026186f498-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:11 crc kubenswrapper[4755]: I1210 15:44:11.506527 4755 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3821a978-80ec-4434-a871-ed026186f498-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:11 crc kubenswrapper[4755]: I1210 15:44:11.506586 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6t25v\" (UniqueName: \"kubernetes.io/projected/3821a978-80ec-4434-a871-ed026186f498-kube-api-access-6t25v\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:11 crc kubenswrapper[4755]: I1210 15:44:11.540645 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-764c845c6f-lbzq2" Dec 10 15:44:11 crc kubenswrapper[4755]: I1210 15:44:11.773708 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de75203a-0006-4039-9ed4-505cbe5852d2" path="/var/lib/kubelet/pods/de75203a-0006-4039-9ed4-505cbe5852d2/volumes" Dec 10 15:44:11 crc kubenswrapper[4755]: I1210 15:44:11.915909 4755 generic.go:334] "Generic (PLEG): container finished" podID="dbc85971-1a23-47af-bae0-708919198aee" containerID="31264b3af3717e9e0d6dfd275c96394d4cedfe9e708d617c9826b6c28e75dd5e" exitCode=0 Dec 10 15:44:11 crc kubenswrapper[4755]: I1210 15:44:11.916237 4755 generic.go:334] "Generic (PLEG): container finished" podID="dbc85971-1a23-47af-bae0-708919198aee" containerID="eee6d33760bd3448c482537ce51f449b5f55274b2eb0c62de4ca45a51b43ab7c" exitCode=143 Dec 10 15:44:11 crc kubenswrapper[4755]: I1210 15:44:11.915951 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"dbc85971-1a23-47af-bae0-708919198aee","Type":"ContainerDied","Data":"31264b3af3717e9e0d6dfd275c96394d4cedfe9e708d617c9826b6c28e75dd5e"} Dec 10 15:44:11 crc kubenswrapper[4755]: I1210 15:44:11.916340 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"dbc85971-1a23-47af-bae0-708919198aee","Type":"ContainerDied","Data":"eee6d33760bd3448c482537ce51f449b5f55274b2eb0c62de4ca45a51b43ab7c"} Dec 10 15:44:11 crc kubenswrapper[4755]: I1210 15:44:11.918320 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-6fnmv" Dec 10 15:44:11 crc kubenswrapper[4755]: I1210 15:44:11.918684 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-6fnmv" event={"ID":"3821a978-80ec-4434-a871-ed026186f498","Type":"ContainerDied","Data":"e6c3fbda279e21f480c71909873a5abec80c8a66d531a36d9ac01498c4abee44"} Dec 10 15:44:11 crc kubenswrapper[4755]: I1210 15:44:11.918726 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6c3fbda279e21f480c71909873a5abec80c8a66d531a36d9ac01498c4abee44" Dec 10 15:44:11 crc kubenswrapper[4755]: I1210 15:44:11.923383 4755 generic.go:334] "Generic (PLEG): container finished" podID="d79c58e8-f9d1-4fde-945d-a6e175ec8fee" containerID="484644dc9f04c386c26fc0cd7b6116e4354c3540756dbc20ae2747f2de24c422" exitCode=143 Dec 10 15:44:11 crc kubenswrapper[4755]: I1210 15:44:11.923442 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d79c58e8-f9d1-4fde-945d-a6e175ec8fee","Type":"ContainerDied","Data":"484644dc9f04c386c26fc0cd7b6116e4354c3540756dbc20ae2747f2de24c422"} Dec 10 15:44:11 crc kubenswrapper[4755]: I1210 15:44:11.989810 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-75b8ff9576-fcxhh"] Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.153549 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-979964fb-8vrlp"] Dec 10 15:44:12 crc kubenswrapper[4755]: E1210 15:44:12.154511 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3821a978-80ec-4434-a871-ed026186f498" containerName="barbican-db-sync" Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.154524 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="3821a978-80ec-4434-a871-ed026186f498" containerName="barbican-db-sync" Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.162545 4755 
memory_manager.go:354] "RemoveStaleState removing state" podUID="3821a978-80ec-4434-a871-ed026186f498" containerName="barbican-db-sync" Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.163999 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-979964fb-8vrlp" Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.173241 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-lnwnx" Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.173659 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.182137 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.188872 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-5d855b58d9-fzhd2"] Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.190652 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5d855b58d9-fzhd2" Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.194162 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.217604 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-979964fb-8vrlp"] Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.240551 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5d855b58d9-fzhd2"] Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.246495 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a5eb4f86-4f65-41b4-8694-279c44c08491-config-data-custom\") pod \"barbican-keystone-listener-979964fb-8vrlp\" (UID: \"a5eb4f86-4f65-41b4-8694-279c44c08491\") " pod="openstack/barbican-keystone-listener-979964fb-8vrlp" Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.246562 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83de1ea1-f46a-43ab-9a89-f5980d7bed78-combined-ca-bundle\") pod \"barbican-worker-5d855b58d9-fzhd2\" (UID: \"83de1ea1-f46a-43ab-9a89-f5980d7bed78\") " pod="openstack/barbican-worker-5d855b58d9-fzhd2" Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.246601 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27nwb\" (UniqueName: \"kubernetes.io/projected/a5eb4f86-4f65-41b4-8694-279c44c08491-kube-api-access-27nwb\") pod \"barbican-keystone-listener-979964fb-8vrlp\" (UID: \"a5eb4f86-4f65-41b4-8694-279c44c08491\") " pod="openstack/barbican-keystone-listener-979964fb-8vrlp" Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.246685 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83de1ea1-f46a-43ab-9a89-f5980d7bed78-logs\") pod \"barbican-worker-5d855b58d9-fzhd2\" (UID: \"83de1ea1-f46a-43ab-9a89-f5980d7bed78\") " pod="openstack/barbican-worker-5d855b58d9-fzhd2" Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.246705 4755 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83de1ea1-f46a-43ab-9a89-f5980d7bed78-config-data\") pod \"barbican-worker-5d855b58d9-fzhd2\" (UID: \"83de1ea1-f46a-43ab-9a89-f5980d7bed78\") " pod="openstack/barbican-worker-5d855b58d9-fzhd2" Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.246782 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5eb4f86-4f65-41b4-8694-279c44c08491-combined-ca-bundle\") pod \"barbican-keystone-listener-979964fb-8vrlp\" (UID: \"a5eb4f86-4f65-41b4-8694-279c44c08491\") " pod="openstack/barbican-keystone-listener-979964fb-8vrlp" Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.246818 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzblc\" (UniqueName: \"kubernetes.io/projected/83de1ea1-f46a-43ab-9a89-f5980d7bed78-kube-api-access-lzblc\") pod \"barbican-worker-5d855b58d9-fzhd2\" (UID: \"83de1ea1-f46a-43ab-9a89-f5980d7bed78\") " pod="openstack/barbican-worker-5d855b58d9-fzhd2" Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.246835 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5eb4f86-4f65-41b4-8694-279c44c08491-config-data\") pod \"barbican-keystone-listener-979964fb-8vrlp\" (UID: \"a5eb4f86-4f65-41b4-8694-279c44c08491\") " pod="openstack/barbican-keystone-listener-979964fb-8vrlp" Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.246849 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/83de1ea1-f46a-43ab-9a89-f5980d7bed78-config-data-custom\") pod \"barbican-worker-5d855b58d9-fzhd2\" (UID: \"83de1ea1-f46a-43ab-9a89-f5980d7bed78\") " pod="openstack/barbican-worker-5d855b58d9-fzhd2" Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.246870 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a5eb4f86-4f65-41b4-8694-279c44c08491-logs\") pod \"barbican-keystone-listener-979964fb-8vrlp\" (UID: \"a5eb4f86-4f65-41b4-8694-279c44c08491\") " pod="openstack/barbican-keystone-listener-979964fb-8vrlp" Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.287460 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-gg57w"] Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.335488 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-764c845c6f-lbzq2"] Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.347978 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5eb4f86-4f65-41b4-8694-279c44c08491-combined-ca-bundle\") pod \"barbican-keystone-listener-979964fb-8vrlp\" (UID: \"a5eb4f86-4f65-41b4-8694-279c44c08491\") " pod="openstack/barbican-keystone-listener-979964fb-8vrlp" Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.348032 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzblc\" (UniqueName: \"kubernetes.io/projected/83de1ea1-f46a-43ab-9a89-f5980d7bed78-kube-api-access-lzblc\") pod \"barbican-worker-5d855b58d9-fzhd2\" (UID: \"83de1ea1-f46a-43ab-9a89-f5980d7bed78\") " 
pod="openstack/barbican-worker-5d855b58d9-fzhd2" Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.348052 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5eb4f86-4f65-41b4-8694-279c44c08491-config-data\") pod \"barbican-keystone-listener-979964fb-8vrlp\" (UID: \"a5eb4f86-4f65-41b4-8694-279c44c08491\") " pod="openstack/barbican-keystone-listener-979964fb-8vrlp" Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.348071 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/83de1ea1-f46a-43ab-9a89-f5980d7bed78-config-data-custom\") pod \"barbican-worker-5d855b58d9-fzhd2\" (UID: \"83de1ea1-f46a-43ab-9a89-f5980d7bed78\") " pod="openstack/barbican-worker-5d855b58d9-fzhd2" Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.348107 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a5eb4f86-4f65-41b4-8694-279c44c08491-logs\") pod \"barbican-keystone-listener-979964fb-8vrlp\" (UID: \"a5eb4f86-4f65-41b4-8694-279c44c08491\") " pod="openstack/barbican-keystone-listener-979964fb-8vrlp" Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.348130 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a5eb4f86-4f65-41b4-8694-279c44c08491-config-data-custom\") pod \"barbican-keystone-listener-979964fb-8vrlp\" (UID: \"a5eb4f86-4f65-41b4-8694-279c44c08491\") " pod="openstack/barbican-keystone-listener-979964fb-8vrlp" Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.348158 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83de1ea1-f46a-43ab-9a89-f5980d7bed78-combined-ca-bundle\") pod \"barbican-worker-5d855b58d9-fzhd2\" (UID: \"83de1ea1-f46a-43ab-9a89-f5980d7bed78\") " pod="openstack/barbican-worker-5d855b58d9-fzhd2" Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.348190 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27nwb\" (UniqueName: \"kubernetes.io/projected/a5eb4f86-4f65-41b4-8694-279c44c08491-kube-api-access-27nwb\") pod \"barbican-keystone-listener-979964fb-8vrlp\" (UID: \"a5eb4f86-4f65-41b4-8694-279c44c08491\") " pod="openstack/barbican-keystone-listener-979964fb-8vrlp" Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.348230 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83de1ea1-f46a-43ab-9a89-f5980d7bed78-logs\") pod \"barbican-worker-5d855b58d9-fzhd2\" (UID: \"83de1ea1-f46a-43ab-9a89-f5980d7bed78\") " pod="openstack/barbican-worker-5d855b58d9-fzhd2" Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.348248 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83de1ea1-f46a-43ab-9a89-f5980d7bed78-config-data\") pod \"barbican-worker-5d855b58d9-fzhd2\" (UID: \"83de1ea1-f46a-43ab-9a89-f5980d7bed78\") " pod="openstack/barbican-worker-5d855b58d9-fzhd2" Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.360566 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-mff22"] Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.361812 4755 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a5eb4f86-4f65-41b4-8694-279c44c08491-logs\") pod \"barbican-keystone-listener-979964fb-8vrlp\" (UID: \"a5eb4f86-4f65-41b4-8694-279c44c08491\") " pod="openstack/barbican-keystone-listener-979964fb-8vrlp" Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.362094 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83de1ea1-f46a-43ab-9a89-f5980d7bed78-logs\") pod \"barbican-worker-5d855b58d9-fzhd2\" (UID: \"83de1ea1-f46a-43ab-9a89-f5980d7bed78\") " pod="openstack/barbican-worker-5d855b58d9-fzhd2" Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.363865 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-mff22" Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.373124 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-mff22"] Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.373920 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5eb4f86-4f65-41b4-8694-279c44c08491-config-data\") pod \"barbican-keystone-listener-979964fb-8vrlp\" (UID: \"a5eb4f86-4f65-41b4-8694-279c44c08491\") " pod="openstack/barbican-keystone-listener-979964fb-8vrlp" Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.389962 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83de1ea1-f46a-43ab-9a89-f5980d7bed78-combined-ca-bundle\") pod \"barbican-worker-5d855b58d9-fzhd2\" (UID: \"83de1ea1-f46a-43ab-9a89-f5980d7bed78\") " pod="openstack/barbican-worker-5d855b58d9-fzhd2" Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.390924 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83de1ea1-f46a-43ab-9a89-f5980d7bed78-config-data\") pod \"barbican-worker-5d855b58d9-fzhd2\" (UID: \"83de1ea1-f46a-43ab-9a89-f5980d7bed78\") " pod="openstack/barbican-worker-5d855b58d9-fzhd2" Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.391300 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/83de1ea1-f46a-43ab-9a89-f5980d7bed78-config-data-custom\") pod \"barbican-worker-5d855b58d9-fzhd2\" (UID: \"83de1ea1-f46a-43ab-9a89-f5980d7bed78\") " pod="openstack/barbican-worker-5d855b58d9-fzhd2" Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.401427 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5eb4f86-4f65-41b4-8694-279c44c08491-combined-ca-bundle\") pod \"barbican-keystone-listener-979964fb-8vrlp\" (UID: \"a5eb4f86-4f65-41b4-8694-279c44c08491\") " pod="openstack/barbican-keystone-listener-979964fb-8vrlp" Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.402779 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a5eb4f86-4f65-41b4-8694-279c44c08491-config-data-custom\") pod \"barbican-keystone-listener-979964fb-8vrlp\" (UID: \"a5eb4f86-4f65-41b4-8694-279c44c08491\") " pod="openstack/barbican-keystone-listener-979964fb-8vrlp" Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.406333 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzblc\" (UniqueName: 
\"kubernetes.io/projected/83de1ea1-f46a-43ab-9a89-f5980d7bed78-kube-api-access-lzblc\") pod \"barbican-worker-5d855b58d9-fzhd2\" (UID: \"83de1ea1-f46a-43ab-9a89-f5980d7bed78\") " pod="openstack/barbican-worker-5d855b58d9-fzhd2" Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.479030 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27nwb\" (UniqueName: \"kubernetes.io/projected/a5eb4f86-4f65-41b4-8694-279c44c08491-kube-api-access-27nwb\") pod \"barbican-keystone-listener-979964fb-8vrlp\" (UID: \"a5eb4f86-4f65-41b4-8694-279c44c08491\") " pod="openstack/barbican-keystone-listener-979964fb-8vrlp" Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.479391 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5d855b58d9-fzhd2" Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.480780 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16a3f983-2b37-4dfc-944d-c959d5824b69-config\") pod \"dnsmasq-dns-848cf88cfc-mff22\" (UID: \"16a3f983-2b37-4dfc-944d-c959d5824b69\") " pod="openstack/dnsmasq-dns-848cf88cfc-mff22" Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.480852 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16a3f983-2b37-4dfc-944d-c959d5824b69-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-mff22\" (UID: \"16a3f983-2b37-4dfc-944d-c959d5824b69\") " pod="openstack/dnsmasq-dns-848cf88cfc-mff22" Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.480906 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/16a3f983-2b37-4dfc-944d-c959d5824b69-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-mff22\" (UID: \"16a3f983-2b37-4dfc-944d-c959d5824b69\") " pod="openstack/dnsmasq-dns-848cf88cfc-mff22" Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.480961 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/16a3f983-2b37-4dfc-944d-c959d5824b69-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-mff22\" (UID: \"16a3f983-2b37-4dfc-944d-c959d5824b69\") " pod="openstack/dnsmasq-dns-848cf88cfc-mff22" Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.481018 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpg72\" (UniqueName: \"kubernetes.io/projected/16a3f983-2b37-4dfc-944d-c959d5824b69-kube-api-access-bpg72\") pod \"dnsmasq-dns-848cf88cfc-mff22\" (UID: \"16a3f983-2b37-4dfc-944d-c959d5824b69\") " pod="openstack/dnsmasq-dns-848cf88cfc-mff22" Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.481133 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/16a3f983-2b37-4dfc-944d-c959d5824b69-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-mff22\" (UID: \"16a3f983-2b37-4dfc-944d-c959d5824b69\") " pod="openstack/dnsmasq-dns-848cf88cfc-mff22" Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.521530 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-699d79cf4-kwqcl"] Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.523423 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-699d79cf4-kwqcl" Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.533854 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.576111 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-699d79cf4-kwqcl"] Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.585688 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16a3f983-2b37-4dfc-944d-c959d5824b69-config\") pod \"dnsmasq-dns-848cf88cfc-mff22\" (UID: \"16a3f983-2b37-4dfc-944d-c959d5824b69\") " pod="openstack/dnsmasq-dns-848cf88cfc-mff22" Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.585770 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16a3f983-2b37-4dfc-944d-c959d5824b69-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-mff22\" (UID: \"16a3f983-2b37-4dfc-944d-c959d5824b69\") " pod="openstack/dnsmasq-dns-848cf88cfc-mff22" Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.585810 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/16a3f983-2b37-4dfc-944d-c959d5824b69-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-mff22\" (UID: \"16a3f983-2b37-4dfc-944d-c959d5824b69\") " pod="openstack/dnsmasq-dns-848cf88cfc-mff22" Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.585852 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/16a3f983-2b37-4dfc-944d-c959d5824b69-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-mff22\" (UID: \"16a3f983-2b37-4dfc-944d-c959d5824b69\") " pod="openstack/dnsmasq-dns-848cf88cfc-mff22" Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.585892 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpg72\" (UniqueName: \"kubernetes.io/projected/16a3f983-2b37-4dfc-944d-c959d5824b69-kube-api-access-bpg72\") pod \"dnsmasq-dns-848cf88cfc-mff22\" (UID: \"16a3f983-2b37-4dfc-944d-c959d5824b69\") " pod="openstack/dnsmasq-dns-848cf88cfc-mff22" Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.585983 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/16a3f983-2b37-4dfc-944d-c959d5824b69-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-mff22\" (UID: \"16a3f983-2b37-4dfc-944d-c959d5824b69\") " pod="openstack/dnsmasq-dns-848cf88cfc-mff22" Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.586988 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/16a3f983-2b37-4dfc-944d-c959d5824b69-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-mff22\" (UID: \"16a3f983-2b37-4dfc-944d-c959d5824b69\") " pod="openstack/dnsmasq-dns-848cf88cfc-mff22" Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.594163 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/16a3f983-2b37-4dfc-944d-c959d5824b69-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-mff22\" (UID: \"16a3f983-2b37-4dfc-944d-c959d5824b69\") " pod="openstack/dnsmasq-dns-848cf88cfc-mff22" Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 
15:44:12.594618 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/16a3f983-2b37-4dfc-944d-c959d5824b69-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-mff22\" (UID: \"16a3f983-2b37-4dfc-944d-c959d5824b69\") " pod="openstack/dnsmasq-dns-848cf88cfc-mff22" Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.594765 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16a3f983-2b37-4dfc-944d-c959d5824b69-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-mff22\" (UID: \"16a3f983-2b37-4dfc-944d-c959d5824b69\") " pod="openstack/dnsmasq-dns-848cf88cfc-mff22" Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.616369 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16a3f983-2b37-4dfc-944d-c959d5824b69-config\") pod \"dnsmasq-dns-848cf88cfc-mff22\" (UID: \"16a3f983-2b37-4dfc-944d-c959d5824b69\") " pod="openstack/dnsmasq-dns-848cf88cfc-mff22" Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.641308 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpg72\" (UniqueName: \"kubernetes.io/projected/16a3f983-2b37-4dfc-944d-c959d5824b69-kube-api-access-bpg72\") pod \"dnsmasq-dns-848cf88cfc-mff22\" (UID: \"16a3f983-2b37-4dfc-944d-c959d5824b69\") " pod="openstack/dnsmasq-dns-848cf88cfc-mff22" Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.663284 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-979964fb-8vrlp" Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.688082 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af70a62a-214d-4ed0-8508-de8e52dd68a3-config-data\") pod \"barbican-api-699d79cf4-kwqcl\" (UID: \"af70a62a-214d-4ed0-8508-de8e52dd68a3\") " pod="openstack/barbican-api-699d79cf4-kwqcl" Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.688143 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af70a62a-214d-4ed0-8508-de8e52dd68a3-combined-ca-bundle\") pod \"barbican-api-699d79cf4-kwqcl\" (UID: \"af70a62a-214d-4ed0-8508-de8e52dd68a3\") " pod="openstack/barbican-api-699d79cf4-kwqcl" Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.688236 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af70a62a-214d-4ed0-8508-de8e52dd68a3-logs\") pod \"barbican-api-699d79cf4-kwqcl\" (UID: \"af70a62a-214d-4ed0-8508-de8e52dd68a3\") " pod="openstack/barbican-api-699d79cf4-kwqcl" Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.688253 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9f5h\" (UniqueName: \"kubernetes.io/projected/af70a62a-214d-4ed0-8508-de8e52dd68a3-kube-api-access-c9f5h\") pod \"barbican-api-699d79cf4-kwqcl\" (UID: \"af70a62a-214d-4ed0-8508-de8e52dd68a3\") " pod="openstack/barbican-api-699d79cf4-kwqcl" Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.688278 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/af70a62a-214d-4ed0-8508-de8e52dd68a3-config-data-custom\") pod \"barbican-api-699d79cf4-kwqcl\" (UID: \"af70a62a-214d-4ed0-8508-de8e52dd68a3\") " pod="openstack/barbican-api-699d79cf4-kwqcl" Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.777949 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-mff22" Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.793180 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/af70a62a-214d-4ed0-8508-de8e52dd68a3-config-data-custom\") pod \"barbican-api-699d79cf4-kwqcl\" (UID: \"af70a62a-214d-4ed0-8508-de8e52dd68a3\") " pod="openstack/barbican-api-699d79cf4-kwqcl" Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.793518 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af70a62a-214d-4ed0-8508-de8e52dd68a3-config-data\") pod \"barbican-api-699d79cf4-kwqcl\" (UID: \"af70a62a-214d-4ed0-8508-de8e52dd68a3\") " pod="openstack/barbican-api-699d79cf4-kwqcl" Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.793582 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af70a62a-214d-4ed0-8508-de8e52dd68a3-combined-ca-bundle\") pod \"barbican-api-699d79cf4-kwqcl\" (UID: \"af70a62a-214d-4ed0-8508-de8e52dd68a3\") " pod="openstack/barbican-api-699d79cf4-kwqcl" Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.793729 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af70a62a-214d-4ed0-8508-de8e52dd68a3-logs\") pod \"barbican-api-699d79cf4-kwqcl\" (UID: \"af70a62a-214d-4ed0-8508-de8e52dd68a3\") " pod="openstack/barbican-api-699d79cf4-kwqcl" Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.793759 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9f5h\" (UniqueName: \"kubernetes.io/projected/af70a62a-214d-4ed0-8508-de8e52dd68a3-kube-api-access-c9f5h\") pod \"barbican-api-699d79cf4-kwqcl\" (UID: \"af70a62a-214d-4ed0-8508-de8e52dd68a3\") " pod="openstack/barbican-api-699d79cf4-kwqcl" Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.797936 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af70a62a-214d-4ed0-8508-de8e52dd68a3-logs\") pod \"barbican-api-699d79cf4-kwqcl\" (UID: \"af70a62a-214d-4ed0-8508-de8e52dd68a3\") " pod="openstack/barbican-api-699d79cf4-kwqcl" Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.800724 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/af70a62a-214d-4ed0-8508-de8e52dd68a3-config-data-custom\") pod \"barbican-api-699d79cf4-kwqcl\" (UID: \"af70a62a-214d-4ed0-8508-de8e52dd68a3\") " pod="openstack/barbican-api-699d79cf4-kwqcl" Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.806722 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af70a62a-214d-4ed0-8508-de8e52dd68a3-config-data\") pod \"barbican-api-699d79cf4-kwqcl\" (UID: \"af70a62a-214d-4ed0-8508-de8e52dd68a3\") " pod="openstack/barbican-api-699d79cf4-kwqcl" Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.807274 4755 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af70a62a-214d-4ed0-8508-de8e52dd68a3-combined-ca-bundle\") pod \"barbican-api-699d79cf4-kwqcl\" (UID: \"af70a62a-214d-4ed0-8508-de8e52dd68a3\") " pod="openstack/barbican-api-699d79cf4-kwqcl" Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.815454 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9f5h\" (UniqueName: \"kubernetes.io/projected/af70a62a-214d-4ed0-8508-de8e52dd68a3-kube-api-access-c9f5h\") pod \"barbican-api-699d79cf4-kwqcl\" (UID: \"af70a62a-214d-4ed0-8508-de8e52dd68a3\") " pod="openstack/barbican-api-699d79cf4-kwqcl" Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.926668 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-699d79cf4-kwqcl" Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.955916 4755 generic.go:334] "Generic (PLEG): container finished" podID="d79c58e8-f9d1-4fde-945d-a6e175ec8fee" containerID="bbb53cf639bf3e40527ad702f68813c89df04d8f991131a2956a2546fffdc70c" exitCode=0 Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.955999 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d79c58e8-f9d1-4fde-945d-a6e175ec8fee","Type":"ContainerDied","Data":"bbb53cf639bf3e40527ad702f68813c89df04d8f991131a2956a2546fffdc70c"} Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.962094 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-764c845c6f-lbzq2" event={"ID":"73f09975-30b1-46a8-a34e-ccb4683adf6c","Type":"ContainerStarted","Data":"4a4f246934fc6329a49ccc00e8ce52a40797c8b777dbf3857f2b906c8cee3d83"} Dec 10 15:44:12 crc kubenswrapper[4755]: I1210 15:44:12.963840 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-75b8ff9576-fcxhh" event={"ID":"c3c11b73-3e0e-4c7e-ac2f-943e44b99d92","Type":"ContainerStarted","Data":"c168b8b63413509f8fdc28536703ca905a2a564fce9e0e63dc43c731bd6ff12f"} Dec 10 15:44:13 crc kubenswrapper[4755]: I1210 15:44:13.389007 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5d855b58d9-fzhd2"] Dec 10 15:44:13 crc kubenswrapper[4755]: W1210 15:44:13.407512 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda5eb4f86_4f65_41b4_8694_279c44c08491.slice/crio-c77700b0a6424797ac1589b2fa669afb2ed47ad7a45f06d4805c6783277c46d7 WatchSource:0}: Error finding container c77700b0a6424797ac1589b2fa669afb2ed47ad7a45f06d4805c6783277c46d7: Status 404 returned error can't find the container with id c77700b0a6424797ac1589b2fa669afb2ed47ad7a45f06d4805c6783277c46d7 Dec 10 15:44:13 crc kubenswrapper[4755]: I1210 15:44:13.416261 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-979964fb-8vrlp"] Dec 10 15:44:13 crc kubenswrapper[4755]: W1210 15:44:13.632651 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaf70a62a_214d_4ed0_8508_de8e52dd68a3.slice/crio-f1136cf7065c0e75e869166f287e4df0e238bcc7e6639e40191c427c79e5d235 WatchSource:0}: Error finding container f1136cf7065c0e75e869166f287e4df0e238bcc7e6639e40191c427c79e5d235: Status 404 returned error can't find the container with id f1136cf7065c0e75e869166f287e4df0e238bcc7e6639e40191c427c79e5d235 Dec 10 15:44:13 crc kubenswrapper[4755]: I1210 15:44:13.633618 
4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-699d79cf4-kwqcl"] Dec 10 15:44:13 crc kubenswrapper[4755]: I1210 15:44:13.661598 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-mff22"] Dec 10 15:44:13 crc kubenswrapper[4755]: I1210 15:44:13.974496 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-979964fb-8vrlp" event={"ID":"a5eb4f86-4f65-41b4-8694-279c44c08491","Type":"ContainerStarted","Data":"c77700b0a6424797ac1589b2fa669afb2ed47ad7a45f06d4805c6783277c46d7"} Dec 10 15:44:13 crc kubenswrapper[4755]: I1210 15:44:13.975610 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-mff22" event={"ID":"16a3f983-2b37-4dfc-944d-c959d5824b69","Type":"ContainerStarted","Data":"d5aa303176b20a2ffe0de18e62f812a232e0f2e1ddce37044d6564e75a9449ca"} Dec 10 15:44:13 crc kubenswrapper[4755]: I1210 15:44:13.976577 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-699d79cf4-kwqcl" event={"ID":"af70a62a-214d-4ed0-8508-de8e52dd68a3","Type":"ContainerStarted","Data":"f1136cf7065c0e75e869166f287e4df0e238bcc7e6639e40191c427c79e5d235"} Dec 10 15:44:13 crc kubenswrapper[4755]: I1210 15:44:13.977800 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5d855b58d9-fzhd2" event={"ID":"83de1ea1-f46a-43ab-9a89-f5980d7bed78","Type":"ContainerStarted","Data":"3d0c947afc2c3e04bc6ab2b06b5850416103588b50fe7b378fc9506262fd76b3"} Dec 10 15:44:15 crc kubenswrapper[4755]: I1210 15:44:15.037099 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-55f9947ffb-mpljd"] Dec 10 15:44:15 crc kubenswrapper[4755]: I1210 15:44:15.040161 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-55f9947ffb-mpljd" Dec 10 15:44:15 crc kubenswrapper[4755]: I1210 15:44:15.046788 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Dec 10 15:44:15 crc kubenswrapper[4755]: I1210 15:44:15.047062 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Dec 10 15:44:15 crc kubenswrapper[4755]: I1210 15:44:15.049502 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-55f9947ffb-mpljd"] Dec 10 15:44:15 crc kubenswrapper[4755]: I1210 15:44:15.156575 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/add12461-ad93-468e-9e20-de46b26414a0-logs\") pod \"barbican-api-55f9947ffb-mpljd\" (UID: \"add12461-ad93-468e-9e20-de46b26414a0\") " pod="openstack/barbican-api-55f9947ffb-mpljd" Dec 10 15:44:15 crc kubenswrapper[4755]: I1210 15:44:15.156618 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/add12461-ad93-468e-9e20-de46b26414a0-config-data-custom\") pod \"barbican-api-55f9947ffb-mpljd\" (UID: \"add12461-ad93-468e-9e20-de46b26414a0\") " pod="openstack/barbican-api-55f9947ffb-mpljd" Dec 10 15:44:15 crc kubenswrapper[4755]: I1210 15:44:15.156639 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/add12461-ad93-468e-9e20-de46b26414a0-combined-ca-bundle\") pod \"barbican-api-55f9947ffb-mpljd\" (UID: \"add12461-ad93-468e-9e20-de46b26414a0\") " pod="openstack/barbican-api-55f9947ffb-mpljd" Dec 10 15:44:15 crc kubenswrapper[4755]: I1210 15:44:15.156697 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckldz\" (UniqueName: \"kubernetes.io/projected/add12461-ad93-468e-9e20-de46b26414a0-kube-api-access-ckldz\") pod \"barbican-api-55f9947ffb-mpljd\" (UID: \"add12461-ad93-468e-9e20-de46b26414a0\") " pod="openstack/barbican-api-55f9947ffb-mpljd" Dec 10 15:44:15 crc kubenswrapper[4755]: I1210 15:44:15.156720 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/add12461-ad93-468e-9e20-de46b26414a0-config-data\") pod \"barbican-api-55f9947ffb-mpljd\" (UID: \"add12461-ad93-468e-9e20-de46b26414a0\") " pod="openstack/barbican-api-55f9947ffb-mpljd" Dec 10 15:44:15 crc kubenswrapper[4755]: I1210 15:44:15.156794 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/add12461-ad93-468e-9e20-de46b26414a0-internal-tls-certs\") pod \"barbican-api-55f9947ffb-mpljd\" (UID: \"add12461-ad93-468e-9e20-de46b26414a0\") " pod="openstack/barbican-api-55f9947ffb-mpljd" Dec 10 15:44:15 crc kubenswrapper[4755]: I1210 15:44:15.156861 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/add12461-ad93-468e-9e20-de46b26414a0-public-tls-certs\") pod \"barbican-api-55f9947ffb-mpljd\" (UID: \"add12461-ad93-468e-9e20-de46b26414a0\") " pod="openstack/barbican-api-55f9947ffb-mpljd" Dec 10 15:44:15 crc kubenswrapper[4755]: I1210 15:44:15.259314 4755 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/add12461-ad93-468e-9e20-de46b26414a0-internal-tls-certs\") pod \"barbican-api-55f9947ffb-mpljd\" (UID: \"add12461-ad93-468e-9e20-de46b26414a0\") " pod="openstack/barbican-api-55f9947ffb-mpljd" Dec 10 15:44:15 crc kubenswrapper[4755]: I1210 15:44:15.259550 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/add12461-ad93-468e-9e20-de46b26414a0-public-tls-certs\") pod \"barbican-api-55f9947ffb-mpljd\" (UID: \"add12461-ad93-468e-9e20-de46b26414a0\") " pod="openstack/barbican-api-55f9947ffb-mpljd" Dec 10 15:44:15 crc kubenswrapper[4755]: I1210 15:44:15.259662 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/add12461-ad93-468e-9e20-de46b26414a0-logs\") pod \"barbican-api-55f9947ffb-mpljd\" (UID: \"add12461-ad93-468e-9e20-de46b26414a0\") " pod="openstack/barbican-api-55f9947ffb-mpljd" Dec 10 15:44:15 crc kubenswrapper[4755]: I1210 15:44:15.259741 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/add12461-ad93-468e-9e20-de46b26414a0-config-data-custom\") pod \"barbican-api-55f9947ffb-mpljd\" (UID: \"add12461-ad93-468e-9e20-de46b26414a0\") " pod="openstack/barbican-api-55f9947ffb-mpljd" Dec 10 15:44:15 crc kubenswrapper[4755]: I1210 15:44:15.259801 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/add12461-ad93-468e-9e20-de46b26414a0-combined-ca-bundle\") pod \"barbican-api-55f9947ffb-mpljd\" (UID: \"add12461-ad93-468e-9e20-de46b26414a0\") " pod="openstack/barbican-api-55f9947ffb-mpljd" Dec 10 15:44:15 crc kubenswrapper[4755]: I1210 15:44:15.259946 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckldz\" (UniqueName: \"kubernetes.io/projected/add12461-ad93-468e-9e20-de46b26414a0-kube-api-access-ckldz\") pod \"barbican-api-55f9947ffb-mpljd\" (UID: \"add12461-ad93-468e-9e20-de46b26414a0\") " pod="openstack/barbican-api-55f9947ffb-mpljd" Dec 10 15:44:15 crc kubenswrapper[4755]: I1210 15:44:15.260036 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/add12461-ad93-468e-9e20-de46b26414a0-config-data\") pod \"barbican-api-55f9947ffb-mpljd\" (UID: \"add12461-ad93-468e-9e20-de46b26414a0\") " pod="openstack/barbican-api-55f9947ffb-mpljd" Dec 10 15:44:15 crc kubenswrapper[4755]: I1210 15:44:15.260510 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/add12461-ad93-468e-9e20-de46b26414a0-logs\") pod \"barbican-api-55f9947ffb-mpljd\" (UID: \"add12461-ad93-468e-9e20-de46b26414a0\") " pod="openstack/barbican-api-55f9947ffb-mpljd" Dec 10 15:44:15 crc kubenswrapper[4755]: I1210 15:44:15.265606 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/add12461-ad93-468e-9e20-de46b26414a0-internal-tls-certs\") pod \"barbican-api-55f9947ffb-mpljd\" (UID: \"add12461-ad93-468e-9e20-de46b26414a0\") " pod="openstack/barbican-api-55f9947ffb-mpljd" Dec 10 15:44:15 crc kubenswrapper[4755]: I1210 15:44:15.266801 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/add12461-ad93-468e-9e20-de46b26414a0-config-data\") pod \"barbican-api-55f9947ffb-mpljd\" (UID: \"add12461-ad93-468e-9e20-de46b26414a0\") " pod="openstack/barbican-api-55f9947ffb-mpljd" Dec 10 15:44:15 crc kubenswrapper[4755]: I1210 15:44:15.268099 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/add12461-ad93-468e-9e20-de46b26414a0-combined-ca-bundle\") pod \"barbican-api-55f9947ffb-mpljd\" (UID: \"add12461-ad93-468e-9e20-de46b26414a0\") " pod="openstack/barbican-api-55f9947ffb-mpljd" Dec 10 15:44:15 crc kubenswrapper[4755]: I1210 15:44:15.278766 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/add12461-ad93-468e-9e20-de46b26414a0-public-tls-certs\") pod \"barbican-api-55f9947ffb-mpljd\" (UID: \"add12461-ad93-468e-9e20-de46b26414a0\") " pod="openstack/barbican-api-55f9947ffb-mpljd" Dec 10 15:44:15 crc kubenswrapper[4755]: I1210 15:44:15.279340 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/add12461-ad93-468e-9e20-de46b26414a0-config-data-custom\") pod \"barbican-api-55f9947ffb-mpljd\" (UID: \"add12461-ad93-468e-9e20-de46b26414a0\") " pod="openstack/barbican-api-55f9947ffb-mpljd" Dec 10 15:44:15 crc kubenswrapper[4755]: I1210 15:44:15.285082 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckldz\" (UniqueName: \"kubernetes.io/projected/add12461-ad93-468e-9e20-de46b26414a0-kube-api-access-ckldz\") pod \"barbican-api-55f9947ffb-mpljd\" (UID: \"add12461-ad93-468e-9e20-de46b26414a0\") " pod="openstack/barbican-api-55f9947ffb-mpljd" Dec 10 15:44:15 crc kubenswrapper[4755]: I1210 15:44:15.364373 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-55f9947ffb-mpljd" Dec 10 15:44:15 crc kubenswrapper[4755]: I1210 15:44:15.872427 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-55f9947ffb-mpljd"] Dec 10 15:44:15 crc kubenswrapper[4755]: W1210 15:44:15.884816 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podadd12461_ad93_468e_9e20_de46b26414a0.slice/crio-b3caf0f479c8fc9ef0a1adb4b79c2a6fd2088c7323dea83013ec8b6e34cccecf WatchSource:0}: Error finding container b3caf0f479c8fc9ef0a1adb4b79c2a6fd2088c7323dea83013ec8b6e34cccecf: Status 404 returned error can't find the container with id b3caf0f479c8fc9ef0a1adb4b79c2a6fd2088c7323dea83013ec8b6e34cccecf Dec 10 15:44:16 crc kubenswrapper[4755]: I1210 15:44:16.016786 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-75b8ff9576-fcxhh" event={"ID":"c3c11b73-3e0e-4c7e-ac2f-943e44b99d92","Type":"ContainerStarted","Data":"6e6ea2929442b925caa4d44b2c4374b1cc6eb5115c4cf77eb367486238f4acd2"} Dec 10 15:44:16 crc kubenswrapper[4755]: I1210 15:44:16.022688 4755 generic.go:334] "Generic (PLEG): container finished" podID="16a3f983-2b37-4dfc-944d-c959d5824b69" containerID="e980033572aa8e12de21370e135824f06455245f54bfdbce3d17e18d578b3250" exitCode=0 Dec 10 15:44:16 crc kubenswrapper[4755]: I1210 15:44:16.022787 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-mff22" event={"ID":"16a3f983-2b37-4dfc-944d-c959d5824b69","Type":"ContainerDied","Data":"e980033572aa8e12de21370e135824f06455245f54bfdbce3d17e18d578b3250"} Dec 10 15:44:16 crc kubenswrapper[4755]: I1210 15:44:16.025567 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-gg57w" event={"ID":"d03214c3-1a52-44fb-aa0d-45de3de8ff44","Type":"ContainerStarted","Data":"a35bfd696ac236809c06db16dd58362ac3efb1ac2374187b94c53e9671675607"} Dec 10 15:44:16 crc kubenswrapper[4755]: I1210 15:44:16.025755 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b7b667979-gg57w" podUID="d03214c3-1a52-44fb-aa0d-45de3de8ff44" containerName="dnsmasq-dns" containerID="cri-o://a35bfd696ac236809c06db16dd58362ac3efb1ac2374187b94c53e9671675607" gracePeriod=10 Dec 10 15:44:16 crc kubenswrapper[4755]: I1210 15:44:16.025867 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b7b667979-gg57w" Dec 10 15:44:16 crc kubenswrapper[4755]: I1210 15:44:16.029566 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-764c845c6f-lbzq2" event={"ID":"73f09975-30b1-46a8-a34e-ccb4683adf6c","Type":"ContainerStarted","Data":"347e5da64791d7dc6e61caaf93a29a5377d5221c0741d34ab4dc7f3896caa58f"} Dec 10 15:44:16 crc kubenswrapper[4755]: I1210 15:44:16.029713 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-764c845c6f-lbzq2" Dec 10 15:44:16 crc kubenswrapper[4755]: I1210 15:44:16.033109 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-699d79cf4-kwqcl" event={"ID":"af70a62a-214d-4ed0-8508-de8e52dd68a3","Type":"ContainerStarted","Data":"c7b3bb22d66f88d472d636b1ffd7b2a9e19fc9627f1e47c7c907fc690e35514c"} Dec 10 15:44:16 crc kubenswrapper[4755]: I1210 15:44:16.047207 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5f46cf586c-brqwd" 
event={"ID":"efbad6ea-87b6-40ec-b2a6-542e31d18e69","Type":"ContainerStarted","Data":"1782c85f452a026aef43636237b3fe7326d07e15773dd4cda7b3ada8a0a069f4"} Dec 10 15:44:16 crc kubenswrapper[4755]: I1210 15:44:16.051518 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-55f9947ffb-mpljd" event={"ID":"add12461-ad93-468e-9e20-de46b26414a0","Type":"ContainerStarted","Data":"b3caf0f479c8fc9ef0a1adb4b79c2a6fd2088c7323dea83013ec8b6e34cccecf"} Dec 10 15:44:16 crc kubenswrapper[4755]: I1210 15:44:16.073818 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6d5449c6d4-5rw2f" event={"ID":"7d152fd0-fbbf-4c7b-874a-169860ee9075","Type":"ContainerStarted","Data":"4e05778ed82b900074c8fe7e096c3ce7cd0b6492a587cbaa2f1c4f064a7c250e"} Dec 10 15:44:16 crc kubenswrapper[4755]: I1210 15:44:16.075880 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6d5449c6d4-5rw2f" Dec 10 15:44:16 crc kubenswrapper[4755]: I1210 15:44:16.104484 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-764c845c6f-lbzq2" podStartSLOduration=5.104439446 podStartE2EDuration="5.104439446s" podCreationTimestamp="2025-12-10 15:44:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:44:16.076289082 +0000 UTC m=+1252.677172714" watchObservedRunningTime="2025-12-10 15:44:16.104439446 +0000 UTC m=+1252.705323078" Dec 10 15:44:16 crc kubenswrapper[4755]: I1210 15:44:16.123305 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b7b667979-gg57w" podStartSLOduration=11.123288547 podStartE2EDuration="11.123288547s" podCreationTimestamp="2025-12-10 15:44:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:44:16.107293113 +0000 UTC m=+1252.708176775" watchObservedRunningTime="2025-12-10 15:44:16.123288547 +0000 UTC m=+1252.724172189" Dec 10 15:44:16 crc kubenswrapper[4755]: I1210 15:44:16.148985 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6d5449c6d4-5rw2f" podStartSLOduration=10.148961065 podStartE2EDuration="10.148961065s" podCreationTimestamp="2025-12-10 15:44:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:44:16.136229279 +0000 UTC m=+1252.737112911" watchObservedRunningTime="2025-12-10 15:44:16.148961065 +0000 UTC m=+1252.749844697" Dec 10 15:44:16 crc kubenswrapper[4755]: I1210 15:44:16.206810 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 10 15:44:16 crc kubenswrapper[4755]: I1210 15:44:16.297373 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d79c58e8-f9d1-4fde-945d-a6e175ec8fee-scripts\") pod \"d79c58e8-f9d1-4fde-945d-a6e175ec8fee\" (UID: \"d79c58e8-f9d1-4fde-945d-a6e175ec8fee\") " Dec 10 15:44:16 crc kubenswrapper[4755]: I1210 15:44:16.297479 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d79c58e8-f9d1-4fde-945d-a6e175ec8fee-httpd-run\") pod \"d79c58e8-f9d1-4fde-945d-a6e175ec8fee\" (UID: \"d79c58e8-f9d1-4fde-945d-a6e175ec8fee\") " Dec 10 15:44:16 crc kubenswrapper[4755]: I1210 15:44:16.297533 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d79c58e8-f9d1-4fde-945d-a6e175ec8fee-logs\") pod \"d79c58e8-f9d1-4fde-945d-a6e175ec8fee\" (UID: \"d79c58e8-f9d1-4fde-945d-a6e175ec8fee\") " Dec 10 15:44:16 crc kubenswrapper[4755]: I1210 15:44:16.297649 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fad4194d-6d90-49f1-a017-ae4167f764c9\") pod \"d79c58e8-f9d1-4fde-945d-a6e175ec8fee\" (UID: \"d79c58e8-f9d1-4fde-945d-a6e175ec8fee\") " Dec 10 15:44:16 crc kubenswrapper[4755]: I1210 15:44:16.297684 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d79c58e8-f9d1-4fde-945d-a6e175ec8fee-combined-ca-bundle\") pod \"d79c58e8-f9d1-4fde-945d-a6e175ec8fee\" (UID: \"d79c58e8-f9d1-4fde-945d-a6e175ec8fee\") " Dec 10 15:44:16 crc kubenswrapper[4755]: I1210 15:44:16.297703 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d79c58e8-f9d1-4fde-945d-a6e175ec8fee-config-data\") pod \"d79c58e8-f9d1-4fde-945d-a6e175ec8fee\" (UID: \"d79c58e8-f9d1-4fde-945d-a6e175ec8fee\") " Dec 10 15:44:16 crc kubenswrapper[4755]: I1210 15:44:16.297754 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnw8q\" (UniqueName: \"kubernetes.io/projected/d79c58e8-f9d1-4fde-945d-a6e175ec8fee-kube-api-access-tnw8q\") pod \"d79c58e8-f9d1-4fde-945d-a6e175ec8fee\" (UID: \"d79c58e8-f9d1-4fde-945d-a6e175ec8fee\") " Dec 10 15:44:16 crc kubenswrapper[4755]: I1210 15:44:16.299566 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d79c58e8-f9d1-4fde-945d-a6e175ec8fee-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "d79c58e8-f9d1-4fde-945d-a6e175ec8fee" (UID: "d79c58e8-f9d1-4fde-945d-a6e175ec8fee"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:44:16 crc kubenswrapper[4755]: I1210 15:44:16.304289 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d79c58e8-f9d1-4fde-945d-a6e175ec8fee-scripts" (OuterVolumeSpecName: "scripts") pod "d79c58e8-f9d1-4fde-945d-a6e175ec8fee" (UID: "d79c58e8-f9d1-4fde-945d-a6e175ec8fee"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:44:16 crc kubenswrapper[4755]: I1210 15:44:16.307298 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d79c58e8-f9d1-4fde-945d-a6e175ec8fee-kube-api-access-tnw8q" (OuterVolumeSpecName: "kube-api-access-tnw8q") pod "d79c58e8-f9d1-4fde-945d-a6e175ec8fee" (UID: "d79c58e8-f9d1-4fde-945d-a6e175ec8fee"). InnerVolumeSpecName "kube-api-access-tnw8q". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:44:16 crc kubenswrapper[4755]: I1210 15:44:16.308438 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d79c58e8-f9d1-4fde-945d-a6e175ec8fee-logs" (OuterVolumeSpecName: "logs") pod "d79c58e8-f9d1-4fde-945d-a6e175ec8fee" (UID: "d79c58e8-f9d1-4fde-945d-a6e175ec8fee"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:44:16 crc kubenswrapper[4755]: I1210 15:44:16.317244 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fad4194d-6d90-49f1-a017-ae4167f764c9" (OuterVolumeSpecName: "glance") pod "d79c58e8-f9d1-4fde-945d-a6e175ec8fee" (UID: "d79c58e8-f9d1-4fde-945d-a6e175ec8fee"). InnerVolumeSpecName "pvc-fad4194d-6d90-49f1-a017-ae4167f764c9". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 10 15:44:16 crc kubenswrapper[4755]: I1210 15:44:16.336752 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d79c58e8-f9d1-4fde-945d-a6e175ec8fee-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d79c58e8-f9d1-4fde-945d-a6e175ec8fee" (UID: "d79c58e8-f9d1-4fde-945d-a6e175ec8fee"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:44:16 crc kubenswrapper[4755]: I1210 15:44:16.370562 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d79c58e8-f9d1-4fde-945d-a6e175ec8fee-config-data" (OuterVolumeSpecName: "config-data") pod "d79c58e8-f9d1-4fde-945d-a6e175ec8fee" (UID: "d79c58e8-f9d1-4fde-945d-a6e175ec8fee"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:44:16 crc kubenswrapper[4755]: I1210 15:44:16.399083 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tnw8q\" (UniqueName: \"kubernetes.io/projected/d79c58e8-f9d1-4fde-945d-a6e175ec8fee-kube-api-access-tnw8q\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:16 crc kubenswrapper[4755]: I1210 15:44:16.399120 4755 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d79c58e8-f9d1-4fde-945d-a6e175ec8fee-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:16 crc kubenswrapper[4755]: I1210 15:44:16.399132 4755 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d79c58e8-f9d1-4fde-945d-a6e175ec8fee-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:16 crc kubenswrapper[4755]: I1210 15:44:16.399144 4755 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d79c58e8-f9d1-4fde-945d-a6e175ec8fee-logs\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:16 crc kubenswrapper[4755]: I1210 15:44:16.399175 4755 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-fad4194d-6d90-49f1-a017-ae4167f764c9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fad4194d-6d90-49f1-a017-ae4167f764c9\") on node \"crc\" " Dec 10 15:44:16 crc kubenswrapper[4755]: I1210 15:44:16.399188 4755 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d79c58e8-f9d1-4fde-945d-a6e175ec8fee-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:16 crc kubenswrapper[4755]: I1210 15:44:16.399200 4755 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d79c58e8-f9d1-4fde-945d-a6e175ec8fee-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:16 crc kubenswrapper[4755]: I1210 15:44:16.494639 4755 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Dec 10 15:44:16 crc kubenswrapper[4755]: I1210 15:44:16.495193 4755 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-fad4194d-6d90-49f1-a017-ae4167f764c9" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fad4194d-6d90-49f1-a017-ae4167f764c9") on node "crc" Dec 10 15:44:16 crc kubenswrapper[4755]: I1210 15:44:16.502807 4755 reconciler_common.go:293] "Volume detached for volume \"pvc-fad4194d-6d90-49f1-a017-ae4167f764c9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fad4194d-6d90-49f1-a017-ae4167f764c9\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:17 crc kubenswrapper[4755]: I1210 15:44:17.085048 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d79c58e8-f9d1-4fde-945d-a6e175ec8fee","Type":"ContainerDied","Data":"9e3d119905d748344192b7e68c06b6f2e6657fda6a97a6d7cef5fe75b564376b"} Dec 10 15:44:17 crc kubenswrapper[4755]: I1210 15:44:17.085105 4755 scope.go:117] "RemoveContainer" containerID="bbb53cf639bf3e40527ad702f68813c89df04d8f991131a2956a2546fffdc70c" Dec 10 15:44:17 crc kubenswrapper[4755]: I1210 15:44:17.085232 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 10 15:44:17 crc kubenswrapper[4755]: I1210 15:44:17.092024 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-699d79cf4-kwqcl" event={"ID":"af70a62a-214d-4ed0-8508-de8e52dd68a3","Type":"ContainerStarted","Data":"eba94b308e2e1c01708d1c6a4068802247c409b7e4f8f84ffc5448b5b0fdde23"} Dec 10 15:44:17 crc kubenswrapper[4755]: I1210 15:44:17.092802 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-699d79cf4-kwqcl" Dec 10 15:44:17 crc kubenswrapper[4755]: I1210 15:44:17.092922 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-699d79cf4-kwqcl" Dec 10 15:44:17 crc kubenswrapper[4755]: I1210 15:44:17.099513 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5f46cf586c-brqwd" event={"ID":"efbad6ea-87b6-40ec-b2a6-542e31d18e69","Type":"ContainerStarted","Data":"fbcf2cb4a74709f4544625392eb1542da3ebac0bce64fa3da2e2cd0ac2a04868"} Dec 10 15:44:17 crc kubenswrapper[4755]: I1210 15:44:17.100403 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5f46cf586c-brqwd" Dec 10 15:44:17 crc kubenswrapper[4755]: I1210 15:44:17.104488 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-55f9947ffb-mpljd" event={"ID":"add12461-ad93-468e-9e20-de46b26414a0","Type":"ContainerStarted","Data":"4e53d12051aa64a709c6c169ef753542b4f2509b1a0fb3292a614268f469a60d"} Dec 10 15:44:17 crc kubenswrapper[4755]: I1210 15:44:17.109607 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-75b8ff9576-fcxhh" event={"ID":"c3c11b73-3e0e-4c7e-ac2f-943e44b99d92","Type":"ContainerStarted","Data":"e89c3c43692ed06a40684bbd1e09e3954b28724d515735047a1bdd3d886ad081"} Dec 10 15:44:17 crc kubenswrapper[4755]: I1210 15:44:17.110289 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-75b8ff9576-fcxhh" Dec 10 15:44:17 crc kubenswrapper[4755]: I1210 15:44:17.110317 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-75b8ff9576-fcxhh" Dec 10 15:44:17 crc kubenswrapper[4755]: I1210 15:44:17.115541 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-699d79cf4-kwqcl" podStartSLOduration=5.115531881 podStartE2EDuration="5.115531881s" podCreationTimestamp="2025-12-10 15:44:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:44:17.11256094 +0000 UTC m=+1253.713444572" watchObservedRunningTime="2025-12-10 15:44:17.115531881 +0000 UTC m=+1253.716415513" Dec 10 15:44:17 crc kubenswrapper[4755]: I1210 15:44:17.125408 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-mff22" event={"ID":"16a3f983-2b37-4dfc-944d-c959d5824b69","Type":"ContainerStarted","Data":"6ee145ba3a7a44476a716e8d6e06467a204b1431c0f3807d9e82d3b83eb1b5e5"} Dec 10 15:44:17 crc kubenswrapper[4755]: I1210 15:44:17.125695 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-848cf88cfc-mff22" Dec 10 15:44:17 crc kubenswrapper[4755]: I1210 15:44:17.128527 4755 generic.go:334] "Generic (PLEG): container finished" podID="d03214c3-1a52-44fb-aa0d-45de3de8ff44" containerID="a35bfd696ac236809c06db16dd58362ac3efb1ac2374187b94c53e9671675607" exitCode=0 Dec 10 15:44:17 crc 
kubenswrapper[4755]: I1210 15:44:17.128656 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-gg57w" event={"ID":"d03214c3-1a52-44fb-aa0d-45de3de8ff44","Type":"ContainerDied","Data":"a35bfd696ac236809c06db16dd58362ac3efb1ac2374187b94c53e9671675607"} Dec 10 15:44:17 crc kubenswrapper[4755]: I1210 15:44:17.147787 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-75b8ff9576-fcxhh" podStartSLOduration=7.147768126 podStartE2EDuration="7.147768126s" podCreationTimestamp="2025-12-10 15:44:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:44:17.138085334 +0000 UTC m=+1253.738968986" watchObservedRunningTime="2025-12-10 15:44:17.147768126 +0000 UTC m=+1253.748651758" Dec 10 15:44:17 crc kubenswrapper[4755]: I1210 15:44:17.165919 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5f46cf586c-brqwd" podStartSLOduration=9.165902449 podStartE2EDuration="9.165902449s" podCreationTimestamp="2025-12-10 15:44:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:44:17.164165691 +0000 UTC m=+1253.765049323" watchObservedRunningTime="2025-12-10 15:44:17.165902449 +0000 UTC m=+1253.766786081" Dec 10 15:44:17 crc kubenswrapper[4755]: I1210 15:44:17.194738 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 10 15:44:17 crc kubenswrapper[4755]: I1210 15:44:17.205026 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 10 15:44:17 crc kubenswrapper[4755]: I1210 15:44:17.218030 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-848cf88cfc-mff22" podStartSLOduration=5.218000772 podStartE2EDuration="5.218000772s" podCreationTimestamp="2025-12-10 15:44:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:44:17.208648469 +0000 UTC m=+1253.809532101" watchObservedRunningTime="2025-12-10 15:44:17.218000772 +0000 UTC m=+1253.818884404" Dec 10 15:44:17 crc kubenswrapper[4755]: I1210 15:44:17.248882 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 10 15:44:17 crc kubenswrapper[4755]: E1210 15:44:17.249486 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d79c58e8-f9d1-4fde-945d-a6e175ec8fee" containerName="glance-httpd" Dec 10 15:44:17 crc kubenswrapper[4755]: I1210 15:44:17.249513 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="d79c58e8-f9d1-4fde-945d-a6e175ec8fee" containerName="glance-httpd" Dec 10 15:44:17 crc kubenswrapper[4755]: E1210 15:44:17.249532 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d79c58e8-f9d1-4fde-945d-a6e175ec8fee" containerName="glance-log" Dec 10 15:44:17 crc kubenswrapper[4755]: I1210 15:44:17.249543 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="d79c58e8-f9d1-4fde-945d-a6e175ec8fee" containerName="glance-log" Dec 10 15:44:17 crc kubenswrapper[4755]: I1210 15:44:17.249798 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="d79c58e8-f9d1-4fde-945d-a6e175ec8fee" containerName="glance-log" Dec 10 15:44:17 crc kubenswrapper[4755]: I1210 15:44:17.249837 4755 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="d79c58e8-f9d1-4fde-945d-a6e175ec8fee" containerName="glance-httpd" Dec 10 15:44:17 crc kubenswrapper[4755]: I1210 15:44:17.251288 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 10 15:44:17 crc kubenswrapper[4755]: I1210 15:44:17.253458 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 10 15:44:17 crc kubenswrapper[4755]: I1210 15:44:17.253565 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 10 15:44:17 crc kubenswrapper[4755]: I1210 15:44:17.274492 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 10 15:44:17 crc kubenswrapper[4755]: I1210 15:44:17.437993 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-fad4194d-6d90-49f1-a017-ae4167f764c9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fad4194d-6d90-49f1-a017-ae4167f764c9\") pod \"glance-default-external-api-0\" (UID: \"81d20e6f-155e-444c-b54f-1161b3dff224\") " pod="openstack/glance-default-external-api-0" Dec 10 15:44:17 crc kubenswrapper[4755]: I1210 15:44:17.438074 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81d20e6f-155e-444c-b54f-1161b3dff224-config-data\") pod \"glance-default-external-api-0\" (UID: \"81d20e6f-155e-444c-b54f-1161b3dff224\") " pod="openstack/glance-default-external-api-0" Dec 10 15:44:17 crc kubenswrapper[4755]: I1210 15:44:17.438149 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/81d20e6f-155e-444c-b54f-1161b3dff224-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"81d20e6f-155e-444c-b54f-1161b3dff224\") " pod="openstack/glance-default-external-api-0" Dec 10 15:44:17 crc kubenswrapper[4755]: I1210 15:44:17.438182 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81d20e6f-155e-444c-b54f-1161b3dff224-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"81d20e6f-155e-444c-b54f-1161b3dff224\") " pod="openstack/glance-default-external-api-0" Dec 10 15:44:17 crc kubenswrapper[4755]: I1210 15:44:17.438288 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6qrt\" (UniqueName: \"kubernetes.io/projected/81d20e6f-155e-444c-b54f-1161b3dff224-kube-api-access-k6qrt\") pod \"glance-default-external-api-0\" (UID: \"81d20e6f-155e-444c-b54f-1161b3dff224\") " pod="openstack/glance-default-external-api-0" Dec 10 15:44:17 crc kubenswrapper[4755]: I1210 15:44:17.438572 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/81d20e6f-155e-444c-b54f-1161b3dff224-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"81d20e6f-155e-444c-b54f-1161b3dff224\") " pod="openstack/glance-default-external-api-0" Dec 10 15:44:17 crc kubenswrapper[4755]: I1210 15:44:17.438658 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/81d20e6f-155e-444c-b54f-1161b3dff224-logs\") pod \"glance-default-external-api-0\" (UID: \"81d20e6f-155e-444c-b54f-1161b3dff224\") " pod="openstack/glance-default-external-api-0" Dec 10 15:44:17 crc kubenswrapper[4755]: I1210 15:44:17.438699 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81d20e6f-155e-444c-b54f-1161b3dff224-scripts\") pod \"glance-default-external-api-0\" (UID: \"81d20e6f-155e-444c-b54f-1161b3dff224\") " pod="openstack/glance-default-external-api-0" Dec 10 15:44:17 crc kubenswrapper[4755]: I1210 15:44:17.540969 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-fad4194d-6d90-49f1-a017-ae4167f764c9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fad4194d-6d90-49f1-a017-ae4167f764c9\") pod \"glance-default-external-api-0\" (UID: \"81d20e6f-155e-444c-b54f-1161b3dff224\") " pod="openstack/glance-default-external-api-0" Dec 10 15:44:17 crc kubenswrapper[4755]: I1210 15:44:17.541031 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81d20e6f-155e-444c-b54f-1161b3dff224-config-data\") pod \"glance-default-external-api-0\" (UID: \"81d20e6f-155e-444c-b54f-1161b3dff224\") " pod="openstack/glance-default-external-api-0" Dec 10 15:44:17 crc kubenswrapper[4755]: I1210 15:44:17.541098 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/81d20e6f-155e-444c-b54f-1161b3dff224-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"81d20e6f-155e-444c-b54f-1161b3dff224\") " pod="openstack/glance-default-external-api-0" Dec 10 15:44:17 crc kubenswrapper[4755]: I1210 15:44:17.541127 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81d20e6f-155e-444c-b54f-1161b3dff224-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"81d20e6f-155e-444c-b54f-1161b3dff224\") " pod="openstack/glance-default-external-api-0" Dec 10 15:44:17 crc kubenswrapper[4755]: I1210 15:44:17.541218 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6qrt\" (UniqueName: \"kubernetes.io/projected/81d20e6f-155e-444c-b54f-1161b3dff224-kube-api-access-k6qrt\") pod \"glance-default-external-api-0\" (UID: \"81d20e6f-155e-444c-b54f-1161b3dff224\") " pod="openstack/glance-default-external-api-0" Dec 10 15:44:17 crc kubenswrapper[4755]: I1210 15:44:17.541278 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/81d20e6f-155e-444c-b54f-1161b3dff224-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"81d20e6f-155e-444c-b54f-1161b3dff224\") " pod="openstack/glance-default-external-api-0" Dec 10 15:44:17 crc kubenswrapper[4755]: I1210 15:44:17.541318 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81d20e6f-155e-444c-b54f-1161b3dff224-logs\") pod \"glance-default-external-api-0\" (UID: \"81d20e6f-155e-444c-b54f-1161b3dff224\") " pod="openstack/glance-default-external-api-0" Dec 10 15:44:17 crc kubenswrapper[4755]: I1210 15:44:17.541355 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/81d20e6f-155e-444c-b54f-1161b3dff224-scripts\") pod \"glance-default-external-api-0\" (UID: \"81d20e6f-155e-444c-b54f-1161b3dff224\") " pod="openstack/glance-default-external-api-0" Dec 10 15:44:17 crc kubenswrapper[4755]: I1210 15:44:17.542457 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/81d20e6f-155e-444c-b54f-1161b3dff224-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"81d20e6f-155e-444c-b54f-1161b3dff224\") " pod="openstack/glance-default-external-api-0" Dec 10 15:44:17 crc kubenswrapper[4755]: I1210 15:44:17.542728 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81d20e6f-155e-444c-b54f-1161b3dff224-logs\") pod \"glance-default-external-api-0\" (UID: \"81d20e6f-155e-444c-b54f-1161b3dff224\") " pod="openstack/glance-default-external-api-0" Dec 10 15:44:17 crc kubenswrapper[4755]: I1210 15:44:17.549456 4755 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 10 15:44:17 crc kubenswrapper[4755]: I1210 15:44:17.549509 4755 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-fad4194d-6d90-49f1-a017-ae4167f764c9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fad4194d-6d90-49f1-a017-ae4167f764c9\") pod \"glance-default-external-api-0\" (UID: \"81d20e6f-155e-444c-b54f-1161b3dff224\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2c29aae4dccb9080bd5d2f8d1cce721d31204eaed02b3364c97d3b8bf6504cd5/globalmount\"" pod="openstack/glance-default-external-api-0" Dec 10 15:44:17 crc kubenswrapper[4755]: I1210 15:44:17.551284 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81d20e6f-155e-444c-b54f-1161b3dff224-config-data\") pod \"glance-default-external-api-0\" (UID: \"81d20e6f-155e-444c-b54f-1161b3dff224\") " pod="openstack/glance-default-external-api-0" Dec 10 15:44:17 crc kubenswrapper[4755]: I1210 15:44:17.554188 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81d20e6f-155e-444c-b54f-1161b3dff224-scripts\") pod \"glance-default-external-api-0\" (UID: \"81d20e6f-155e-444c-b54f-1161b3dff224\") " pod="openstack/glance-default-external-api-0" Dec 10 15:44:17 crc kubenswrapper[4755]: I1210 15:44:17.558114 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81d20e6f-155e-444c-b54f-1161b3dff224-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"81d20e6f-155e-444c-b54f-1161b3dff224\") " pod="openstack/glance-default-external-api-0" Dec 10 15:44:17 crc kubenswrapper[4755]: I1210 15:44:17.564189 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/81d20e6f-155e-444c-b54f-1161b3dff224-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"81d20e6f-155e-444c-b54f-1161b3dff224\") " pod="openstack/glance-default-external-api-0" Dec 10 15:44:17 crc kubenswrapper[4755]: I1210 15:44:17.564389 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6qrt\" (UniqueName: \"kubernetes.io/projected/81d20e6f-155e-444c-b54f-1161b3dff224-kube-api-access-k6qrt\") pod \"glance-default-external-api-0\" 
(UID: \"81d20e6f-155e-444c-b54f-1161b3dff224\") " pod="openstack/glance-default-external-api-0" Dec 10 15:44:17 crc kubenswrapper[4755]: I1210 15:44:17.612777 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-fad4194d-6d90-49f1-a017-ae4167f764c9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fad4194d-6d90-49f1-a017-ae4167f764c9\") pod \"glance-default-external-api-0\" (UID: \"81d20e6f-155e-444c-b54f-1161b3dff224\") " pod="openstack/glance-default-external-api-0" Dec 10 15:44:17 crc kubenswrapper[4755]: I1210 15:44:17.768944 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d79c58e8-f9d1-4fde-945d-a6e175ec8fee" path="/var/lib/kubelet/pods/d79c58e8-f9d1-4fde-945d-a6e175ec8fee/volumes" Dec 10 15:44:17 crc kubenswrapper[4755]: I1210 15:44:17.885226 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 10 15:44:18 crc kubenswrapper[4755]: I1210 15:44:18.674712 4755 scope.go:117] "RemoveContainer" containerID="484644dc9f04c386c26fc0cd7b6116e4354c3540756dbc20ae2747f2de24c422" Dec 10 15:44:18 crc kubenswrapper[4755]: I1210 15:44:18.923578 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-gg57w" Dec 10 15:44:18 crc kubenswrapper[4755]: I1210 15:44:18.997367 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.073210 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d03214c3-1a52-44fb-aa0d-45de3de8ff44-dns-swift-storage-0\") pod \"d03214c3-1a52-44fb-aa0d-45de3de8ff44\" (UID: \"d03214c3-1a52-44fb-aa0d-45de3de8ff44\") " Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.073277 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbc85971-1a23-47af-bae0-708919198aee-scripts\") pod \"dbc85971-1a23-47af-bae0-708919198aee\" (UID: \"dbc85971-1a23-47af-bae0-708919198aee\") " Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.073295 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dbc85971-1a23-47af-bae0-708919198aee-httpd-run\") pod \"dbc85971-1a23-47af-bae0-708919198aee\" (UID: \"dbc85971-1a23-47af-bae0-708919198aee\") " Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.073327 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2jsb\" (UniqueName: \"kubernetes.io/projected/d03214c3-1a52-44fb-aa0d-45de3de8ff44-kube-api-access-m2jsb\") pod \"d03214c3-1a52-44fb-aa0d-45de3de8ff44\" (UID: \"d03214c3-1a52-44fb-aa0d-45de3de8ff44\") " Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.073380 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dbc85971-1a23-47af-bae0-708919198aee-logs\") pod \"dbc85971-1a23-47af-bae0-708919198aee\" (UID: \"dbc85971-1a23-47af-bae0-708919198aee\") " Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.073445 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4rx8\" (UniqueName: \"kubernetes.io/projected/dbc85971-1a23-47af-bae0-708919198aee-kube-api-access-s4rx8\") pod 
\"dbc85971-1a23-47af-bae0-708919198aee\" (UID: \"dbc85971-1a23-47af-bae0-708919198aee\") " Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.073507 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d03214c3-1a52-44fb-aa0d-45de3de8ff44-dns-svc\") pod \"d03214c3-1a52-44fb-aa0d-45de3de8ff44\" (UID: \"d03214c3-1a52-44fb-aa0d-45de3de8ff44\") " Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.073541 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d03214c3-1a52-44fb-aa0d-45de3de8ff44-config\") pod \"d03214c3-1a52-44fb-aa0d-45de3de8ff44\" (UID: \"d03214c3-1a52-44fb-aa0d-45de3de8ff44\") " Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.073639 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d03214c3-1a52-44fb-aa0d-45de3de8ff44-ovsdbserver-sb\") pod \"d03214c3-1a52-44fb-aa0d-45de3de8ff44\" (UID: \"d03214c3-1a52-44fb-aa0d-45de3de8ff44\") " Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.073754 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d03214c3-1a52-44fb-aa0d-45de3de8ff44-ovsdbserver-nb\") pod \"d03214c3-1a52-44fb-aa0d-45de3de8ff44\" (UID: \"d03214c3-1a52-44fb-aa0d-45de3de8ff44\") " Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.073774 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbc85971-1a23-47af-bae0-708919198aee-config-data\") pod \"dbc85971-1a23-47af-bae0-708919198aee\" (UID: \"dbc85971-1a23-47af-bae0-708919198aee\") " Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.073803 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbc85971-1a23-47af-bae0-708919198aee-combined-ca-bundle\") pod \"dbc85971-1a23-47af-bae0-708919198aee\" (UID: \"dbc85971-1a23-47af-bae0-708919198aee\") " Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.073923 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3c1ecacc-2c1b-4e07-9def-36c303767d2a\") pod \"dbc85971-1a23-47af-bae0-708919198aee\" (UID: \"dbc85971-1a23-47af-bae0-708919198aee\") " Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.075271 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dbc85971-1a23-47af-bae0-708919198aee-logs" (OuterVolumeSpecName: "logs") pod "dbc85971-1a23-47af-bae0-708919198aee" (UID: "dbc85971-1a23-47af-bae0-708919198aee"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.079237 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dbc85971-1a23-47af-bae0-708919198aee-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "dbc85971-1a23-47af-bae0-708919198aee" (UID: "dbc85971-1a23-47af-bae0-708919198aee"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.089749 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d03214c3-1a52-44fb-aa0d-45de3de8ff44-kube-api-access-m2jsb" (OuterVolumeSpecName: "kube-api-access-m2jsb") pod "d03214c3-1a52-44fb-aa0d-45de3de8ff44" (UID: "d03214c3-1a52-44fb-aa0d-45de3de8ff44"). InnerVolumeSpecName "kube-api-access-m2jsb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.093517 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbc85971-1a23-47af-bae0-708919198aee-kube-api-access-s4rx8" (OuterVolumeSpecName: "kube-api-access-s4rx8") pod "dbc85971-1a23-47af-bae0-708919198aee" (UID: "dbc85971-1a23-47af-bae0-708919198aee"). InnerVolumeSpecName "kube-api-access-s4rx8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.101055 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbc85971-1a23-47af-bae0-708919198aee-scripts" (OuterVolumeSpecName: "scripts") pod "dbc85971-1a23-47af-bae0-708919198aee" (UID: "dbc85971-1a23-47af-bae0-708919198aee"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.153910 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"dbc85971-1a23-47af-bae0-708919198aee","Type":"ContainerDied","Data":"37de2f26e55b5b74c2d1119d9b29aac8c99afc5a6cea8be7bcc1b2f5ac5a5c36"} Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.154007 4755 scope.go:117] "RemoveContainer" containerID="31264b3af3717e9e0d6dfd275c96394d4cedfe9e708d617c9826b6c28e75dd5e" Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.154119 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.163581 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3c1ecacc-2c1b-4e07-9def-36c303767d2a" (OuterVolumeSpecName: "glance") pod "dbc85971-1a23-47af-bae0-708919198aee" (UID: "dbc85971-1a23-47af-bae0-708919198aee"). InnerVolumeSpecName "pvc-3c1ecacc-2c1b-4e07-9def-36c303767d2a". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.168093 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-gg57w" event={"ID":"d03214c3-1a52-44fb-aa0d-45de3de8ff44","Type":"ContainerDied","Data":"eeefa94bf87fe574554df62b1927047a98d82b0d8a2500feceebeca95396bc6f"} Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.168200 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-gg57w" Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.177492 4755 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-3c1ecacc-2c1b-4e07-9def-36c303767d2a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3c1ecacc-2c1b-4e07-9def-36c303767d2a\") on node \"crc\" " Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.177520 4755 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbc85971-1a23-47af-bae0-708919198aee-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.177530 4755 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dbc85971-1a23-47af-bae0-708919198aee-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.177539 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2jsb\" (UniqueName: \"kubernetes.io/projected/d03214c3-1a52-44fb-aa0d-45de3de8ff44-kube-api-access-m2jsb\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.177563 4755 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dbc85971-1a23-47af-bae0-708919198aee-logs\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.177572 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4rx8\" (UniqueName: \"kubernetes.io/projected/dbc85971-1a23-47af-bae0-708919198aee-kube-api-access-s4rx8\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.191973 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-55f9947ffb-mpljd" event={"ID":"add12461-ad93-468e-9e20-de46b26414a0","Type":"ContainerStarted","Data":"bf03bef16d16d83fab75e9c621e6cec844d2c220dce79d40151a15bef7d4dda4"} Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.192538 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-55f9947ffb-mpljd" Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.192599 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-55f9947ffb-mpljd" Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.198327 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbc85971-1a23-47af-bae0-708919198aee-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dbc85971-1a23-47af-bae0-708919198aee" (UID: "dbc85971-1a23-47af-bae0-708919198aee"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.229860 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d03214c3-1a52-44fb-aa0d-45de3de8ff44-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d03214c3-1a52-44fb-aa0d-45de3de8ff44" (UID: "d03214c3-1a52-44fb-aa0d-45de3de8ff44"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.235213 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-55f9947ffb-mpljd" podStartSLOduration=5.235167046 podStartE2EDuration="5.235167046s" podCreationTimestamp="2025-12-10 15:44:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:44:19.21132803 +0000 UTC m=+1255.812211662" watchObservedRunningTime="2025-12-10 15:44:19.235167046 +0000 UTC m=+1255.836050678" Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.246667 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d03214c3-1a52-44fb-aa0d-45de3de8ff44-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d03214c3-1a52-44fb-aa0d-45de3de8ff44" (UID: "d03214c3-1a52-44fb-aa0d-45de3de8ff44"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.255971 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.258885 4755 scope.go:117] "RemoveContainer" containerID="eee6d33760bd3448c482537ce51f449b5f55274b2eb0c62de4ca45a51b43ab7c" Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.261850 4755 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.262021 4755 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-3c1ecacc-2c1b-4e07-9def-36c303767d2a" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3c1ecacc-2c1b-4e07-9def-36c303767d2a") on node "crc" Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.267430 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbc85971-1a23-47af-bae0-708919198aee-config-data" (OuterVolumeSpecName: "config-data") pod "dbc85971-1a23-47af-bae0-708919198aee" (UID: "dbc85971-1a23-47af-bae0-708919198aee"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.270399 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d03214c3-1a52-44fb-aa0d-45de3de8ff44-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d03214c3-1a52-44fb-aa0d-45de3de8ff44" (UID: "d03214c3-1a52-44fb-aa0d-45de3de8ff44"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.281935 4755 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d03214c3-1a52-44fb-aa0d-45de3de8ff44-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.282020 4755 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbc85971-1a23-47af-bae0-708919198aee-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.282034 4755 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbc85971-1a23-47af-bae0-708919198aee-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.282045 4755 reconciler_common.go:293] "Volume detached for volume \"pvc-3c1ecacc-2c1b-4e07-9def-36c303767d2a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3c1ecacc-2c1b-4e07-9def-36c303767d2a\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.282057 4755 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d03214c3-1a52-44fb-aa0d-45de3de8ff44-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.282069 4755 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d03214c3-1a52-44fb-aa0d-45de3de8ff44-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.295192 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d03214c3-1a52-44fb-aa0d-45de3de8ff44-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d03214c3-1a52-44fb-aa0d-45de3de8ff44" (UID: "d03214c3-1a52-44fb-aa0d-45de3de8ff44"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.305176 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d03214c3-1a52-44fb-aa0d-45de3de8ff44-config" (OuterVolumeSpecName: "config") pod "d03214c3-1a52-44fb-aa0d-45de3de8ff44" (UID: "d03214c3-1a52-44fb-aa0d-45de3de8ff44"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.308254 4755 scope.go:117] "RemoveContainer" containerID="a35bfd696ac236809c06db16dd58362ac3efb1ac2374187b94c53e9671675607" Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.383557 4755 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d03214c3-1a52-44fb-aa0d-45de3de8ff44-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.383593 4755 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d03214c3-1a52-44fb-aa0d-45de3de8ff44-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.406768 4755 scope.go:117] "RemoveContainer" containerID="82828f2c1d8e3dfb9e1893b99609480c4b19ff32d368f166bc9ad6821a41c584" Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.499401 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.550641 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.600383 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-gg57w"] Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.629347 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 10 15:44:19 crc kubenswrapper[4755]: E1210 15:44:19.629816 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d03214c3-1a52-44fb-aa0d-45de3de8ff44" containerName="dnsmasq-dns" Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.629831 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="d03214c3-1a52-44fb-aa0d-45de3de8ff44" containerName="dnsmasq-dns" Dec 10 15:44:19 crc kubenswrapper[4755]: E1210 15:44:19.629853 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbc85971-1a23-47af-bae0-708919198aee" containerName="glance-httpd" Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.629859 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbc85971-1a23-47af-bae0-708919198aee" containerName="glance-httpd" Dec 10 15:44:19 crc kubenswrapper[4755]: E1210 15:44:19.629871 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d03214c3-1a52-44fb-aa0d-45de3de8ff44" containerName="init" Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.629877 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="d03214c3-1a52-44fb-aa0d-45de3de8ff44" containerName="init" Dec 10 15:44:19 crc kubenswrapper[4755]: E1210 15:44:19.629893 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbc85971-1a23-47af-bae0-708919198aee" containerName="glance-log" Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.629900 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbc85971-1a23-47af-bae0-708919198aee" containerName="glance-log" Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.630078 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbc85971-1a23-47af-bae0-708919198aee" containerName="glance-httpd" Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.630093 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="d03214c3-1a52-44fb-aa0d-45de3de8ff44" containerName="dnsmasq-dns" Dec 10 15:44:19 crc 
kubenswrapper[4755]: I1210 15:44:19.630114 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbc85971-1a23-47af-bae0-708919198aee" containerName="glance-log" Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.631276 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.634948 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.635113 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.644084 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-gg57w"] Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.719441 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.805673 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d03214c3-1a52-44fb-aa0d-45de3de8ff44" path="/var/lib/kubelet/pods/d03214c3-1a52-44fb-aa0d-45de3de8ff44/volumes" Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.805855 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3c1ecacc-2c1b-4e07-9def-36c303767d2a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3c1ecacc-2c1b-4e07-9def-36c303767d2a\") pod \"glance-default-internal-api-0\" (UID: \"f8dcc743-2980-4fbb-94bc-b4a8afb79bad\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.805889 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8dcc743-2980-4fbb-94bc-b4a8afb79bad-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f8dcc743-2980-4fbb-94bc-b4a8afb79bad\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.805971 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8dcc743-2980-4fbb-94bc-b4a8afb79bad-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f8dcc743-2980-4fbb-94bc-b4a8afb79bad\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.806052 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8dcc743-2980-4fbb-94bc-b4a8afb79bad-logs\") pod \"glance-default-internal-api-0\" (UID: \"f8dcc743-2980-4fbb-94bc-b4a8afb79bad\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.806073 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8dcc743-2980-4fbb-94bc-b4a8afb79bad-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f8dcc743-2980-4fbb-94bc-b4a8afb79bad\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.806109 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/f8dcc743-2980-4fbb-94bc-b4a8afb79bad-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f8dcc743-2980-4fbb-94bc-b4a8afb79bad\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.806128 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8dcc743-2980-4fbb-94bc-b4a8afb79bad-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f8dcc743-2980-4fbb-94bc-b4a8afb79bad\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.806165 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjqn4\" (UniqueName: \"kubernetes.io/projected/f8dcc743-2980-4fbb-94bc-b4a8afb79bad-kube-api-access-jjqn4\") pod \"glance-default-internal-api-0\" (UID: \"f8dcc743-2980-4fbb-94bc-b4a8afb79bad\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.806952 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbc85971-1a23-47af-bae0-708919198aee" path="/var/lib/kubelet/pods/dbc85971-1a23-47af-bae0-708919198aee/volumes" Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.907808 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8dcc743-2980-4fbb-94bc-b4a8afb79bad-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f8dcc743-2980-4fbb-94bc-b4a8afb79bad\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.907906 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8dcc743-2980-4fbb-94bc-b4a8afb79bad-logs\") pod \"glance-default-internal-api-0\" (UID: \"f8dcc743-2980-4fbb-94bc-b4a8afb79bad\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.907931 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8dcc743-2980-4fbb-94bc-b4a8afb79bad-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f8dcc743-2980-4fbb-94bc-b4a8afb79bad\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.907968 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f8dcc743-2980-4fbb-94bc-b4a8afb79bad-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f8dcc743-2980-4fbb-94bc-b4a8afb79bad\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.907985 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8dcc743-2980-4fbb-94bc-b4a8afb79bad-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f8dcc743-2980-4fbb-94bc-b4a8afb79bad\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.908025 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjqn4\" (UniqueName: \"kubernetes.io/projected/f8dcc743-2980-4fbb-94bc-b4a8afb79bad-kube-api-access-jjqn4\") pod \"glance-default-internal-api-0\" (UID: 
\"f8dcc743-2980-4fbb-94bc-b4a8afb79bad\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.908043 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3c1ecacc-2c1b-4e07-9def-36c303767d2a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3c1ecacc-2c1b-4e07-9def-36c303767d2a\") pod \"glance-default-internal-api-0\" (UID: \"f8dcc743-2980-4fbb-94bc-b4a8afb79bad\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.908061 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8dcc743-2980-4fbb-94bc-b4a8afb79bad-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f8dcc743-2980-4fbb-94bc-b4a8afb79bad\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.908792 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8dcc743-2980-4fbb-94bc-b4a8afb79bad-logs\") pod \"glance-default-internal-api-0\" (UID: \"f8dcc743-2980-4fbb-94bc-b4a8afb79bad\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.908903 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f8dcc743-2980-4fbb-94bc-b4a8afb79bad-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f8dcc743-2980-4fbb-94bc-b4a8afb79bad\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.910874 4755 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.910996 4755 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3c1ecacc-2c1b-4e07-9def-36c303767d2a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3c1ecacc-2c1b-4e07-9def-36c303767d2a\") pod \"glance-default-internal-api-0\" (UID: \"f8dcc743-2980-4fbb-94bc-b4a8afb79bad\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e7209b79faa29a25e04bc03f8c7f38aa826c0ab7d3d63e0c6698575f30077871/globalmount\"" pod="openstack/glance-default-internal-api-0" Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.913643 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8dcc743-2980-4fbb-94bc-b4a8afb79bad-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f8dcc743-2980-4fbb-94bc-b4a8afb79bad\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.920715 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8dcc743-2980-4fbb-94bc-b4a8afb79bad-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f8dcc743-2980-4fbb-94bc-b4a8afb79bad\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.920854 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8dcc743-2980-4fbb-94bc-b4a8afb79bad-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f8dcc743-2980-4fbb-94bc-b4a8afb79bad\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.921571 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8dcc743-2980-4fbb-94bc-b4a8afb79bad-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f8dcc743-2980-4fbb-94bc-b4a8afb79bad\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.929063 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjqn4\" (UniqueName: \"kubernetes.io/projected/f8dcc743-2980-4fbb-94bc-b4a8afb79bad-kube-api-access-jjqn4\") pod \"glance-default-internal-api-0\" (UID: \"f8dcc743-2980-4fbb-94bc-b4a8afb79bad\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:44:19 crc kubenswrapper[4755]: I1210 15:44:19.957374 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3c1ecacc-2c1b-4e07-9def-36c303767d2a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3c1ecacc-2c1b-4e07-9def-36c303767d2a\") pod \"glance-default-internal-api-0\" (UID: \"f8dcc743-2980-4fbb-94bc-b4a8afb79bad\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:44:20 crc kubenswrapper[4755]: I1210 15:44:20.003648 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 10 15:44:20 crc kubenswrapper[4755]: I1210 15:44:20.197973 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"81d20e6f-155e-444c-b54f-1161b3dff224","Type":"ContainerStarted","Data":"0867f02bde645bdf6f0dcc0ee9f857046f49780a95a9c5af19de9e9ded37c273"} Dec 10 15:44:20 crc kubenswrapper[4755]: I1210 15:44:20.201792 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-jr6l4" event={"ID":"cbc4e627-8238-49b1-a0ac-48d07a29c23a","Type":"ContainerStarted","Data":"ccfba5e44c738428b1efb1985fd1190a7901f21f3afd822dd48a953a331b8305"} Dec 10 15:44:20 crc kubenswrapper[4755]: I1210 15:44:20.219703 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-db-sync-jr6l4" podStartSLOduration=3.94883519 podStartE2EDuration="1m4.219689541s" podCreationTimestamp="2025-12-10 15:43:16 +0000 UTC" firstStartedPulling="2025-12-10 15:43:18.567589567 +0000 UTC m=+1195.168473199" lastFinishedPulling="2025-12-10 15:44:18.838443918 +0000 UTC m=+1255.439327550" observedRunningTime="2025-12-10 15:44:20.215340943 +0000 UTC m=+1256.816224575" watchObservedRunningTime="2025-12-10 15:44:20.219689541 +0000 UTC m=+1256.820573173" Dec 10 15:44:20 crc kubenswrapper[4755]: I1210 15:44:20.552287 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 10 15:44:20 crc kubenswrapper[4755]: W1210 15:44:20.561448 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf8dcc743_2980_4fbb_94bc_b4a8afb79bad.slice/crio-ffdfd9c4fbaeb9121ac40f2906d6f9c3dd2fdc8242778cad0c711608d9beed17 WatchSource:0}: Error finding container ffdfd9c4fbaeb9121ac40f2906d6f9c3dd2fdc8242778cad0c711608d9beed17: Status 404 returned error can't find the container with id ffdfd9c4fbaeb9121ac40f2906d6f9c3dd2fdc8242778cad0c711608d9beed17 Dec 10 15:44:21 crc kubenswrapper[4755]: I1210 15:44:21.224665 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"81d20e6f-155e-444c-b54f-1161b3dff224","Type":"ContainerStarted","Data":"ed9dd5c80054f8faca85036b1dc2e174ad52044a09ee2641742bbc78e5aac6c0"} Dec 10 15:44:21 crc kubenswrapper[4755]: I1210 15:44:21.229166 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-cwjsz" event={"ID":"9b9ab1e5-2daa-4057-84e3-50bef68bbaca","Type":"ContainerStarted","Data":"d52536c354af758c503c73ece6e28c53b5786a281589f9ca634611750884ffef"} Dec 10 15:44:21 crc kubenswrapper[4755]: I1210 15:44:21.231158 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f8dcc743-2980-4fbb-94bc-b4a8afb79bad","Type":"ContainerStarted","Data":"ffdfd9c4fbaeb9121ac40f2906d6f9c3dd2fdc8242778cad0c711608d9beed17"} Dec 10 15:44:21 crc kubenswrapper[4755]: I1210 15:44:21.257618 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-cwjsz" podStartSLOduration=5.127979417 podStartE2EDuration="1m5.257598214s" podCreationTimestamp="2025-12-10 15:43:16 +0000 UTC" firstStartedPulling="2025-12-10 15:43:18.550531694 +0000 UTC m=+1195.151415326" lastFinishedPulling="2025-12-10 15:44:18.680150481 +0000 UTC m=+1255.281034123" observedRunningTime="2025-12-10 15:44:21.249761321 +0000 UTC m=+1257.850644973" watchObservedRunningTime="2025-12-10 
15:44:21.257598214 +0000 UTC m=+1257.858481846" Dec 10 15:44:22 crc kubenswrapper[4755]: I1210 15:44:22.241365 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f8dcc743-2980-4fbb-94bc-b4a8afb79bad","Type":"ContainerStarted","Data":"334eb45f075fb7aee2f28fb137282499b0eecefc1171619fb75ae904f14a67d3"} Dec 10 15:44:22 crc kubenswrapper[4755]: I1210 15:44:22.780574 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-848cf88cfc-mff22" Dec 10 15:44:22 crc kubenswrapper[4755]: I1210 15:44:22.841107 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-9gscg"] Dec 10 15:44:22 crc kubenswrapper[4755]: I1210 15:44:22.841479 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-cf78879c9-9gscg" podUID="b52de9e0-e981-4d39-addb-6c732611ea50" containerName="dnsmasq-dns" containerID="cri-o://8e8277ff2cd2c02b7e758c2788d5e790e80a676a4daf40aff4669c6c26ffa148" gracePeriod=10 Dec 10 15:44:24 crc kubenswrapper[4755]: I1210 15:44:24.683128 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-cf78879c9-9gscg" podUID="b52de9e0-e981-4d39-addb-6c732611ea50" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.161:5353: connect: connection refused" Dec 10 15:44:25 crc kubenswrapper[4755]: I1210 15:44:25.284128 4755 generic.go:334] "Generic (PLEG): container finished" podID="b52de9e0-e981-4d39-addb-6c732611ea50" containerID="8e8277ff2cd2c02b7e758c2788d5e790e80a676a4daf40aff4669c6c26ffa148" exitCode=0 Dec 10 15:44:25 crc kubenswrapper[4755]: I1210 15:44:25.284149 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-9gscg" event={"ID":"b52de9e0-e981-4d39-addb-6c732611ea50","Type":"ContainerDied","Data":"8e8277ff2cd2c02b7e758c2788d5e790e80a676a4daf40aff4669c6c26ffa148"} Dec 10 15:44:26 crc kubenswrapper[4755]: I1210 15:44:26.237633 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-55f9947ffb-mpljd" podUID="add12461-ad93-468e-9e20-de46b26414a0" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.175:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 10 15:44:26 crc kubenswrapper[4755]: I1210 15:44:26.372023 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-55f9947ffb-mpljd" podUID="add12461-ad93-468e-9e20-de46b26414a0" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.175:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 10 15:44:26 crc kubenswrapper[4755]: I1210 15:44:26.465524 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-699d79cf4-kwqcl" podUID="af70a62a-214d-4ed0-8508-de8e52dd68a3" containerName="barbican-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 10 15:44:26 crc kubenswrapper[4755]: I1210 15:44:26.472559 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-699d79cf4-kwqcl" podUID="af70a62a-214d-4ed0-8508-de8e52dd68a3" containerName="barbican-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 10 15:44:26 crc kubenswrapper[4755]: I1210 15:44:26.491010 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-699d79cf4-kwqcl" 
podUID="af70a62a-214d-4ed0-8508-de8e52dd68a3" containerName="barbican-api-log" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 10 15:44:27 crc kubenswrapper[4755]: I1210 15:44:27.655367 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-55f9947ffb-mpljd" Dec 10 15:44:27 crc kubenswrapper[4755]: I1210 15:44:27.822900 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-55f9947ffb-mpljd" Dec 10 15:44:27 crc kubenswrapper[4755]: I1210 15:44:27.901041 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-699d79cf4-kwqcl"] Dec 10 15:44:27 crc kubenswrapper[4755]: I1210 15:44:27.901476 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-699d79cf4-kwqcl" podUID="af70a62a-214d-4ed0-8508-de8e52dd68a3" containerName="barbican-api" containerID="cri-o://eba94b308e2e1c01708d1c6a4068802247c409b7e4f8f84ffc5448b5b0fdde23" gracePeriod=30 Dec 10 15:44:27 crc kubenswrapper[4755]: I1210 15:44:27.902159 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-699d79cf4-kwqcl" podUID="af70a62a-214d-4ed0-8508-de8e52dd68a3" containerName="barbican-api-log" containerID="cri-o://c7b3bb22d66f88d472d636b1ffd7b2a9e19fc9627f1e47c7c907fc690e35514c" gracePeriod=30 Dec 10 15:44:27 crc kubenswrapper[4755]: I1210 15:44:27.918321 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-699d79cf4-kwqcl" podUID="af70a62a-214d-4ed0-8508-de8e52dd68a3" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.174:9311/healthcheck\": EOF" Dec 10 15:44:27 crc kubenswrapper[4755]: I1210 15:44:27.939401 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-699d79cf4-kwqcl" podUID="af70a62a-214d-4ed0-8508-de8e52dd68a3" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.174:9311/healthcheck\": EOF" Dec 10 15:44:27 crc kubenswrapper[4755]: I1210 15:44:27.939413 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-699d79cf4-kwqcl" podUID="af70a62a-214d-4ed0-8508-de8e52dd68a3" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.174:9311/healthcheck\": EOF" Dec 10 15:44:28 crc kubenswrapper[4755]: I1210 15:44:28.323251 4755 generic.go:334] "Generic (PLEG): container finished" podID="af70a62a-214d-4ed0-8508-de8e52dd68a3" containerID="c7b3bb22d66f88d472d636b1ffd7b2a9e19fc9627f1e47c7c907fc690e35514c" exitCode=143 Dec 10 15:44:28 crc kubenswrapper[4755]: I1210 15:44:28.323339 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-699d79cf4-kwqcl" event={"ID":"af70a62a-214d-4ed0-8508-de8e52dd68a3","Type":"ContainerDied","Data":"c7b3bb22d66f88d472d636b1ffd7b2a9e19fc9627f1e47c7c907fc690e35514c"} Dec 10 15:44:28 crc kubenswrapper[4755]: I1210 15:44:28.673887 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-9gscg" Dec 10 15:44:28 crc kubenswrapper[4755]: I1210 15:44:28.842740 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b52de9e0-e981-4d39-addb-6c732611ea50-dns-svc\") pod \"b52de9e0-e981-4d39-addb-6c732611ea50\" (UID: \"b52de9e0-e981-4d39-addb-6c732611ea50\") " Dec 10 15:44:28 crc kubenswrapper[4755]: I1210 15:44:28.842828 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b52de9e0-e981-4d39-addb-6c732611ea50-dns-swift-storage-0\") pod \"b52de9e0-e981-4d39-addb-6c732611ea50\" (UID: \"b52de9e0-e981-4d39-addb-6c732611ea50\") " Dec 10 15:44:28 crc kubenswrapper[4755]: I1210 15:44:28.842939 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b52de9e0-e981-4d39-addb-6c732611ea50-ovsdbserver-sb\") pod \"b52de9e0-e981-4d39-addb-6c732611ea50\" (UID: \"b52de9e0-e981-4d39-addb-6c732611ea50\") " Dec 10 15:44:28 crc kubenswrapper[4755]: I1210 15:44:28.843026 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b52de9e0-e981-4d39-addb-6c732611ea50-config\") pod \"b52de9e0-e981-4d39-addb-6c732611ea50\" (UID: \"b52de9e0-e981-4d39-addb-6c732611ea50\") " Dec 10 15:44:28 crc kubenswrapper[4755]: I1210 15:44:28.843135 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wf4ct\" (UniqueName: \"kubernetes.io/projected/b52de9e0-e981-4d39-addb-6c732611ea50-kube-api-access-wf4ct\") pod \"b52de9e0-e981-4d39-addb-6c732611ea50\" (UID: \"b52de9e0-e981-4d39-addb-6c732611ea50\") " Dec 10 15:44:28 crc kubenswrapper[4755]: I1210 15:44:28.843156 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b52de9e0-e981-4d39-addb-6c732611ea50-ovsdbserver-nb\") pod \"b52de9e0-e981-4d39-addb-6c732611ea50\" (UID: \"b52de9e0-e981-4d39-addb-6c732611ea50\") " Dec 10 15:44:28 crc kubenswrapper[4755]: I1210 15:44:28.854838 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b52de9e0-e981-4d39-addb-6c732611ea50-kube-api-access-wf4ct" (OuterVolumeSpecName: "kube-api-access-wf4ct") pod "b52de9e0-e981-4d39-addb-6c732611ea50" (UID: "b52de9e0-e981-4d39-addb-6c732611ea50"). InnerVolumeSpecName "kube-api-access-wf4ct". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:44:28 crc kubenswrapper[4755]: I1210 15:44:28.921349 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b52de9e0-e981-4d39-addb-6c732611ea50-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b52de9e0-e981-4d39-addb-6c732611ea50" (UID: "b52de9e0-e981-4d39-addb-6c732611ea50"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:44:28 crc kubenswrapper[4755]: I1210 15:44:28.921489 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b52de9e0-e981-4d39-addb-6c732611ea50-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b52de9e0-e981-4d39-addb-6c732611ea50" (UID: "b52de9e0-e981-4d39-addb-6c732611ea50"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:44:28 crc kubenswrapper[4755]: I1210 15:44:28.936774 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b52de9e0-e981-4d39-addb-6c732611ea50-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b52de9e0-e981-4d39-addb-6c732611ea50" (UID: "b52de9e0-e981-4d39-addb-6c732611ea50"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:44:28 crc kubenswrapper[4755]: I1210 15:44:28.947521 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wf4ct\" (UniqueName: \"kubernetes.io/projected/b52de9e0-e981-4d39-addb-6c732611ea50-kube-api-access-wf4ct\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:28 crc kubenswrapper[4755]: I1210 15:44:28.947565 4755 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b52de9e0-e981-4d39-addb-6c732611ea50-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:28 crc kubenswrapper[4755]: I1210 15:44:28.947575 4755 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b52de9e0-e981-4d39-addb-6c732611ea50-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:28 crc kubenswrapper[4755]: I1210 15:44:28.947583 4755 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b52de9e0-e981-4d39-addb-6c732611ea50-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:28 crc kubenswrapper[4755]: I1210 15:44:28.951611 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b52de9e0-e981-4d39-addb-6c732611ea50-config" (OuterVolumeSpecName: "config") pod "b52de9e0-e981-4d39-addb-6c732611ea50" (UID: "b52de9e0-e981-4d39-addb-6c732611ea50"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:44:28 crc kubenswrapper[4755]: I1210 15:44:28.999936 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b52de9e0-e981-4d39-addb-6c732611ea50-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b52de9e0-e981-4d39-addb-6c732611ea50" (UID: "b52de9e0-e981-4d39-addb-6c732611ea50"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:44:29 crc kubenswrapper[4755]: I1210 15:44:29.049153 4755 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b52de9e0-e981-4d39-addb-6c732611ea50-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:29 crc kubenswrapper[4755]: I1210 15:44:29.049186 4755 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b52de9e0-e981-4d39-addb-6c732611ea50-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:29 crc kubenswrapper[4755]: I1210 15:44:29.338547 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-9gscg" event={"ID":"b52de9e0-e981-4d39-addb-6c732611ea50","Type":"ContainerDied","Data":"ae676adee217d0d01dac6ff12cad01968ca51415886da1b545f76b2674ce8bd6"} Dec 10 15:44:29 crc kubenswrapper[4755]: I1210 15:44:29.338956 4755 scope.go:117] "RemoveContainer" containerID="8e8277ff2cd2c02b7e758c2788d5e790e80a676a4daf40aff4669c6c26ffa148" Dec 10 15:44:29 crc kubenswrapper[4755]: I1210 15:44:29.339171 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-9gscg" Dec 10 15:44:29 crc kubenswrapper[4755]: I1210 15:44:29.382379 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-9gscg"] Dec 10 15:44:29 crc kubenswrapper[4755]: I1210 15:44:29.390718 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-9gscg"] Dec 10 15:44:29 crc kubenswrapper[4755]: I1210 15:44:29.771576 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b52de9e0-e981-4d39-addb-6c732611ea50" path="/var/lib/kubelet/pods/b52de9e0-e981-4d39-addb-6c732611ea50/volumes" Dec 10 15:44:31 crc kubenswrapper[4755]: I1210 15:44:31.418809 4755 scope.go:117] "RemoveContainer" containerID="6eb54c8013ad59f9914cf37119f3febd41540b21f0dd34434e38ec9508aa118b" Dec 10 15:44:31 crc kubenswrapper[4755]: E1210 15:44:31.846291 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="0ca4e52f-2a99-42bb-abb3-20a9ee8594b5" Dec 10 15:44:32 crc kubenswrapper[4755]: I1210 15:44:32.366537 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-979964fb-8vrlp" event={"ID":"a5eb4f86-4f65-41b4-8694-279c44c08491","Type":"ContainerStarted","Data":"832152b5de6fdf029fb58a16e53b70d2bb0e615c4204223dc5e395e871c49ab3"} Dec 10 15:44:32 crc kubenswrapper[4755]: I1210 15:44:32.366830 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-979964fb-8vrlp" event={"ID":"a5eb4f86-4f65-41b4-8694-279c44c08491","Type":"ContainerStarted","Data":"c6d0b547143d6e87c86a5cd49788e50d6f8447a5c2e95a5abbab94074aa1d6da"} Dec 10 15:44:32 crc kubenswrapper[4755]: I1210 15:44:32.368731 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f8dcc743-2980-4fbb-94bc-b4a8afb79bad","Type":"ContainerStarted","Data":"588f396a3a268f85f05e46a46e1d5d5d38096cb5d896181631d658d4943bc1d1"} Dec 10 15:44:32 crc kubenswrapper[4755]: I1210 15:44:32.370630 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"81d20e6f-155e-444c-b54f-1161b3dff224","Type":"ContainerStarted","Data":"e5368def68397d95bf652e17404d4b470cf1f532242f4593d2add2b49a02405a"} Dec 10 15:44:32 crc kubenswrapper[4755]: I1210 15:44:32.373614 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5d855b58d9-fzhd2" event={"ID":"83de1ea1-f46a-43ab-9a89-f5980d7bed78","Type":"ContainerStarted","Data":"b56728305a9e8699e325d33ed255e93fcf584fce0821cbda0805588a56e0f3fd"} Dec 10 15:44:32 crc kubenswrapper[4755]: I1210 15:44:32.373659 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5d855b58d9-fzhd2" event={"ID":"83de1ea1-f46a-43ab-9a89-f5980d7bed78","Type":"ContainerStarted","Data":"368b3d6ded1e7fb4edc589af30897d80c2cb87e16f3f42d51f32c063bfe0e826"} Dec 10 15:44:32 crc kubenswrapper[4755]: I1210 15:44:32.375929 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ca4e52f-2a99-42bb-abb3-20a9ee8594b5","Type":"ContainerStarted","Data":"d6adbe18dd35f5d02a30d41ed6b29a710b2fed84a0d0341a3649b250b59312c9"} Dec 10 15:44:32 crc kubenswrapper[4755]: I1210 15:44:32.376036 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0ca4e52f-2a99-42bb-abb3-20a9ee8594b5" containerName="ceilometer-notification-agent" containerID="cri-o://2eb738bec907df392d198953ddb82b679095fed813616a75999ee2ee4466451c" gracePeriod=30 Dec 10 15:44:32 crc kubenswrapper[4755]: I1210 15:44:32.376055 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 10 15:44:32 crc kubenswrapper[4755]: I1210 15:44:32.376098 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0ca4e52f-2a99-42bb-abb3-20a9ee8594b5" containerName="proxy-httpd" containerID="cri-o://d6adbe18dd35f5d02a30d41ed6b29a710b2fed84a0d0341a3649b250b59312c9" gracePeriod=30 Dec 10 15:44:32 crc kubenswrapper[4755]: I1210 15:44:32.376106 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0ca4e52f-2a99-42bb-abb3-20a9ee8594b5" containerName="sg-core" containerID="cri-o://aabe60e3adf7bc0269c08a71e4fe1d986b4d7a53ed2892180259e68165aa5cb7" gracePeriod=30 Dec 10 15:44:32 crc kubenswrapper[4755]: I1210 15:44:32.385950 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-979964fb-8vrlp" podStartSLOduration=2.345183163 podStartE2EDuration="20.385930754s" podCreationTimestamp="2025-12-10 15:44:12 +0000 UTC" firstStartedPulling="2025-12-10 15:44:13.412336031 +0000 UTC m=+1250.013219663" lastFinishedPulling="2025-12-10 15:44:31.453083612 +0000 UTC m=+1268.053967254" observedRunningTime="2025-12-10 15:44:32.383646052 +0000 UTC m=+1268.984529694" watchObservedRunningTime="2025-12-10 15:44:32.385930754 +0000 UTC m=+1268.986814386" Dec 10 15:44:32 crc kubenswrapper[4755]: I1210 15:44:32.468123 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=13.468105924 podStartE2EDuration="13.468105924s" podCreationTimestamp="2025-12-10 15:44:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:44:32.459969043 +0000 UTC m=+1269.060852675" watchObservedRunningTime="2025-12-10 15:44:32.468105924 +0000 UTC m=+1269.068989556" Dec 10 15:44:32 crc kubenswrapper[4755]: I1210 
15:44:32.483934 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-5d855b58d9-fzhd2" podStartSLOduration=2.433630514 podStartE2EDuration="20.483913064s" podCreationTimestamp="2025-12-10 15:44:12 +0000 UTC" firstStartedPulling="2025-12-10 15:44:13.402821862 +0000 UTC m=+1250.003705505" lastFinishedPulling="2025-12-10 15:44:31.453104423 +0000 UTC m=+1268.053988055" observedRunningTime="2025-12-10 15:44:32.479602506 +0000 UTC m=+1269.080486158" watchObservedRunningTime="2025-12-10 15:44:32.483913064 +0000 UTC m=+1269.084796696" Dec 10 15:44:32 crc kubenswrapper[4755]: I1210 15:44:32.516740 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=15.516719053 podStartE2EDuration="15.516719053s" podCreationTimestamp="2025-12-10 15:44:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:44:32.502630001 +0000 UTC m=+1269.103513633" watchObservedRunningTime="2025-12-10 15:44:32.516719053 +0000 UTC m=+1269.117602685" Dec 10 15:44:33 crc kubenswrapper[4755]: I1210 15:44:33.022749 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-699d79cf4-kwqcl" podUID="af70a62a-214d-4ed0-8508-de8e52dd68a3" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.174:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 10 15:44:33 crc kubenswrapper[4755]: I1210 15:44:33.022956 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-699d79cf4-kwqcl" podUID="af70a62a-214d-4ed0-8508-de8e52dd68a3" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.174:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 10 15:44:33 crc kubenswrapper[4755]: I1210 15:44:33.285584 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 10 15:44:33 crc kubenswrapper[4755]: I1210 15:44:33.364168 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ca4e52f-2a99-42bb-abb3-20a9ee8594b5-config-data\") pod \"0ca4e52f-2a99-42bb-abb3-20a9ee8594b5\" (UID: \"0ca4e52f-2a99-42bb-abb3-20a9ee8594b5\") " Dec 10 15:44:33 crc kubenswrapper[4755]: I1210 15:44:33.364247 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ca4e52f-2a99-42bb-abb3-20a9ee8594b5-scripts\") pod \"0ca4e52f-2a99-42bb-abb3-20a9ee8594b5\" (UID: \"0ca4e52f-2a99-42bb-abb3-20a9ee8594b5\") " Dec 10 15:44:33 crc kubenswrapper[4755]: I1210 15:44:33.364297 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0ca4e52f-2a99-42bb-abb3-20a9ee8594b5-sg-core-conf-yaml\") pod \"0ca4e52f-2a99-42bb-abb3-20a9ee8594b5\" (UID: \"0ca4e52f-2a99-42bb-abb3-20a9ee8594b5\") " Dec 10 15:44:33 crc kubenswrapper[4755]: I1210 15:44:33.364360 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ca4e52f-2a99-42bb-abb3-20a9ee8594b5-run-httpd\") pod \"0ca4e52f-2a99-42bb-abb3-20a9ee8594b5\" (UID: \"0ca4e52f-2a99-42bb-abb3-20a9ee8594b5\") " Dec 10 15:44:33 crc kubenswrapper[4755]: I1210 15:44:33.364383 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ca4e52f-2a99-42bb-abb3-20a9ee8594b5-combined-ca-bundle\") pod \"0ca4e52f-2a99-42bb-abb3-20a9ee8594b5\" (UID: \"0ca4e52f-2a99-42bb-abb3-20a9ee8594b5\") " Dec 10 15:44:33 crc kubenswrapper[4755]: I1210 15:44:33.364444 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ca4e52f-2a99-42bb-abb3-20a9ee8594b5-log-httpd\") pod \"0ca4e52f-2a99-42bb-abb3-20a9ee8594b5\" (UID: \"0ca4e52f-2a99-42bb-abb3-20a9ee8594b5\") " Dec 10 15:44:33 crc kubenswrapper[4755]: I1210 15:44:33.364629 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sllbg\" (UniqueName: \"kubernetes.io/projected/0ca4e52f-2a99-42bb-abb3-20a9ee8594b5-kube-api-access-sllbg\") pod \"0ca4e52f-2a99-42bb-abb3-20a9ee8594b5\" (UID: \"0ca4e52f-2a99-42bb-abb3-20a9ee8594b5\") " Dec 10 15:44:33 crc kubenswrapper[4755]: I1210 15:44:33.365012 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ca4e52f-2a99-42bb-abb3-20a9ee8594b5-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0ca4e52f-2a99-42bb-abb3-20a9ee8594b5" (UID: "0ca4e52f-2a99-42bb-abb3-20a9ee8594b5"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:44:33 crc kubenswrapper[4755]: I1210 15:44:33.365252 4755 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ca4e52f-2a99-42bb-abb3-20a9ee8594b5-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:33 crc kubenswrapper[4755]: I1210 15:44:33.365618 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ca4e52f-2a99-42bb-abb3-20a9ee8594b5-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0ca4e52f-2a99-42bb-abb3-20a9ee8594b5" (UID: "0ca4e52f-2a99-42bb-abb3-20a9ee8594b5"). 
InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:44:33 crc kubenswrapper[4755]: I1210 15:44:33.370740 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ca4e52f-2a99-42bb-abb3-20a9ee8594b5-scripts" (OuterVolumeSpecName: "scripts") pod "0ca4e52f-2a99-42bb-abb3-20a9ee8594b5" (UID: "0ca4e52f-2a99-42bb-abb3-20a9ee8594b5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:44:33 crc kubenswrapper[4755]: I1210 15:44:33.372771 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ca4e52f-2a99-42bb-abb3-20a9ee8594b5-kube-api-access-sllbg" (OuterVolumeSpecName: "kube-api-access-sllbg") pod "0ca4e52f-2a99-42bb-abb3-20a9ee8594b5" (UID: "0ca4e52f-2a99-42bb-abb3-20a9ee8594b5"). InnerVolumeSpecName "kube-api-access-sllbg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:44:33 crc kubenswrapper[4755]: I1210 15:44:33.385228 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-699d79cf4-kwqcl" podUID="af70a62a-214d-4ed0-8508-de8e52dd68a3" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.174:9311/healthcheck\": read tcp 10.217.0.2:52920->10.217.0.174:9311: read: connection reset by peer" Dec 10 15:44:33 crc kubenswrapper[4755]: I1210 15:44:33.385235 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-699d79cf4-kwqcl" podUID="af70a62a-214d-4ed0-8508-de8e52dd68a3" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.174:9311/healthcheck\": read tcp 10.217.0.2:52904->10.217.0.174:9311: read: connection reset by peer" Dec 10 15:44:33 crc kubenswrapper[4755]: I1210 15:44:33.393954 4755 generic.go:334] "Generic (PLEG): container finished" podID="0ca4e52f-2a99-42bb-abb3-20a9ee8594b5" containerID="d6adbe18dd35f5d02a30d41ed6b29a710b2fed84a0d0341a3649b250b59312c9" exitCode=0 Dec 10 15:44:33 crc kubenswrapper[4755]: I1210 15:44:33.394225 4755 generic.go:334] "Generic (PLEG): container finished" podID="0ca4e52f-2a99-42bb-abb3-20a9ee8594b5" containerID="aabe60e3adf7bc0269c08a71e4fe1d986b4d7a53ed2892180259e68165aa5cb7" exitCode=2 Dec 10 15:44:33 crc kubenswrapper[4755]: I1210 15:44:33.394237 4755 generic.go:334] "Generic (PLEG): container finished" podID="0ca4e52f-2a99-42bb-abb3-20a9ee8594b5" containerID="2eb738bec907df392d198953ddb82b679095fed813616a75999ee2ee4466451c" exitCode=0 Dec 10 15:44:33 crc kubenswrapper[4755]: I1210 15:44:33.395440 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 10 15:44:33 crc kubenswrapper[4755]: I1210 15:44:33.395513 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ca4e52f-2a99-42bb-abb3-20a9ee8594b5","Type":"ContainerDied","Data":"d6adbe18dd35f5d02a30d41ed6b29a710b2fed84a0d0341a3649b250b59312c9"} Dec 10 15:44:33 crc kubenswrapper[4755]: I1210 15:44:33.395574 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ca4e52f-2a99-42bb-abb3-20a9ee8594b5","Type":"ContainerDied","Data":"aabe60e3adf7bc0269c08a71e4fe1d986b4d7a53ed2892180259e68165aa5cb7"} Dec 10 15:44:33 crc kubenswrapper[4755]: I1210 15:44:33.395591 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ca4e52f-2a99-42bb-abb3-20a9ee8594b5","Type":"ContainerDied","Data":"2eb738bec907df392d198953ddb82b679095fed813616a75999ee2ee4466451c"} Dec 10 15:44:33 crc kubenswrapper[4755]: I1210 15:44:33.395602 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ca4e52f-2a99-42bb-abb3-20a9ee8594b5","Type":"ContainerDied","Data":"fc8b0507fb5263093be4c6eeeb13faf83f94628ca65a23e2a6d4d687727e6c5e"} Dec 10 15:44:33 crc kubenswrapper[4755]: I1210 15:44:33.395621 4755 scope.go:117] "RemoveContainer" containerID="d6adbe18dd35f5d02a30d41ed6b29a710b2fed84a0d0341a3649b250b59312c9" Dec 10 15:44:33 crc kubenswrapper[4755]: I1210 15:44:33.402737 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ca4e52f-2a99-42bb-abb3-20a9ee8594b5-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0ca4e52f-2a99-42bb-abb3-20a9ee8594b5" (UID: "0ca4e52f-2a99-42bb-abb3-20a9ee8594b5"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:44:33 crc kubenswrapper[4755]: I1210 15:44:33.433931 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ca4e52f-2a99-42bb-abb3-20a9ee8594b5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0ca4e52f-2a99-42bb-abb3-20a9ee8594b5" (UID: "0ca4e52f-2a99-42bb-abb3-20a9ee8594b5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:44:33 crc kubenswrapper[4755]: I1210 15:44:33.475740 4755 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0ca4e52f-2a99-42bb-abb3-20a9ee8594b5-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:33 crc kubenswrapper[4755]: I1210 15:44:33.475767 4755 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ca4e52f-2a99-42bb-abb3-20a9ee8594b5-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:33 crc kubenswrapper[4755]: I1210 15:44:33.475776 4755 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ca4e52f-2a99-42bb-abb3-20a9ee8594b5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:33 crc kubenswrapper[4755]: I1210 15:44:33.475788 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sllbg\" (UniqueName: \"kubernetes.io/projected/0ca4e52f-2a99-42bb-abb3-20a9ee8594b5-kube-api-access-sllbg\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:33 crc kubenswrapper[4755]: I1210 15:44:33.475796 4755 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ca4e52f-2a99-42bb-abb3-20a9ee8594b5-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:33 crc kubenswrapper[4755]: I1210 15:44:33.502630 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ca4e52f-2a99-42bb-abb3-20a9ee8594b5-config-data" (OuterVolumeSpecName: "config-data") pod "0ca4e52f-2a99-42bb-abb3-20a9ee8594b5" (UID: "0ca4e52f-2a99-42bb-abb3-20a9ee8594b5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:44:33 crc kubenswrapper[4755]: I1210 15:44:33.578800 4755 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ca4e52f-2a99-42bb-abb3-20a9ee8594b5-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:33 crc kubenswrapper[4755]: I1210 15:44:33.605153 4755 scope.go:117] "RemoveContainer" containerID="aabe60e3adf7bc0269c08a71e4fe1d986b4d7a53ed2892180259e68165aa5cb7" Dec 10 15:44:33 crc kubenswrapper[4755]: I1210 15:44:33.629233 4755 scope.go:117] "RemoveContainer" containerID="2eb738bec907df392d198953ddb82b679095fed813616a75999ee2ee4466451c" Dec 10 15:44:33 crc kubenswrapper[4755]: I1210 15:44:33.660659 4755 scope.go:117] "RemoveContainer" containerID="d6adbe18dd35f5d02a30d41ed6b29a710b2fed84a0d0341a3649b250b59312c9" Dec 10 15:44:33 crc kubenswrapper[4755]: E1210 15:44:33.661113 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6adbe18dd35f5d02a30d41ed6b29a710b2fed84a0d0341a3649b250b59312c9\": container with ID starting with d6adbe18dd35f5d02a30d41ed6b29a710b2fed84a0d0341a3649b250b59312c9 not found: ID does not exist" containerID="d6adbe18dd35f5d02a30d41ed6b29a710b2fed84a0d0341a3649b250b59312c9" Dec 10 15:44:33 crc kubenswrapper[4755]: I1210 15:44:33.661156 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6adbe18dd35f5d02a30d41ed6b29a710b2fed84a0d0341a3649b250b59312c9"} err="failed to get container status \"d6adbe18dd35f5d02a30d41ed6b29a710b2fed84a0d0341a3649b250b59312c9\": rpc error: code = NotFound desc = could not find container \"d6adbe18dd35f5d02a30d41ed6b29a710b2fed84a0d0341a3649b250b59312c9\": container 
with ID starting with d6adbe18dd35f5d02a30d41ed6b29a710b2fed84a0d0341a3649b250b59312c9 not found: ID does not exist" Dec 10 15:44:33 crc kubenswrapper[4755]: I1210 15:44:33.661181 4755 scope.go:117] "RemoveContainer" containerID="aabe60e3adf7bc0269c08a71e4fe1d986b4d7a53ed2892180259e68165aa5cb7" Dec 10 15:44:33 crc kubenswrapper[4755]: E1210 15:44:33.661447 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aabe60e3adf7bc0269c08a71e4fe1d986b4d7a53ed2892180259e68165aa5cb7\": container with ID starting with aabe60e3adf7bc0269c08a71e4fe1d986b4d7a53ed2892180259e68165aa5cb7 not found: ID does not exist" containerID="aabe60e3adf7bc0269c08a71e4fe1d986b4d7a53ed2892180259e68165aa5cb7" Dec 10 15:44:33 crc kubenswrapper[4755]: I1210 15:44:33.661500 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aabe60e3adf7bc0269c08a71e4fe1d986b4d7a53ed2892180259e68165aa5cb7"} err="failed to get container status \"aabe60e3adf7bc0269c08a71e4fe1d986b4d7a53ed2892180259e68165aa5cb7\": rpc error: code = NotFound desc = could not find container \"aabe60e3adf7bc0269c08a71e4fe1d986b4d7a53ed2892180259e68165aa5cb7\": container with ID starting with aabe60e3adf7bc0269c08a71e4fe1d986b4d7a53ed2892180259e68165aa5cb7 not found: ID does not exist" Dec 10 15:44:33 crc kubenswrapper[4755]: I1210 15:44:33.661521 4755 scope.go:117] "RemoveContainer" containerID="2eb738bec907df392d198953ddb82b679095fed813616a75999ee2ee4466451c" Dec 10 15:44:33 crc kubenswrapper[4755]: E1210 15:44:33.661808 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2eb738bec907df392d198953ddb82b679095fed813616a75999ee2ee4466451c\": container with ID starting with 2eb738bec907df392d198953ddb82b679095fed813616a75999ee2ee4466451c not found: ID does not exist" containerID="2eb738bec907df392d198953ddb82b679095fed813616a75999ee2ee4466451c" Dec 10 15:44:33 crc kubenswrapper[4755]: I1210 15:44:33.661832 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2eb738bec907df392d198953ddb82b679095fed813616a75999ee2ee4466451c"} err="failed to get container status \"2eb738bec907df392d198953ddb82b679095fed813616a75999ee2ee4466451c\": rpc error: code = NotFound desc = could not find container \"2eb738bec907df392d198953ddb82b679095fed813616a75999ee2ee4466451c\": container with ID starting with 2eb738bec907df392d198953ddb82b679095fed813616a75999ee2ee4466451c not found: ID does not exist" Dec 10 15:44:33 crc kubenswrapper[4755]: I1210 15:44:33.661850 4755 scope.go:117] "RemoveContainer" containerID="d6adbe18dd35f5d02a30d41ed6b29a710b2fed84a0d0341a3649b250b59312c9" Dec 10 15:44:33 crc kubenswrapper[4755]: I1210 15:44:33.662140 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6adbe18dd35f5d02a30d41ed6b29a710b2fed84a0d0341a3649b250b59312c9"} err="failed to get container status \"d6adbe18dd35f5d02a30d41ed6b29a710b2fed84a0d0341a3649b250b59312c9\": rpc error: code = NotFound desc = could not find container \"d6adbe18dd35f5d02a30d41ed6b29a710b2fed84a0d0341a3649b250b59312c9\": container with ID starting with d6adbe18dd35f5d02a30d41ed6b29a710b2fed84a0d0341a3649b250b59312c9 not found: ID does not exist" Dec 10 15:44:33 crc kubenswrapper[4755]: I1210 15:44:33.662160 4755 scope.go:117] "RemoveContainer" containerID="aabe60e3adf7bc0269c08a71e4fe1d986b4d7a53ed2892180259e68165aa5cb7" Dec 10 
15:44:33 crc kubenswrapper[4755]: I1210 15:44:33.662454 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aabe60e3adf7bc0269c08a71e4fe1d986b4d7a53ed2892180259e68165aa5cb7"} err="failed to get container status \"aabe60e3adf7bc0269c08a71e4fe1d986b4d7a53ed2892180259e68165aa5cb7\": rpc error: code = NotFound desc = could not find container \"aabe60e3adf7bc0269c08a71e4fe1d986b4d7a53ed2892180259e68165aa5cb7\": container with ID starting with aabe60e3adf7bc0269c08a71e4fe1d986b4d7a53ed2892180259e68165aa5cb7 not found: ID does not exist" Dec 10 15:44:33 crc kubenswrapper[4755]: I1210 15:44:33.662719 4755 scope.go:117] "RemoveContainer" containerID="2eb738bec907df392d198953ddb82b679095fed813616a75999ee2ee4466451c" Dec 10 15:44:33 crc kubenswrapper[4755]: I1210 15:44:33.664493 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2eb738bec907df392d198953ddb82b679095fed813616a75999ee2ee4466451c"} err="failed to get container status \"2eb738bec907df392d198953ddb82b679095fed813616a75999ee2ee4466451c\": rpc error: code = NotFound desc = could not find container \"2eb738bec907df392d198953ddb82b679095fed813616a75999ee2ee4466451c\": container with ID starting with 2eb738bec907df392d198953ddb82b679095fed813616a75999ee2ee4466451c not found: ID does not exist" Dec 10 15:44:33 crc kubenswrapper[4755]: I1210 15:44:33.664531 4755 scope.go:117] "RemoveContainer" containerID="d6adbe18dd35f5d02a30d41ed6b29a710b2fed84a0d0341a3649b250b59312c9" Dec 10 15:44:33 crc kubenswrapper[4755]: I1210 15:44:33.664888 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6adbe18dd35f5d02a30d41ed6b29a710b2fed84a0d0341a3649b250b59312c9"} err="failed to get container status \"d6adbe18dd35f5d02a30d41ed6b29a710b2fed84a0d0341a3649b250b59312c9\": rpc error: code = NotFound desc = could not find container \"d6adbe18dd35f5d02a30d41ed6b29a710b2fed84a0d0341a3649b250b59312c9\": container with ID starting with d6adbe18dd35f5d02a30d41ed6b29a710b2fed84a0d0341a3649b250b59312c9 not found: ID does not exist" Dec 10 15:44:33 crc kubenswrapper[4755]: I1210 15:44:33.664933 4755 scope.go:117] "RemoveContainer" containerID="aabe60e3adf7bc0269c08a71e4fe1d986b4d7a53ed2892180259e68165aa5cb7" Dec 10 15:44:33 crc kubenswrapper[4755]: I1210 15:44:33.665178 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aabe60e3adf7bc0269c08a71e4fe1d986b4d7a53ed2892180259e68165aa5cb7"} err="failed to get container status \"aabe60e3adf7bc0269c08a71e4fe1d986b4d7a53ed2892180259e68165aa5cb7\": rpc error: code = NotFound desc = could not find container \"aabe60e3adf7bc0269c08a71e4fe1d986b4d7a53ed2892180259e68165aa5cb7\": container with ID starting with aabe60e3adf7bc0269c08a71e4fe1d986b4d7a53ed2892180259e68165aa5cb7 not found: ID does not exist" Dec 10 15:44:33 crc kubenswrapper[4755]: I1210 15:44:33.665207 4755 scope.go:117] "RemoveContainer" containerID="2eb738bec907df392d198953ddb82b679095fed813616a75999ee2ee4466451c" Dec 10 15:44:33 crc kubenswrapper[4755]: I1210 15:44:33.665630 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2eb738bec907df392d198953ddb82b679095fed813616a75999ee2ee4466451c"} err="failed to get container status \"2eb738bec907df392d198953ddb82b679095fed813616a75999ee2ee4466451c\": rpc error: code = NotFound desc = could not find container \"2eb738bec907df392d198953ddb82b679095fed813616a75999ee2ee4466451c\": 
container with ID starting with 2eb738bec907df392d198953ddb82b679095fed813616a75999ee2ee4466451c not found: ID does not exist" Dec 10 15:44:33 crc kubenswrapper[4755]: I1210 15:44:33.845069 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 10 15:44:33 crc kubenswrapper[4755]: I1210 15:44:33.861689 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 10 15:44:33 crc kubenswrapper[4755]: I1210 15:44:33.879499 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 10 15:44:33 crc kubenswrapper[4755]: E1210 15:44:33.880147 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b52de9e0-e981-4d39-addb-6c732611ea50" containerName="dnsmasq-dns" Dec 10 15:44:33 crc kubenswrapper[4755]: I1210 15:44:33.880167 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="b52de9e0-e981-4d39-addb-6c732611ea50" containerName="dnsmasq-dns" Dec 10 15:44:33 crc kubenswrapper[4755]: E1210 15:44:33.880190 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ca4e52f-2a99-42bb-abb3-20a9ee8594b5" containerName="sg-core" Dec 10 15:44:33 crc kubenswrapper[4755]: I1210 15:44:33.880197 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ca4e52f-2a99-42bb-abb3-20a9ee8594b5" containerName="sg-core" Dec 10 15:44:33 crc kubenswrapper[4755]: E1210 15:44:33.880210 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ca4e52f-2a99-42bb-abb3-20a9ee8594b5" containerName="proxy-httpd" Dec 10 15:44:33 crc kubenswrapper[4755]: I1210 15:44:33.880216 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ca4e52f-2a99-42bb-abb3-20a9ee8594b5" containerName="proxy-httpd" Dec 10 15:44:33 crc kubenswrapper[4755]: E1210 15:44:33.880230 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b52de9e0-e981-4d39-addb-6c732611ea50" containerName="init" Dec 10 15:44:33 crc kubenswrapper[4755]: I1210 15:44:33.880236 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="b52de9e0-e981-4d39-addb-6c732611ea50" containerName="init" Dec 10 15:44:33 crc kubenswrapper[4755]: E1210 15:44:33.880246 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ca4e52f-2a99-42bb-abb3-20a9ee8594b5" containerName="ceilometer-notification-agent" Dec 10 15:44:33 crc kubenswrapper[4755]: I1210 15:44:33.880251 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ca4e52f-2a99-42bb-abb3-20a9ee8594b5" containerName="ceilometer-notification-agent" Dec 10 15:44:33 crc kubenswrapper[4755]: I1210 15:44:33.880657 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ca4e52f-2a99-42bb-abb3-20a9ee8594b5" containerName="sg-core" Dec 10 15:44:33 crc kubenswrapper[4755]: I1210 15:44:33.880686 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ca4e52f-2a99-42bb-abb3-20a9ee8594b5" containerName="ceilometer-notification-agent" Dec 10 15:44:33 crc kubenswrapper[4755]: I1210 15:44:33.880698 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ca4e52f-2a99-42bb-abb3-20a9ee8594b5" containerName="proxy-httpd" Dec 10 15:44:33 crc kubenswrapper[4755]: I1210 15:44:33.880709 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="b52de9e0-e981-4d39-addb-6c732611ea50" containerName="dnsmasq-dns" Dec 10 15:44:33 crc kubenswrapper[4755]: I1210 15:44:33.882711 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 10 15:44:33 crc kubenswrapper[4755]: I1210 15:44:33.884935 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 10 15:44:33 crc kubenswrapper[4755]: I1210 15:44:33.885184 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 10 15:44:33 crc kubenswrapper[4755]: I1210 15:44:33.891558 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 10 15:44:34 crc kubenswrapper[4755]: I1210 15:44:34.003975 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-699d79cf4-kwqcl" Dec 10 15:44:34 crc kubenswrapper[4755]: I1210 15:44:34.010853 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05ad143b-bb62-4f04-94da-b4473be95da2-log-httpd\") pod \"ceilometer-0\" (UID: \"05ad143b-bb62-4f04-94da-b4473be95da2\") " pod="openstack/ceilometer-0" Dec 10 15:44:34 crc kubenswrapper[4755]: I1210 15:44:34.011152 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbhgg\" (UniqueName: \"kubernetes.io/projected/05ad143b-bb62-4f04-94da-b4473be95da2-kube-api-access-xbhgg\") pod \"ceilometer-0\" (UID: \"05ad143b-bb62-4f04-94da-b4473be95da2\") " pod="openstack/ceilometer-0" Dec 10 15:44:34 crc kubenswrapper[4755]: I1210 15:44:34.011195 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05ad143b-bb62-4f04-94da-b4473be95da2-scripts\") pod \"ceilometer-0\" (UID: \"05ad143b-bb62-4f04-94da-b4473be95da2\") " pod="openstack/ceilometer-0" Dec 10 15:44:34 crc kubenswrapper[4755]: I1210 15:44:34.011258 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05ad143b-bb62-4f04-94da-b4473be95da2-run-httpd\") pod \"ceilometer-0\" (UID: \"05ad143b-bb62-4f04-94da-b4473be95da2\") " pod="openstack/ceilometer-0" Dec 10 15:44:34 crc kubenswrapper[4755]: I1210 15:44:34.011277 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/05ad143b-bb62-4f04-94da-b4473be95da2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"05ad143b-bb62-4f04-94da-b4473be95da2\") " pod="openstack/ceilometer-0" Dec 10 15:44:34 crc kubenswrapper[4755]: I1210 15:44:34.011328 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05ad143b-bb62-4f04-94da-b4473be95da2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"05ad143b-bb62-4f04-94da-b4473be95da2\") " pod="openstack/ceilometer-0" Dec 10 15:44:34 crc kubenswrapper[4755]: I1210 15:44:34.011353 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05ad143b-bb62-4f04-94da-b4473be95da2-config-data\") pod \"ceilometer-0\" (UID: \"05ad143b-bb62-4f04-94da-b4473be95da2\") " pod="openstack/ceilometer-0" Dec 10 15:44:34 crc kubenswrapper[4755]: E1210 15:44:34.089320 4755 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9b9ab1e5_2daa_4057_84e3_50bef68bbaca.slice/crio-conmon-d52536c354af758c503c73ece6e28c53b5786a281589f9ca634611750884ffef.scope\": RecentStats: unable to find data in memory cache]" Dec 10 15:44:34 crc kubenswrapper[4755]: I1210 15:44:34.111961 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9f5h\" (UniqueName: \"kubernetes.io/projected/af70a62a-214d-4ed0-8508-de8e52dd68a3-kube-api-access-c9f5h\") pod \"af70a62a-214d-4ed0-8508-de8e52dd68a3\" (UID: \"af70a62a-214d-4ed0-8508-de8e52dd68a3\") " Dec 10 15:44:34 crc kubenswrapper[4755]: I1210 15:44:34.112154 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af70a62a-214d-4ed0-8508-de8e52dd68a3-combined-ca-bundle\") pod \"af70a62a-214d-4ed0-8508-de8e52dd68a3\" (UID: \"af70a62a-214d-4ed0-8508-de8e52dd68a3\") " Dec 10 15:44:34 crc kubenswrapper[4755]: I1210 15:44:34.112296 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af70a62a-214d-4ed0-8508-de8e52dd68a3-config-data\") pod \"af70a62a-214d-4ed0-8508-de8e52dd68a3\" (UID: \"af70a62a-214d-4ed0-8508-de8e52dd68a3\") " Dec 10 15:44:34 crc kubenswrapper[4755]: I1210 15:44:34.112432 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/af70a62a-214d-4ed0-8508-de8e52dd68a3-config-data-custom\") pod \"af70a62a-214d-4ed0-8508-de8e52dd68a3\" (UID: \"af70a62a-214d-4ed0-8508-de8e52dd68a3\") " Dec 10 15:44:34 crc kubenswrapper[4755]: I1210 15:44:34.112623 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af70a62a-214d-4ed0-8508-de8e52dd68a3-logs\") pod \"af70a62a-214d-4ed0-8508-de8e52dd68a3\" (UID: \"af70a62a-214d-4ed0-8508-de8e52dd68a3\") " Dec 10 15:44:34 crc kubenswrapper[4755]: I1210 15:44:34.113205 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05ad143b-bb62-4f04-94da-b4473be95da2-scripts\") pod \"ceilometer-0\" (UID: \"05ad143b-bb62-4f04-94da-b4473be95da2\") " pod="openstack/ceilometer-0" Dec 10 15:44:34 crc kubenswrapper[4755]: I1210 15:44:34.113341 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05ad143b-bb62-4f04-94da-b4473be95da2-run-httpd\") pod \"ceilometer-0\" (UID: \"05ad143b-bb62-4f04-94da-b4473be95da2\") " pod="openstack/ceilometer-0" Dec 10 15:44:34 crc kubenswrapper[4755]: I1210 15:44:34.113407 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/05ad143b-bb62-4f04-94da-b4473be95da2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"05ad143b-bb62-4f04-94da-b4473be95da2\") " pod="openstack/ceilometer-0" Dec 10 15:44:34 crc kubenswrapper[4755]: I1210 15:44:34.113525 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05ad143b-bb62-4f04-94da-b4473be95da2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"05ad143b-bb62-4f04-94da-b4473be95da2\") " pod="openstack/ceilometer-0" Dec 10 15:44:34 crc kubenswrapper[4755]: I1210 15:44:34.113617 4755 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05ad143b-bb62-4f04-94da-b4473be95da2-config-data\") pod \"ceilometer-0\" (UID: \"05ad143b-bb62-4f04-94da-b4473be95da2\") " pod="openstack/ceilometer-0" Dec 10 15:44:34 crc kubenswrapper[4755]: I1210 15:44:34.113711 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05ad143b-bb62-4f04-94da-b4473be95da2-log-httpd\") pod \"ceilometer-0\" (UID: \"05ad143b-bb62-4f04-94da-b4473be95da2\") " pod="openstack/ceilometer-0" Dec 10 15:44:34 crc kubenswrapper[4755]: I1210 15:44:34.113787 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbhgg\" (UniqueName: \"kubernetes.io/projected/05ad143b-bb62-4f04-94da-b4473be95da2-kube-api-access-xbhgg\") pod \"ceilometer-0\" (UID: \"05ad143b-bb62-4f04-94da-b4473be95da2\") " pod="openstack/ceilometer-0" Dec 10 15:44:34 crc kubenswrapper[4755]: I1210 15:44:34.117236 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05ad143b-bb62-4f04-94da-b4473be95da2-log-httpd\") pod \"ceilometer-0\" (UID: \"05ad143b-bb62-4f04-94da-b4473be95da2\") " pod="openstack/ceilometer-0" Dec 10 15:44:34 crc kubenswrapper[4755]: I1210 15:44:34.117631 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05ad143b-bb62-4f04-94da-b4473be95da2-run-httpd\") pod \"ceilometer-0\" (UID: \"05ad143b-bb62-4f04-94da-b4473be95da2\") " pod="openstack/ceilometer-0" Dec 10 15:44:34 crc kubenswrapper[4755]: I1210 15:44:34.118054 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af70a62a-214d-4ed0-8508-de8e52dd68a3-logs" (OuterVolumeSpecName: "logs") pod "af70a62a-214d-4ed0-8508-de8e52dd68a3" (UID: "af70a62a-214d-4ed0-8508-de8e52dd68a3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:44:34 crc kubenswrapper[4755]: I1210 15:44:34.145645 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af70a62a-214d-4ed0-8508-de8e52dd68a3-kube-api-access-c9f5h" (OuterVolumeSpecName: "kube-api-access-c9f5h") pod "af70a62a-214d-4ed0-8508-de8e52dd68a3" (UID: "af70a62a-214d-4ed0-8508-de8e52dd68a3"). InnerVolumeSpecName "kube-api-access-c9f5h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:44:34 crc kubenswrapper[4755]: I1210 15:44:34.177635 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05ad143b-bb62-4f04-94da-b4473be95da2-config-data\") pod \"ceilometer-0\" (UID: \"05ad143b-bb62-4f04-94da-b4473be95da2\") " pod="openstack/ceilometer-0" Dec 10 15:44:34 crc kubenswrapper[4755]: I1210 15:44:34.178191 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/05ad143b-bb62-4f04-94da-b4473be95da2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"05ad143b-bb62-4f04-94da-b4473be95da2\") " pod="openstack/ceilometer-0" Dec 10 15:44:34 crc kubenswrapper[4755]: I1210 15:44:34.178674 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05ad143b-bb62-4f04-94da-b4473be95da2-scripts\") pod \"ceilometer-0\" (UID: \"05ad143b-bb62-4f04-94da-b4473be95da2\") " pod="openstack/ceilometer-0" Dec 10 15:44:34 crc kubenswrapper[4755]: I1210 15:44:34.180791 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbhgg\" (UniqueName: \"kubernetes.io/projected/05ad143b-bb62-4f04-94da-b4473be95da2-kube-api-access-xbhgg\") pod \"ceilometer-0\" (UID: \"05ad143b-bb62-4f04-94da-b4473be95da2\") " pod="openstack/ceilometer-0" Dec 10 15:44:34 crc kubenswrapper[4755]: I1210 15:44:34.203176 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05ad143b-bb62-4f04-94da-b4473be95da2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"05ad143b-bb62-4f04-94da-b4473be95da2\") " pod="openstack/ceilometer-0" Dec 10 15:44:34 crc kubenswrapper[4755]: I1210 15:44:34.203277 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af70a62a-214d-4ed0-8508-de8e52dd68a3-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "af70a62a-214d-4ed0-8508-de8e52dd68a3" (UID: "af70a62a-214d-4ed0-8508-de8e52dd68a3"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:44:34 crc kubenswrapper[4755]: I1210 15:44:34.217706 4755 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/af70a62a-214d-4ed0-8508-de8e52dd68a3-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:34 crc kubenswrapper[4755]: I1210 15:44:34.217741 4755 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af70a62a-214d-4ed0-8508-de8e52dd68a3-logs\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:34 crc kubenswrapper[4755]: I1210 15:44:34.217750 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9f5h\" (UniqueName: \"kubernetes.io/projected/af70a62a-214d-4ed0-8508-de8e52dd68a3-kube-api-access-c9f5h\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:34 crc kubenswrapper[4755]: I1210 15:44:34.269165 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af70a62a-214d-4ed0-8508-de8e52dd68a3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "af70a62a-214d-4ed0-8508-de8e52dd68a3" (UID: "af70a62a-214d-4ed0-8508-de8e52dd68a3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:44:34 crc kubenswrapper[4755]: I1210 15:44:34.285184 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af70a62a-214d-4ed0-8508-de8e52dd68a3-config-data" (OuterVolumeSpecName: "config-data") pod "af70a62a-214d-4ed0-8508-de8e52dd68a3" (UID: "af70a62a-214d-4ed0-8508-de8e52dd68a3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:44:34 crc kubenswrapper[4755]: I1210 15:44:34.317516 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 10 15:44:34 crc kubenswrapper[4755]: I1210 15:44:34.323447 4755 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af70a62a-214d-4ed0-8508-de8e52dd68a3-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:34 crc kubenswrapper[4755]: I1210 15:44:34.323495 4755 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af70a62a-214d-4ed0-8508-de8e52dd68a3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:34 crc kubenswrapper[4755]: I1210 15:44:34.423220 4755 generic.go:334] "Generic (PLEG): container finished" podID="af70a62a-214d-4ed0-8508-de8e52dd68a3" containerID="eba94b308e2e1c01708d1c6a4068802247c409b7e4f8f84ffc5448b5b0fdde23" exitCode=0 Dec 10 15:44:34 crc kubenswrapper[4755]: I1210 15:44:34.423378 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-699d79cf4-kwqcl" Dec 10 15:44:34 crc kubenswrapper[4755]: I1210 15:44:34.423404 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-699d79cf4-kwqcl" event={"ID":"af70a62a-214d-4ed0-8508-de8e52dd68a3","Type":"ContainerDied","Data":"eba94b308e2e1c01708d1c6a4068802247c409b7e4f8f84ffc5448b5b0fdde23"} Dec 10 15:44:34 crc kubenswrapper[4755]: I1210 15:44:34.424430 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-699d79cf4-kwqcl" event={"ID":"af70a62a-214d-4ed0-8508-de8e52dd68a3","Type":"ContainerDied","Data":"f1136cf7065c0e75e869166f287e4df0e238bcc7e6639e40191c427c79e5d235"} Dec 10 15:44:34 crc kubenswrapper[4755]: I1210 15:44:34.424475 4755 scope.go:117] "RemoveContainer" containerID="eba94b308e2e1c01708d1c6a4068802247c409b7e4f8f84ffc5448b5b0fdde23" Dec 10 15:44:34 crc kubenswrapper[4755]: I1210 15:44:34.428878 4755 generic.go:334] "Generic (PLEG): container finished" podID="cbc4e627-8238-49b1-a0ac-48d07a29c23a" containerID="ccfba5e44c738428b1efb1985fd1190a7901f21f3afd822dd48a953a331b8305" exitCode=0 Dec 10 15:44:34 crc kubenswrapper[4755]: I1210 15:44:34.428937 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-jr6l4" event={"ID":"cbc4e627-8238-49b1-a0ac-48d07a29c23a","Type":"ContainerDied","Data":"ccfba5e44c738428b1efb1985fd1190a7901f21f3afd822dd48a953a331b8305"} Dec 10 15:44:34 crc kubenswrapper[4755]: I1210 15:44:34.432375 4755 generic.go:334] "Generic (PLEG): container finished" podID="9b9ab1e5-2daa-4057-84e3-50bef68bbaca" containerID="d52536c354af758c503c73ece6e28c53b5786a281589f9ca634611750884ffef" exitCode=0 Dec 10 15:44:34 crc kubenswrapper[4755]: I1210 15:44:34.432421 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-cwjsz" event={"ID":"9b9ab1e5-2daa-4057-84e3-50bef68bbaca","Type":"ContainerDied","Data":"d52536c354af758c503c73ece6e28c53b5786a281589f9ca634611750884ffef"} 
Dec 10 15:44:34 crc kubenswrapper[4755]: I1210 15:44:34.465285 4755 scope.go:117] "RemoveContainer" containerID="c7b3bb22d66f88d472d636b1ffd7b2a9e19fc9627f1e47c7c907fc690e35514c" Dec 10 15:44:34 crc kubenswrapper[4755]: I1210 15:44:34.481029 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-699d79cf4-kwqcl"] Dec 10 15:44:34 crc kubenswrapper[4755]: I1210 15:44:34.494209 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-699d79cf4-kwqcl"] Dec 10 15:44:34 crc kubenswrapper[4755]: I1210 15:44:34.524980 4755 scope.go:117] "RemoveContainer" containerID="eba94b308e2e1c01708d1c6a4068802247c409b7e4f8f84ffc5448b5b0fdde23" Dec 10 15:44:34 crc kubenswrapper[4755]: E1210 15:44:34.525452 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eba94b308e2e1c01708d1c6a4068802247c409b7e4f8f84ffc5448b5b0fdde23\": container with ID starting with eba94b308e2e1c01708d1c6a4068802247c409b7e4f8f84ffc5448b5b0fdde23 not found: ID does not exist" containerID="eba94b308e2e1c01708d1c6a4068802247c409b7e4f8f84ffc5448b5b0fdde23" Dec 10 15:44:34 crc kubenswrapper[4755]: I1210 15:44:34.525579 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eba94b308e2e1c01708d1c6a4068802247c409b7e4f8f84ffc5448b5b0fdde23"} err="failed to get container status \"eba94b308e2e1c01708d1c6a4068802247c409b7e4f8f84ffc5448b5b0fdde23\": rpc error: code = NotFound desc = could not find container \"eba94b308e2e1c01708d1c6a4068802247c409b7e4f8f84ffc5448b5b0fdde23\": container with ID starting with eba94b308e2e1c01708d1c6a4068802247c409b7e4f8f84ffc5448b5b0fdde23 not found: ID does not exist" Dec 10 15:44:34 crc kubenswrapper[4755]: I1210 15:44:34.525598 4755 scope.go:117] "RemoveContainer" containerID="c7b3bb22d66f88d472d636b1ffd7b2a9e19fc9627f1e47c7c907fc690e35514c" Dec 10 15:44:34 crc kubenswrapper[4755]: E1210 15:44:34.525937 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7b3bb22d66f88d472d636b1ffd7b2a9e19fc9627f1e47c7c907fc690e35514c\": container with ID starting with c7b3bb22d66f88d472d636b1ffd7b2a9e19fc9627f1e47c7c907fc690e35514c not found: ID does not exist" containerID="c7b3bb22d66f88d472d636b1ffd7b2a9e19fc9627f1e47c7c907fc690e35514c" Dec 10 15:44:34 crc kubenswrapper[4755]: I1210 15:44:34.525959 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7b3bb22d66f88d472d636b1ffd7b2a9e19fc9627f1e47c7c907fc690e35514c"} err="failed to get container status \"c7b3bb22d66f88d472d636b1ffd7b2a9e19fc9627f1e47c7c907fc690e35514c\": rpc error: code = NotFound desc = could not find container \"c7b3bb22d66f88d472d636b1ffd7b2a9e19fc9627f1e47c7c907fc690e35514c\": container with ID starting with c7b3bb22d66f88d472d636b1ffd7b2a9e19fc9627f1e47c7c907fc690e35514c not found: ID does not exist" Dec 10 15:44:34 crc kubenswrapper[4755]: I1210 15:44:34.780455 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 10 15:44:34 crc kubenswrapper[4755]: W1210 15:44:34.782903 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod05ad143b_bb62_4f04_94da_b4473be95da2.slice/crio-08eefd5afb1069d855e6673233228563625c2e85281cfd67731c650b82d5e4ac WatchSource:0}: Error finding container 08eefd5afb1069d855e6673233228563625c2e85281cfd67731c650b82d5e4ac: Status 
404 returned error can't find the container with id 08eefd5afb1069d855e6673233228563625c2e85281cfd67731c650b82d5e4ac Dec 10 15:44:35 crc kubenswrapper[4755]: I1210 15:44:35.441659 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"05ad143b-bb62-4f04-94da-b4473be95da2","Type":"ContainerStarted","Data":"08eefd5afb1069d855e6673233228563625c2e85281cfd67731c650b82d5e4ac"} Dec 10 15:44:35 crc kubenswrapper[4755]: I1210 15:44:35.771138 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ca4e52f-2a99-42bb-abb3-20a9ee8594b5" path="/var/lib/kubelet/pods/0ca4e52f-2a99-42bb-abb3-20a9ee8594b5/volumes" Dec 10 15:44:35 crc kubenswrapper[4755]: I1210 15:44:35.772987 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af70a62a-214d-4ed0-8508-de8e52dd68a3" path="/var/lib/kubelet/pods/af70a62a-214d-4ed0-8508-de8e52dd68a3/volumes" Dec 10 15:44:35 crc kubenswrapper[4755]: I1210 15:44:35.973252 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-cwjsz" Dec 10 15:44:35 crc kubenswrapper[4755]: I1210 15:44:35.982883 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-jr6l4" Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.055552 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/cbc4e627-8238-49b1-a0ac-48d07a29c23a-certs\") pod \"cbc4e627-8238-49b1-a0ac-48d07a29c23a\" (UID: \"cbc4e627-8238-49b1-a0ac-48d07a29c23a\") " Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.055613 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9b9ab1e5-2daa-4057-84e3-50bef68bbaca-etc-machine-id\") pod \"9b9ab1e5-2daa-4057-84e3-50bef68bbaca\" (UID: \"9b9ab1e5-2daa-4057-84e3-50bef68bbaca\") " Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.055708 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ff59l\" (UniqueName: \"kubernetes.io/projected/9b9ab1e5-2daa-4057-84e3-50bef68bbaca-kube-api-access-ff59l\") pod \"9b9ab1e5-2daa-4057-84e3-50bef68bbaca\" (UID: \"9b9ab1e5-2daa-4057-84e3-50bef68bbaca\") " Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.055781 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hvvl\" (UniqueName: \"kubernetes.io/projected/cbc4e627-8238-49b1-a0ac-48d07a29c23a-kube-api-access-7hvvl\") pod \"cbc4e627-8238-49b1-a0ac-48d07a29c23a\" (UID: \"cbc4e627-8238-49b1-a0ac-48d07a29c23a\") " Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.055814 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbc4e627-8238-49b1-a0ac-48d07a29c23a-scripts\") pod \"cbc4e627-8238-49b1-a0ac-48d07a29c23a\" (UID: \"cbc4e627-8238-49b1-a0ac-48d07a29c23a\") " Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.055864 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b9ab1e5-2daa-4057-84e3-50bef68bbaca-combined-ca-bundle\") pod \"9b9ab1e5-2daa-4057-84e3-50bef68bbaca\" (UID: \"9b9ab1e5-2daa-4057-84e3-50bef68bbaca\") " Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.055890 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/9b9ab1e5-2daa-4057-84e3-50bef68bbaca-config-data\") pod \"9b9ab1e5-2daa-4057-84e3-50bef68bbaca\" (UID: \"9b9ab1e5-2daa-4057-84e3-50bef68bbaca\") " Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.055923 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbc4e627-8238-49b1-a0ac-48d07a29c23a-combined-ca-bundle\") pod \"cbc4e627-8238-49b1-a0ac-48d07a29c23a\" (UID: \"cbc4e627-8238-49b1-a0ac-48d07a29c23a\") " Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.055959 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9b9ab1e5-2daa-4057-84e3-50bef68bbaca-db-sync-config-data\") pod \"9b9ab1e5-2daa-4057-84e3-50bef68bbaca\" (UID: \"9b9ab1e5-2daa-4057-84e3-50bef68bbaca\") " Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.056024 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b9ab1e5-2daa-4057-84e3-50bef68bbaca-scripts\") pod \"9b9ab1e5-2daa-4057-84e3-50bef68bbaca\" (UID: \"9b9ab1e5-2daa-4057-84e3-50bef68bbaca\") " Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.056065 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbc4e627-8238-49b1-a0ac-48d07a29c23a-config-data\") pod \"cbc4e627-8238-49b1-a0ac-48d07a29c23a\" (UID: \"cbc4e627-8238-49b1-a0ac-48d07a29c23a\") " Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.058740 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b9ab1e5-2daa-4057-84e3-50bef68bbaca-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "9b9ab1e5-2daa-4057-84e3-50bef68bbaca" (UID: "9b9ab1e5-2daa-4057-84e3-50bef68bbaca"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.062594 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b9ab1e5-2daa-4057-84e3-50bef68bbaca-scripts" (OuterVolumeSpecName: "scripts") pod "9b9ab1e5-2daa-4057-84e3-50bef68bbaca" (UID: "9b9ab1e5-2daa-4057-84e3-50bef68bbaca"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.062938 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbc4e627-8238-49b1-a0ac-48d07a29c23a-scripts" (OuterVolumeSpecName: "scripts") pod "cbc4e627-8238-49b1-a0ac-48d07a29c23a" (UID: "cbc4e627-8238-49b1-a0ac-48d07a29c23a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.065042 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b9ab1e5-2daa-4057-84e3-50bef68bbaca-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "9b9ab1e5-2daa-4057-84e3-50bef68bbaca" (UID: "9b9ab1e5-2daa-4057-84e3-50bef68bbaca"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.065519 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b9ab1e5-2daa-4057-84e3-50bef68bbaca-kube-api-access-ff59l" (OuterVolumeSpecName: "kube-api-access-ff59l") pod "9b9ab1e5-2daa-4057-84e3-50bef68bbaca" (UID: "9b9ab1e5-2daa-4057-84e3-50bef68bbaca"). InnerVolumeSpecName "kube-api-access-ff59l". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.065836 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbc4e627-8238-49b1-a0ac-48d07a29c23a-certs" (OuterVolumeSpecName: "certs") pod "cbc4e627-8238-49b1-a0ac-48d07a29c23a" (UID: "cbc4e627-8238-49b1-a0ac-48d07a29c23a"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.071970 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbc4e627-8238-49b1-a0ac-48d07a29c23a-kube-api-access-7hvvl" (OuterVolumeSpecName: "kube-api-access-7hvvl") pod "cbc4e627-8238-49b1-a0ac-48d07a29c23a" (UID: "cbc4e627-8238-49b1-a0ac-48d07a29c23a"). InnerVolumeSpecName "kube-api-access-7hvvl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.088663 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbc4e627-8238-49b1-a0ac-48d07a29c23a-config-data" (OuterVolumeSpecName: "config-data") pod "cbc4e627-8238-49b1-a0ac-48d07a29c23a" (UID: "cbc4e627-8238-49b1-a0ac-48d07a29c23a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.100830 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbc4e627-8238-49b1-a0ac-48d07a29c23a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cbc4e627-8238-49b1-a0ac-48d07a29c23a" (UID: "cbc4e627-8238-49b1-a0ac-48d07a29c23a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.101511 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b9ab1e5-2daa-4057-84e3-50bef68bbaca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9b9ab1e5-2daa-4057-84e3-50bef68bbaca" (UID: "9b9ab1e5-2daa-4057-84e3-50bef68bbaca"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.115974 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b9ab1e5-2daa-4057-84e3-50bef68bbaca-config-data" (OuterVolumeSpecName: "config-data") pod "9b9ab1e5-2daa-4057-84e3-50bef68bbaca" (UID: "9b9ab1e5-2daa-4057-84e3-50bef68bbaca"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.159310 4755 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9b9ab1e5-2daa-4057-84e3-50bef68bbaca-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.159564 4755 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b9ab1e5-2daa-4057-84e3-50bef68bbaca-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.159626 4755 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbc4e627-8238-49b1-a0ac-48d07a29c23a-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.159681 4755 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/cbc4e627-8238-49b1-a0ac-48d07a29c23a-certs\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.159733 4755 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9b9ab1e5-2daa-4057-84e3-50bef68bbaca-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.159936 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ff59l\" (UniqueName: \"kubernetes.io/projected/9b9ab1e5-2daa-4057-84e3-50bef68bbaca-kube-api-access-ff59l\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.159999 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hvvl\" (UniqueName: \"kubernetes.io/projected/cbc4e627-8238-49b1-a0ac-48d07a29c23a-kube-api-access-7hvvl\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.160219 4755 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbc4e627-8238-49b1-a0ac-48d07a29c23a-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.160337 4755 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b9ab1e5-2daa-4057-84e3-50bef68bbaca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.160391 4755 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b9ab1e5-2daa-4057-84e3-50bef68bbaca-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.160520 4755 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbc4e627-8238-49b1-a0ac-48d07a29c23a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.386586 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6d5449c6d4-5rw2f" Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.457085 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-jr6l4" event={"ID":"cbc4e627-8238-49b1-a0ac-48d07a29c23a","Type":"ContainerDied","Data":"c08ece9847a439ede0169a8bee4c81ad7536b4ffcc9b7b12ea55f6e32538368f"} Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.457186 4755 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c08ece9847a439ede0169a8bee4c81ad7536b4ffcc9b7b12ea55f6e32538368f" Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.457454 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-jr6l4" Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.459532 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-cwjsz" event={"ID":"9b9ab1e5-2daa-4057-84e3-50bef68bbaca","Type":"ContainerDied","Data":"1c2aa86a6dd7c5c73d503209d54262107b8c28431522cc048320276dcd8b4426"} Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.459576 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c2aa86a6dd7c5c73d503209d54262107b8c28431522cc048320276dcd8b4426" Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.459673 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-cwjsz" Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.613319 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-storageinit-796tf"] Dec 10 15:44:36 crc kubenswrapper[4755]: E1210 15:44:36.613747 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbc4e627-8238-49b1-a0ac-48d07a29c23a" containerName="cloudkitty-db-sync" Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.613767 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbc4e627-8238-49b1-a0ac-48d07a29c23a" containerName="cloudkitty-db-sync" Dec 10 15:44:36 crc kubenswrapper[4755]: E1210 15:44:36.613788 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af70a62a-214d-4ed0-8508-de8e52dd68a3" containerName="barbican-api-log" Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.613794 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="af70a62a-214d-4ed0-8508-de8e52dd68a3" containerName="barbican-api-log" Dec 10 15:44:36 crc kubenswrapper[4755]: E1210 15:44:36.613813 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af70a62a-214d-4ed0-8508-de8e52dd68a3" containerName="barbican-api" Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.613820 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="af70a62a-214d-4ed0-8508-de8e52dd68a3" containerName="barbican-api" Dec 10 15:44:36 crc kubenswrapper[4755]: E1210 15:44:36.613830 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b9ab1e5-2daa-4057-84e3-50bef68bbaca" containerName="cinder-db-sync" Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.613835 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b9ab1e5-2daa-4057-84e3-50bef68bbaca" containerName="cinder-db-sync" Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.614022 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="af70a62a-214d-4ed0-8508-de8e52dd68a3" containerName="barbican-api" Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.614037 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b9ab1e5-2daa-4057-84e3-50bef68bbaca" containerName="cinder-db-sync" Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.614052 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="af70a62a-214d-4ed0-8508-de8e52dd68a3" containerName="barbican-api-log" Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.614061 4755 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="cbc4e627-8238-49b1-a0ac-48d07a29c23a" containerName="cloudkitty-db-sync" Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.614726 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-storageinit-796tf" Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.619093 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-config-data" Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.619505 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-client-internal" Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.619977 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-cloudkitty-dockercfg-6f74p" Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.620516 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.621494 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-scripts" Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.641282 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-storageinit-796tf"] Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.670719 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50f8ada4-b157-4f73-ae6b-876844b71ced-combined-ca-bundle\") pod \"cloudkitty-storageinit-796tf\" (UID: \"50f8ada4-b157-4f73-ae6b-876844b71ced\") " pod="openstack/cloudkitty-storageinit-796tf" Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.670790 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50f8ada4-b157-4f73-ae6b-876844b71ced-config-data\") pod \"cloudkitty-storageinit-796tf\" (UID: \"50f8ada4-b157-4f73-ae6b-876844b71ced\") " pod="openstack/cloudkitty-storageinit-796tf" Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.670911 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/50f8ada4-b157-4f73-ae6b-876844b71ced-certs\") pod \"cloudkitty-storageinit-796tf\" (UID: \"50f8ada4-b157-4f73-ae6b-876844b71ced\") " pod="openstack/cloudkitty-storageinit-796tf" Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.670984 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50f8ada4-b157-4f73-ae6b-876844b71ced-scripts\") pod \"cloudkitty-storageinit-796tf\" (UID: \"50f8ada4-b157-4f73-ae6b-876844b71ced\") " pod="openstack/cloudkitty-storageinit-796tf" Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.671014 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdtfc\" (UniqueName: \"kubernetes.io/projected/50f8ada4-b157-4f73-ae6b-876844b71ced-kube-api-access-kdtfc\") pod \"cloudkitty-storageinit-796tf\" (UID: \"50f8ada4-b157-4f73-ae6b-876844b71ced\") " pod="openstack/cloudkitty-storageinit-796tf" Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.773782 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50f8ada4-b157-4f73-ae6b-876844b71ced-config-data\") pod 
\"cloudkitty-storageinit-796tf\" (UID: \"50f8ada4-b157-4f73-ae6b-876844b71ced\") " pod="openstack/cloudkitty-storageinit-796tf" Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.773928 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/50f8ada4-b157-4f73-ae6b-876844b71ced-certs\") pod \"cloudkitty-storageinit-796tf\" (UID: \"50f8ada4-b157-4f73-ae6b-876844b71ced\") " pod="openstack/cloudkitty-storageinit-796tf" Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.774016 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50f8ada4-b157-4f73-ae6b-876844b71ced-scripts\") pod \"cloudkitty-storageinit-796tf\" (UID: \"50f8ada4-b157-4f73-ae6b-876844b71ced\") " pod="openstack/cloudkitty-storageinit-796tf" Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.774052 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdtfc\" (UniqueName: \"kubernetes.io/projected/50f8ada4-b157-4f73-ae6b-876844b71ced-kube-api-access-kdtfc\") pod \"cloudkitty-storageinit-796tf\" (UID: \"50f8ada4-b157-4f73-ae6b-876844b71ced\") " pod="openstack/cloudkitty-storageinit-796tf" Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.774127 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50f8ada4-b157-4f73-ae6b-876844b71ced-combined-ca-bundle\") pod \"cloudkitty-storageinit-796tf\" (UID: \"50f8ada4-b157-4f73-ae6b-876844b71ced\") " pod="openstack/cloudkitty-storageinit-796tf" Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.784121 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50f8ada4-b157-4f73-ae6b-876844b71ced-scripts\") pod \"cloudkitty-storageinit-796tf\" (UID: \"50f8ada4-b157-4f73-ae6b-876844b71ced\") " pod="openstack/cloudkitty-storageinit-796tf" Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.786128 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50f8ada4-b157-4f73-ae6b-876844b71ced-combined-ca-bundle\") pod \"cloudkitty-storageinit-796tf\" (UID: \"50f8ada4-b157-4f73-ae6b-876844b71ced\") " pod="openstack/cloudkitty-storageinit-796tf" Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.792876 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50f8ada4-b157-4f73-ae6b-876844b71ced-config-data\") pod \"cloudkitty-storageinit-796tf\" (UID: \"50f8ada4-b157-4f73-ae6b-876844b71ced\") " pod="openstack/cloudkitty-storageinit-796tf" Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.795006 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/50f8ada4-b157-4f73-ae6b-876844b71ced-certs\") pod \"cloudkitty-storageinit-796tf\" (UID: \"50f8ada4-b157-4f73-ae6b-876844b71ced\") " pod="openstack/cloudkitty-storageinit-796tf" Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.815430 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdtfc\" (UniqueName: \"kubernetes.io/projected/50f8ada4-b157-4f73-ae6b-876844b71ced-kube-api-access-kdtfc\") pod \"cloudkitty-storageinit-796tf\" (UID: \"50f8ada4-b157-4f73-ae6b-876844b71ced\") " pod="openstack/cloudkitty-storageinit-796tf" Dec 10 
15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.899098 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.901962 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.906027 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.906302 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.906416 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.914681 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-gjn7t" Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.934736 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.937409 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-storageinit-796tf" Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.979703 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/976d7784-c4ed-4590-a34c-a3f79aedf471-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"976d7784-c4ed-4590-a34c-a3f79aedf471\") " pod="openstack/cinder-scheduler-0" Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.979766 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/976d7784-c4ed-4590-a34c-a3f79aedf471-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"976d7784-c4ed-4590-a34c-a3f79aedf471\") " pod="openstack/cinder-scheduler-0" Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.979853 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/976d7784-c4ed-4590-a34c-a3f79aedf471-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"976d7784-c4ed-4590-a34c-a3f79aedf471\") " pod="openstack/cinder-scheduler-0" Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.979903 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/976d7784-c4ed-4590-a34c-a3f79aedf471-config-data\") pod \"cinder-scheduler-0\" (UID: \"976d7784-c4ed-4590-a34c-a3f79aedf471\") " pod="openstack/cinder-scheduler-0" Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.979977 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/976d7784-c4ed-4590-a34c-a3f79aedf471-scripts\") pod \"cinder-scheduler-0\" (UID: \"976d7784-c4ed-4590-a34c-a3f79aedf471\") " pod="openstack/cinder-scheduler-0" Dec 10 15:44:36 crc kubenswrapper[4755]: I1210 15:44:36.980008 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpjdh\" (UniqueName: \"kubernetes.io/projected/976d7784-c4ed-4590-a34c-a3f79aedf471-kube-api-access-zpjdh\") pod 
\"cinder-scheduler-0\" (UID: \"976d7784-c4ed-4590-a34c-a3f79aedf471\") " pod="openstack/cinder-scheduler-0" Dec 10 15:44:37 crc kubenswrapper[4755]: I1210 15:44:37.081517 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/976d7784-c4ed-4590-a34c-a3f79aedf471-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"976d7784-c4ed-4590-a34c-a3f79aedf471\") " pod="openstack/cinder-scheduler-0" Dec 10 15:44:37 crc kubenswrapper[4755]: I1210 15:44:37.081572 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/976d7784-c4ed-4590-a34c-a3f79aedf471-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"976d7784-c4ed-4590-a34c-a3f79aedf471\") " pod="openstack/cinder-scheduler-0" Dec 10 15:44:37 crc kubenswrapper[4755]: I1210 15:44:37.081630 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/976d7784-c4ed-4590-a34c-a3f79aedf471-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"976d7784-c4ed-4590-a34c-a3f79aedf471\") " pod="openstack/cinder-scheduler-0" Dec 10 15:44:37 crc kubenswrapper[4755]: I1210 15:44:37.081666 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/976d7784-c4ed-4590-a34c-a3f79aedf471-config-data\") pod \"cinder-scheduler-0\" (UID: \"976d7784-c4ed-4590-a34c-a3f79aedf471\") " pod="openstack/cinder-scheduler-0" Dec 10 15:44:37 crc kubenswrapper[4755]: I1210 15:44:37.081719 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/976d7784-c4ed-4590-a34c-a3f79aedf471-scripts\") pod \"cinder-scheduler-0\" (UID: \"976d7784-c4ed-4590-a34c-a3f79aedf471\") " pod="openstack/cinder-scheduler-0" Dec 10 15:44:37 crc kubenswrapper[4755]: I1210 15:44:37.081742 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpjdh\" (UniqueName: \"kubernetes.io/projected/976d7784-c4ed-4590-a34c-a3f79aedf471-kube-api-access-zpjdh\") pod \"cinder-scheduler-0\" (UID: \"976d7784-c4ed-4590-a34c-a3f79aedf471\") " pod="openstack/cinder-scheduler-0" Dec 10 15:44:37 crc kubenswrapper[4755]: I1210 15:44:37.082596 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/976d7784-c4ed-4590-a34c-a3f79aedf471-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"976d7784-c4ed-4590-a34c-a3f79aedf471\") " pod="openstack/cinder-scheduler-0" Dec 10 15:44:37 crc kubenswrapper[4755]: I1210 15:44:37.086325 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/976d7784-c4ed-4590-a34c-a3f79aedf471-scripts\") pod \"cinder-scheduler-0\" (UID: \"976d7784-c4ed-4590-a34c-a3f79aedf471\") " pod="openstack/cinder-scheduler-0" Dec 10 15:44:37 crc kubenswrapper[4755]: I1210 15:44:37.098160 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/976d7784-c4ed-4590-a34c-a3f79aedf471-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"976d7784-c4ed-4590-a34c-a3f79aedf471\") " pod="openstack/cinder-scheduler-0" Dec 10 15:44:37 crc kubenswrapper[4755]: I1210 15:44:37.100541 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/976d7784-c4ed-4590-a34c-a3f79aedf471-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"976d7784-c4ed-4590-a34c-a3f79aedf471\") " pod="openstack/cinder-scheduler-0" Dec 10 15:44:37 crc kubenswrapper[4755]: I1210 15:44:37.112728 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/976d7784-c4ed-4590-a34c-a3f79aedf471-config-data\") pod \"cinder-scheduler-0\" (UID: \"976d7784-c4ed-4590-a34c-a3f79aedf471\") " pod="openstack/cinder-scheduler-0" Dec 10 15:44:37 crc kubenswrapper[4755]: I1210 15:44:37.112977 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-zmh5b"] Dec 10 15:44:37 crc kubenswrapper[4755]: I1210 15:44:37.115024 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-zmh5b" Dec 10 15:44:37 crc kubenswrapper[4755]: I1210 15:44:37.145290 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpjdh\" (UniqueName: \"kubernetes.io/projected/976d7784-c4ed-4590-a34c-a3f79aedf471-kube-api-access-zpjdh\") pod \"cinder-scheduler-0\" (UID: \"976d7784-c4ed-4590-a34c-a3f79aedf471\") " pod="openstack/cinder-scheduler-0" Dec 10 15:44:37 crc kubenswrapper[4755]: I1210 15:44:37.155628 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-zmh5b"] Dec 10 15:44:37 crc kubenswrapper[4755]: I1210 15:44:37.184527 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a49be472-21ec-4f59-811e-9e1196ebaa14-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-zmh5b\" (UID: \"a49be472-21ec-4f59-811e-9e1196ebaa14\") " pod="openstack/dnsmasq-dns-6578955fd5-zmh5b" Dec 10 15:44:37 crc kubenswrapper[4755]: I1210 15:44:37.184608 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a49be472-21ec-4f59-811e-9e1196ebaa14-config\") pod \"dnsmasq-dns-6578955fd5-zmh5b\" (UID: \"a49be472-21ec-4f59-811e-9e1196ebaa14\") " pod="openstack/dnsmasq-dns-6578955fd5-zmh5b" Dec 10 15:44:37 crc kubenswrapper[4755]: I1210 15:44:37.184701 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a49be472-21ec-4f59-811e-9e1196ebaa14-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-zmh5b\" (UID: \"a49be472-21ec-4f59-811e-9e1196ebaa14\") " pod="openstack/dnsmasq-dns-6578955fd5-zmh5b" Dec 10 15:44:37 crc kubenswrapper[4755]: I1210 15:44:37.184762 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a49be472-21ec-4f59-811e-9e1196ebaa14-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-zmh5b\" (UID: \"a49be472-21ec-4f59-811e-9e1196ebaa14\") " pod="openstack/dnsmasq-dns-6578955fd5-zmh5b" Dec 10 15:44:37 crc kubenswrapper[4755]: I1210 15:44:37.184803 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnjrd\" (UniqueName: \"kubernetes.io/projected/a49be472-21ec-4f59-811e-9e1196ebaa14-kube-api-access-dnjrd\") pod \"dnsmasq-dns-6578955fd5-zmh5b\" (UID: \"a49be472-21ec-4f59-811e-9e1196ebaa14\") " pod="openstack/dnsmasq-dns-6578955fd5-zmh5b" Dec 10 15:44:37 crc kubenswrapper[4755]: I1210 15:44:37.184865 
4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a49be472-21ec-4f59-811e-9e1196ebaa14-dns-svc\") pod \"dnsmasq-dns-6578955fd5-zmh5b\" (UID: \"a49be472-21ec-4f59-811e-9e1196ebaa14\") " pod="openstack/dnsmasq-dns-6578955fd5-zmh5b" Dec 10 15:44:37 crc kubenswrapper[4755]: I1210 15:44:37.264927 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 10 15:44:37 crc kubenswrapper[4755]: I1210 15:44:37.287845 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a49be472-21ec-4f59-811e-9e1196ebaa14-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-zmh5b\" (UID: \"a49be472-21ec-4f59-811e-9e1196ebaa14\") " pod="openstack/dnsmasq-dns-6578955fd5-zmh5b" Dec 10 15:44:37 crc kubenswrapper[4755]: I1210 15:44:37.287891 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a49be472-21ec-4f59-811e-9e1196ebaa14-config\") pod \"dnsmasq-dns-6578955fd5-zmh5b\" (UID: \"a49be472-21ec-4f59-811e-9e1196ebaa14\") " pod="openstack/dnsmasq-dns-6578955fd5-zmh5b" Dec 10 15:44:37 crc kubenswrapper[4755]: I1210 15:44:37.287942 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a49be472-21ec-4f59-811e-9e1196ebaa14-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-zmh5b\" (UID: \"a49be472-21ec-4f59-811e-9e1196ebaa14\") " pod="openstack/dnsmasq-dns-6578955fd5-zmh5b" Dec 10 15:44:37 crc kubenswrapper[4755]: I1210 15:44:37.287975 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a49be472-21ec-4f59-811e-9e1196ebaa14-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-zmh5b\" (UID: \"a49be472-21ec-4f59-811e-9e1196ebaa14\") " pod="openstack/dnsmasq-dns-6578955fd5-zmh5b" Dec 10 15:44:37 crc kubenswrapper[4755]: I1210 15:44:37.288004 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnjrd\" (UniqueName: \"kubernetes.io/projected/a49be472-21ec-4f59-811e-9e1196ebaa14-kube-api-access-dnjrd\") pod \"dnsmasq-dns-6578955fd5-zmh5b\" (UID: \"a49be472-21ec-4f59-811e-9e1196ebaa14\") " pod="openstack/dnsmasq-dns-6578955fd5-zmh5b" Dec 10 15:44:37 crc kubenswrapper[4755]: I1210 15:44:37.288127 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a49be472-21ec-4f59-811e-9e1196ebaa14-dns-svc\") pod \"dnsmasq-dns-6578955fd5-zmh5b\" (UID: \"a49be472-21ec-4f59-811e-9e1196ebaa14\") " pod="openstack/dnsmasq-dns-6578955fd5-zmh5b" Dec 10 15:44:37 crc kubenswrapper[4755]: I1210 15:44:37.289487 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a49be472-21ec-4f59-811e-9e1196ebaa14-dns-svc\") pod \"dnsmasq-dns-6578955fd5-zmh5b\" (UID: \"a49be472-21ec-4f59-811e-9e1196ebaa14\") " pod="openstack/dnsmasq-dns-6578955fd5-zmh5b" Dec 10 15:44:37 crc kubenswrapper[4755]: I1210 15:44:37.289514 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a49be472-21ec-4f59-811e-9e1196ebaa14-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-zmh5b\" (UID: \"a49be472-21ec-4f59-811e-9e1196ebaa14\") " 
pod="openstack/dnsmasq-dns-6578955fd5-zmh5b" Dec 10 15:44:37 crc kubenswrapper[4755]: I1210 15:44:37.290004 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a49be472-21ec-4f59-811e-9e1196ebaa14-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-zmh5b\" (UID: \"a49be472-21ec-4f59-811e-9e1196ebaa14\") " pod="openstack/dnsmasq-dns-6578955fd5-zmh5b" Dec 10 15:44:37 crc kubenswrapper[4755]: I1210 15:44:37.290158 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a49be472-21ec-4f59-811e-9e1196ebaa14-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-zmh5b\" (UID: \"a49be472-21ec-4f59-811e-9e1196ebaa14\") " pod="openstack/dnsmasq-dns-6578955fd5-zmh5b" Dec 10 15:44:37 crc kubenswrapper[4755]: I1210 15:44:37.290807 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a49be472-21ec-4f59-811e-9e1196ebaa14-config\") pod \"dnsmasq-dns-6578955fd5-zmh5b\" (UID: \"a49be472-21ec-4f59-811e-9e1196ebaa14\") " pod="openstack/dnsmasq-dns-6578955fd5-zmh5b" Dec 10 15:44:37 crc kubenswrapper[4755]: I1210 15:44:37.333113 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 10 15:44:37 crc kubenswrapper[4755]: I1210 15:44:37.334691 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 10 15:44:37 crc kubenswrapper[4755]: I1210 15:44:37.336820 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 10 15:44:37 crc kubenswrapper[4755]: I1210 15:44:37.343781 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnjrd\" (UniqueName: \"kubernetes.io/projected/a49be472-21ec-4f59-811e-9e1196ebaa14-kube-api-access-dnjrd\") pod \"dnsmasq-dns-6578955fd5-zmh5b\" (UID: \"a49be472-21ec-4f59-811e-9e1196ebaa14\") " pod="openstack/dnsmasq-dns-6578955fd5-zmh5b" Dec 10 15:44:37 crc kubenswrapper[4755]: I1210 15:44:37.358397 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 10 15:44:37 crc kubenswrapper[4755]: I1210 15:44:37.390567 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plxs8\" (UniqueName: \"kubernetes.io/projected/c9a28c10-6e42-4e1d-9374-67db311e98ff-kube-api-access-plxs8\") pod \"cinder-api-0\" (UID: \"c9a28c10-6e42-4e1d-9374-67db311e98ff\") " pod="openstack/cinder-api-0" Dec 10 15:44:37 crc kubenswrapper[4755]: I1210 15:44:37.390628 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9a28c10-6e42-4e1d-9374-67db311e98ff-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c9a28c10-6e42-4e1d-9374-67db311e98ff\") " pod="openstack/cinder-api-0" Dec 10 15:44:37 crc kubenswrapper[4755]: I1210 15:44:37.390678 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9a28c10-6e42-4e1d-9374-67db311e98ff-scripts\") pod \"cinder-api-0\" (UID: \"c9a28c10-6e42-4e1d-9374-67db311e98ff\") " pod="openstack/cinder-api-0" Dec 10 15:44:37 crc kubenswrapper[4755]: I1210 15:44:37.390712 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/c9a28c10-6e42-4e1d-9374-67db311e98ff-config-data-custom\") pod \"cinder-api-0\" (UID: \"c9a28c10-6e42-4e1d-9374-67db311e98ff\") " pod="openstack/cinder-api-0" Dec 10 15:44:37 crc kubenswrapper[4755]: I1210 15:44:37.391290 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9a28c10-6e42-4e1d-9374-67db311e98ff-config-data\") pod \"cinder-api-0\" (UID: \"c9a28c10-6e42-4e1d-9374-67db311e98ff\") " pod="openstack/cinder-api-0" Dec 10 15:44:37 crc kubenswrapper[4755]: I1210 15:44:37.391346 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c9a28c10-6e42-4e1d-9374-67db311e98ff-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c9a28c10-6e42-4e1d-9374-67db311e98ff\") " pod="openstack/cinder-api-0" Dec 10 15:44:37 crc kubenswrapper[4755]: I1210 15:44:37.391365 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9a28c10-6e42-4e1d-9374-67db311e98ff-logs\") pod \"cinder-api-0\" (UID: \"c9a28c10-6e42-4e1d-9374-67db311e98ff\") " pod="openstack/cinder-api-0" Dec 10 15:44:37 crc kubenswrapper[4755]: I1210 15:44:37.484947 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"05ad143b-bb62-4f04-94da-b4473be95da2","Type":"ContainerStarted","Data":"1317119dfc28922ac22eb17eeb3e7b438ff12a3a3dcf1a08f48d30a64c9de0b5"} Dec 10 15:44:37 crc kubenswrapper[4755]: I1210 15:44:37.495761 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c9a28c10-6e42-4e1d-9374-67db311e98ff-config-data-custom\") pod \"cinder-api-0\" (UID: \"c9a28c10-6e42-4e1d-9374-67db311e98ff\") " pod="openstack/cinder-api-0" Dec 10 15:44:37 crc kubenswrapper[4755]: I1210 15:44:37.495888 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9a28c10-6e42-4e1d-9374-67db311e98ff-config-data\") pod \"cinder-api-0\" (UID: \"c9a28c10-6e42-4e1d-9374-67db311e98ff\") " pod="openstack/cinder-api-0" Dec 10 15:44:37 crc kubenswrapper[4755]: I1210 15:44:37.495923 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c9a28c10-6e42-4e1d-9374-67db311e98ff-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c9a28c10-6e42-4e1d-9374-67db311e98ff\") " pod="openstack/cinder-api-0" Dec 10 15:44:37 crc kubenswrapper[4755]: I1210 15:44:37.495942 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9a28c10-6e42-4e1d-9374-67db311e98ff-logs\") pod \"cinder-api-0\" (UID: \"c9a28c10-6e42-4e1d-9374-67db311e98ff\") " pod="openstack/cinder-api-0" Dec 10 15:44:37 crc kubenswrapper[4755]: I1210 15:44:37.495980 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plxs8\" (UniqueName: \"kubernetes.io/projected/c9a28c10-6e42-4e1d-9374-67db311e98ff-kube-api-access-plxs8\") pod \"cinder-api-0\" (UID: \"c9a28c10-6e42-4e1d-9374-67db311e98ff\") " pod="openstack/cinder-api-0" Dec 10 15:44:37 crc kubenswrapper[4755]: I1210 15:44:37.496015 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c9a28c10-6e42-4e1d-9374-67db311e98ff-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c9a28c10-6e42-4e1d-9374-67db311e98ff\") " pod="openstack/cinder-api-0" Dec 10 15:44:37 crc kubenswrapper[4755]: I1210 15:44:37.496038 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9a28c10-6e42-4e1d-9374-67db311e98ff-scripts\") pod \"cinder-api-0\" (UID: \"c9a28c10-6e42-4e1d-9374-67db311e98ff\") " pod="openstack/cinder-api-0" Dec 10 15:44:37 crc kubenswrapper[4755]: I1210 15:44:37.497603 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c9a28c10-6e42-4e1d-9374-67db311e98ff-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c9a28c10-6e42-4e1d-9374-67db311e98ff\") " pod="openstack/cinder-api-0" Dec 10 15:44:37 crc kubenswrapper[4755]: I1210 15:44:37.500654 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9a28c10-6e42-4e1d-9374-67db311e98ff-scripts\") pod \"cinder-api-0\" (UID: \"c9a28c10-6e42-4e1d-9374-67db311e98ff\") " pod="openstack/cinder-api-0" Dec 10 15:44:37 crc kubenswrapper[4755]: I1210 15:44:37.501651 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9a28c10-6e42-4e1d-9374-67db311e98ff-logs\") pod \"cinder-api-0\" (UID: \"c9a28c10-6e42-4e1d-9374-67db311e98ff\") " pod="openstack/cinder-api-0" Dec 10 15:44:37 crc kubenswrapper[4755]: I1210 15:44:37.507930 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c9a28c10-6e42-4e1d-9374-67db311e98ff-config-data-custom\") pod \"cinder-api-0\" (UID: \"c9a28c10-6e42-4e1d-9374-67db311e98ff\") " pod="openstack/cinder-api-0" Dec 10 15:44:37 crc kubenswrapper[4755]: I1210 15:44:37.509049 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9a28c10-6e42-4e1d-9374-67db311e98ff-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c9a28c10-6e42-4e1d-9374-67db311e98ff\") " pod="openstack/cinder-api-0" Dec 10 15:44:37 crc kubenswrapper[4755]: I1210 15:44:37.509671 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9a28c10-6e42-4e1d-9374-67db311e98ff-config-data\") pod \"cinder-api-0\" (UID: \"c9a28c10-6e42-4e1d-9374-67db311e98ff\") " pod="openstack/cinder-api-0" Dec 10 15:44:37 crc kubenswrapper[4755]: I1210 15:44:37.511003 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-zmh5b" Dec 10 15:44:37 crc kubenswrapper[4755]: I1210 15:44:37.533320 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plxs8\" (UniqueName: \"kubernetes.io/projected/c9a28c10-6e42-4e1d-9374-67db311e98ff-kube-api-access-plxs8\") pod \"cinder-api-0\" (UID: \"c9a28c10-6e42-4e1d-9374-67db311e98ff\") " pod="openstack/cinder-api-0" Dec 10 15:44:37 crc kubenswrapper[4755]: I1210 15:44:37.670412 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 10 15:44:37 crc kubenswrapper[4755]: I1210 15:44:37.731808 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-storageinit-796tf"] Dec 10 15:44:37 crc kubenswrapper[4755]: I1210 15:44:37.886906 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 10 15:44:37 crc kubenswrapper[4755]: I1210 15:44:37.886951 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 10 15:44:37 crc kubenswrapper[4755]: I1210 15:44:37.937246 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 10 15:44:37 crc kubenswrapper[4755]: I1210 15:44:37.992065 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 10 15:44:38 crc kubenswrapper[4755]: I1210 15:44:38.021889 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 10 15:44:38 crc kubenswrapper[4755]: I1210 15:44:38.096214 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-zmh5b"] Dec 10 15:44:38 crc kubenswrapper[4755]: I1210 15:44:38.398906 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 10 15:44:38 crc kubenswrapper[4755]: W1210 15:44:38.441001 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9a28c10_6e42_4e1d_9374_67db311e98ff.slice/crio-cdcb00ac2f2f750a53dbd36a17db15a0afaefd2cc86aa3145e6b06b21ed19390 WatchSource:0}: Error finding container cdcb00ac2f2f750a53dbd36a17db15a0afaefd2cc86aa3145e6b06b21ed19390: Status 404 returned error can't find the container with id cdcb00ac2f2f750a53dbd36a17db15a0afaefd2cc86aa3145e6b06b21ed19390 Dec 10 15:44:38 crc kubenswrapper[4755]: I1210 15:44:38.496658 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"976d7784-c4ed-4590-a34c-a3f79aedf471","Type":"ContainerStarted","Data":"1ddc4b210869be273b81a506441c8aedc392f5a680e9aab389eaf5cc30e81518"} Dec 10 15:44:38 crc kubenswrapper[4755]: I1210 15:44:38.500089 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c9a28c10-6e42-4e1d-9374-67db311e98ff","Type":"ContainerStarted","Data":"cdcb00ac2f2f750a53dbd36a17db15a0afaefd2cc86aa3145e6b06b21ed19390"} Dec 10 15:44:38 crc kubenswrapper[4755]: I1210 15:44:38.517854 4755 generic.go:334] "Generic (PLEG): container finished" podID="a49be472-21ec-4f59-811e-9e1196ebaa14" containerID="87e888cccd3dd0f2689de44f9c5df08b9309506c2534ac642fdf8c6c73559346" exitCode=0 Dec 10 15:44:38 crc kubenswrapper[4755]: I1210 15:44:38.517935 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-zmh5b" event={"ID":"a49be472-21ec-4f59-811e-9e1196ebaa14","Type":"ContainerDied","Data":"87e888cccd3dd0f2689de44f9c5df08b9309506c2534ac642fdf8c6c73559346"} Dec 10 15:44:38 crc kubenswrapper[4755]: I1210 15:44:38.517958 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-zmh5b" event={"ID":"a49be472-21ec-4f59-811e-9e1196ebaa14","Type":"ContainerStarted","Data":"89768e0ba76d72ffa84a0d0d826e4b7946c5e3af3c50b3dff41f2fffd2b7ca24"} Dec 10 15:44:38 crc kubenswrapper[4755]: I1210 15:44:38.536758 4755 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/cloudkitty-storageinit-796tf" event={"ID":"50f8ada4-b157-4f73-ae6b-876844b71ced","Type":"ContainerStarted","Data":"344329be481337c544c714666aee3668da2db986f3b6e94930a4c3b05a83d634"} Dec 10 15:44:38 crc kubenswrapper[4755]: I1210 15:44:38.536829 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-796tf" event={"ID":"50f8ada4-b157-4f73-ae6b-876844b71ced","Type":"ContainerStarted","Data":"44314b74c359571c218e7d3848a745124d6be274ed6547220617742f731e7ef7"} Dec 10 15:44:38 crc kubenswrapper[4755]: I1210 15:44:38.558316 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"05ad143b-bb62-4f04-94da-b4473be95da2","Type":"ContainerStarted","Data":"8ec378a30c21ccb7364ac5b12909974c507cb0542be73687d497ffd56696ec80"} Dec 10 15:44:38 crc kubenswrapper[4755]: I1210 15:44:38.558764 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 10 15:44:38 crc kubenswrapper[4755]: I1210 15:44:38.559003 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 10 15:44:38 crc kubenswrapper[4755]: I1210 15:44:38.575896 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-storageinit-796tf" podStartSLOduration=2.575848474 podStartE2EDuration="2.575848474s" podCreationTimestamp="2025-12-10 15:44:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:44:38.567956299 +0000 UTC m=+1275.168839941" watchObservedRunningTime="2025-12-10 15:44:38.575848474 +0000 UTC m=+1275.176732106" Dec 10 15:44:38 crc kubenswrapper[4755]: I1210 15:44:38.828106 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5f46cf586c-brqwd" Dec 10 15:44:38 crc kubenswrapper[4755]: I1210 15:44:38.915004 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6d5449c6d4-5rw2f"] Dec 10 15:44:38 crc kubenswrapper[4755]: I1210 15:44:38.916238 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6d5449c6d4-5rw2f" podUID="7d152fd0-fbbf-4c7b-874a-169860ee9075" containerName="neutron-api" containerID="cri-o://61de7b2c15e59d0bd635c02e1e4985b8bfd2694a7ded31026d0d4a36cc326f17" gracePeriod=30 Dec 10 15:44:38 crc kubenswrapper[4755]: I1210 15:44:38.917093 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6d5449c6d4-5rw2f" podUID="7d152fd0-fbbf-4c7b-874a-169860ee9075" containerName="neutron-httpd" containerID="cri-o://4e05778ed82b900074c8fe7e096c3ce7cd0b6492a587cbaa2f1c4f064a7c250e" gracePeriod=30 Dec 10 15:44:39 crc kubenswrapper[4755]: I1210 15:44:39.607443 4755 generic.go:334] "Generic (PLEG): container finished" podID="7d152fd0-fbbf-4c7b-874a-169860ee9075" containerID="4e05778ed82b900074c8fe7e096c3ce7cd0b6492a587cbaa2f1c4f064a7c250e" exitCode=0 Dec 10 15:44:39 crc kubenswrapper[4755]: I1210 15:44:39.607796 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6d5449c6d4-5rw2f" event={"ID":"7d152fd0-fbbf-4c7b-874a-169860ee9075","Type":"ContainerDied","Data":"4e05778ed82b900074c8fe7e096c3ce7cd0b6492a587cbaa2f1c4f064a7c250e"} Dec 10 15:44:39 crc kubenswrapper[4755]: I1210 15:44:39.621329 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-zmh5b" 
event={"ID":"a49be472-21ec-4f59-811e-9e1196ebaa14","Type":"ContainerStarted","Data":"2780f449c877d41f07a88f85b00e84d23ba1e0e2a9549e7d1e10297cf18f367c"} Dec 10 15:44:39 crc kubenswrapper[4755]: I1210 15:44:39.621608 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6578955fd5-zmh5b" Dec 10 15:44:39 crc kubenswrapper[4755]: I1210 15:44:39.660067 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"05ad143b-bb62-4f04-94da-b4473be95da2","Type":"ContainerStarted","Data":"6755b231d887b112762de4edbe10406e7553667e317f41eb6232b913bb47796b"} Dec 10 15:44:39 crc kubenswrapper[4755]: I1210 15:44:39.688213 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6578955fd5-zmh5b" podStartSLOduration=3.688193687 podStartE2EDuration="3.688193687s" podCreationTimestamp="2025-12-10 15:44:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:44:39.6610176 +0000 UTC m=+1276.261901232" watchObservedRunningTime="2025-12-10 15:44:39.688193687 +0000 UTC m=+1276.289077319" Dec 10 15:44:39 crc kubenswrapper[4755]: I1210 15:44:39.722406 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 10 15:44:40 crc kubenswrapper[4755]: I1210 15:44:40.006692 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 10 15:44:40 crc kubenswrapper[4755]: I1210 15:44:40.006752 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 10 15:44:40 crc kubenswrapper[4755]: I1210 15:44:40.044599 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 10 15:44:40 crc kubenswrapper[4755]: I1210 15:44:40.092768 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 10 15:44:40 crc kubenswrapper[4755]: I1210 15:44:40.669442 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c9a28c10-6e42-4e1d-9374-67db311e98ff","Type":"ContainerStarted","Data":"ae1e42b42c40edab8487c9c507623fe4fef32be4b6b12116b8279a194a6aaf79"} Dec 10 15:44:40 crc kubenswrapper[4755]: I1210 15:44:40.670863 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 10 15:44:40 crc kubenswrapper[4755]: I1210 15:44:40.670889 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 10 15:44:41 crc kubenswrapper[4755]: I1210 15:44:41.679874 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"05ad143b-bb62-4f04-94da-b4473be95da2","Type":"ContainerStarted","Data":"f6915ec4e6022634899246154151fac515dd017f2f725a58c7c31fd0f5c66d3d"} Dec 10 15:44:41 crc kubenswrapper[4755]: I1210 15:44:41.680393 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 10 15:44:41 crc kubenswrapper[4755]: I1210 15:44:41.682381 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"976d7784-c4ed-4590-a34c-a3f79aedf471","Type":"ContainerStarted","Data":"261710c1e620e43715c5a98b322a0ac2cde54d540ff31a515329fe134358cde0"} Dec 10 15:44:41 crc kubenswrapper[4755]: I1210 
15:44:41.682412 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"976d7784-c4ed-4590-a34c-a3f79aedf471","Type":"ContainerStarted","Data":"a3261f341319fb5c64ca2ea75a8547cd8d042c49634738b1ced0204857c1d6a3"} Dec 10 15:44:41 crc kubenswrapper[4755]: I1210 15:44:41.684442 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c9a28c10-6e42-4e1d-9374-67db311e98ff","Type":"ContainerStarted","Data":"ae29916b6da2467ac30f6ddbdfabba6845c58c4a172da1a2e6cdb24c3c8c2e5f"} Dec 10 15:44:41 crc kubenswrapper[4755]: I1210 15:44:41.684527 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="c9a28c10-6e42-4e1d-9374-67db311e98ff" containerName="cinder-api-log" containerID="cri-o://ae1e42b42c40edab8487c9c507623fe4fef32be4b6b12116b8279a194a6aaf79" gracePeriod=30 Dec 10 15:44:41 crc kubenswrapper[4755]: I1210 15:44:41.684585 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="c9a28c10-6e42-4e1d-9374-67db311e98ff" containerName="cinder-api" containerID="cri-o://ae29916b6da2467ac30f6ddbdfabba6845c58c4a172da1a2e6cdb24c3c8c2e5f" gracePeriod=30 Dec 10 15:44:41 crc kubenswrapper[4755]: I1210 15:44:41.684563 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Dec 10 15:44:41 crc kubenswrapper[4755]: I1210 15:44:41.687474 4755 generic.go:334] "Generic (PLEG): container finished" podID="50f8ada4-b157-4f73-ae6b-876844b71ced" containerID="344329be481337c544c714666aee3668da2db986f3b6e94930a4c3b05a83d634" exitCode=0 Dec 10 15:44:41 crc kubenswrapper[4755]: I1210 15:44:41.687508 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-796tf" event={"ID":"50f8ada4-b157-4f73-ae6b-876844b71ced","Type":"ContainerDied","Data":"344329be481337c544c714666aee3668da2db986f3b6e94930a4c3b05a83d634"} Dec 10 15:44:41 crc kubenswrapper[4755]: I1210 15:44:41.729209 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.659002918 podStartE2EDuration="8.729183719s" podCreationTimestamp="2025-12-10 15:44:33 +0000 UTC" firstStartedPulling="2025-12-10 15:44:34.784970283 +0000 UTC m=+1271.385853915" lastFinishedPulling="2025-12-10 15:44:40.855151094 +0000 UTC m=+1277.456034716" observedRunningTime="2025-12-10 15:44:41.718974271 +0000 UTC m=+1278.319857903" watchObservedRunningTime="2025-12-10 15:44:41.729183719 +0000 UTC m=+1278.330067351" Dec 10 15:44:41 crc kubenswrapper[4755]: I1210 15:44:41.764923 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.750034967 podStartE2EDuration="5.764908779s" podCreationTimestamp="2025-12-10 15:44:36 +0000 UTC" firstStartedPulling="2025-12-10 15:44:37.964814098 +0000 UTC m=+1274.565697740" lastFinishedPulling="2025-12-10 15:44:39.97968792 +0000 UTC m=+1276.580571552" observedRunningTime="2025-12-10 15:44:41.764435395 +0000 UTC m=+1278.365319027" watchObservedRunningTime="2025-12-10 15:44:41.764908779 +0000 UTC m=+1278.365792411" Dec 10 15:44:41 crc kubenswrapper[4755]: I1210 15:44:41.817514 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.817493465 podStartE2EDuration="4.817493465s" podCreationTimestamp="2025-12-10 15:44:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:44:41.812392817 +0000 UTC m=+1278.413276449" watchObservedRunningTime="2025-12-10 15:44:41.817493465 +0000 UTC m=+1278.418377097" Dec 10 15:44:42 crc kubenswrapper[4755]: I1210 15:44:42.266408 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 10 15:44:42 crc kubenswrapper[4755]: I1210 15:44:42.719705 4755 generic.go:334] "Generic (PLEG): container finished" podID="c9a28c10-6e42-4e1d-9374-67db311e98ff" containerID="ae29916b6da2467ac30f6ddbdfabba6845c58c4a172da1a2e6cdb24c3c8c2e5f" exitCode=0 Dec 10 15:44:42 crc kubenswrapper[4755]: I1210 15:44:42.720020 4755 generic.go:334] "Generic (PLEG): container finished" podID="c9a28c10-6e42-4e1d-9374-67db311e98ff" containerID="ae1e42b42c40edab8487c9c507623fe4fef32be4b6b12116b8279a194a6aaf79" exitCode=143 Dec 10 15:44:42 crc kubenswrapper[4755]: I1210 15:44:42.721308 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c9a28c10-6e42-4e1d-9374-67db311e98ff","Type":"ContainerDied","Data":"ae29916b6da2467ac30f6ddbdfabba6845c58c4a172da1a2e6cdb24c3c8c2e5f"} Dec 10 15:44:42 crc kubenswrapper[4755]: I1210 15:44:42.721333 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c9a28c10-6e42-4e1d-9374-67db311e98ff","Type":"ContainerDied","Data":"ae1e42b42c40edab8487c9c507623fe4fef32be4b6b12116b8279a194a6aaf79"} Dec 10 15:44:42 crc kubenswrapper[4755]: I1210 15:44:42.721924 4755 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 10 15:44:42 crc kubenswrapper[4755]: I1210 15:44:42.721935 4755 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 10 15:44:42 crc kubenswrapper[4755]: I1210 15:44:42.969292 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 10 15:44:43 crc kubenswrapper[4755]: I1210 15:44:43.055391 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9a28c10-6e42-4e1d-9374-67db311e98ff-combined-ca-bundle\") pod \"c9a28c10-6e42-4e1d-9374-67db311e98ff\" (UID: \"c9a28c10-6e42-4e1d-9374-67db311e98ff\") " Dec 10 15:44:43 crc kubenswrapper[4755]: I1210 15:44:43.057015 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c9a28c10-6e42-4e1d-9374-67db311e98ff-etc-machine-id\") pod \"c9a28c10-6e42-4e1d-9374-67db311e98ff\" (UID: \"c9a28c10-6e42-4e1d-9374-67db311e98ff\") " Dec 10 15:44:43 crc kubenswrapper[4755]: I1210 15:44:43.057028 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9a28c10-6e42-4e1d-9374-67db311e98ff-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "c9a28c10-6e42-4e1d-9374-67db311e98ff" (UID: "c9a28c10-6e42-4e1d-9374-67db311e98ff"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 15:44:43 crc kubenswrapper[4755]: I1210 15:44:43.057109 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plxs8\" (UniqueName: \"kubernetes.io/projected/c9a28c10-6e42-4e1d-9374-67db311e98ff-kube-api-access-plxs8\") pod \"c9a28c10-6e42-4e1d-9374-67db311e98ff\" (UID: \"c9a28c10-6e42-4e1d-9374-67db311e98ff\") " Dec 10 15:44:43 crc kubenswrapper[4755]: I1210 15:44:43.057302 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9a28c10-6e42-4e1d-9374-67db311e98ff-logs\") pod \"c9a28c10-6e42-4e1d-9374-67db311e98ff\" (UID: \"c9a28c10-6e42-4e1d-9374-67db311e98ff\") " Dec 10 15:44:43 crc kubenswrapper[4755]: I1210 15:44:43.057334 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9a28c10-6e42-4e1d-9374-67db311e98ff-scripts\") pod \"c9a28c10-6e42-4e1d-9374-67db311e98ff\" (UID: \"c9a28c10-6e42-4e1d-9374-67db311e98ff\") " Dec 10 15:44:43 crc kubenswrapper[4755]: I1210 15:44:43.057642 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9a28c10-6e42-4e1d-9374-67db311e98ff-logs" (OuterVolumeSpecName: "logs") pod "c9a28c10-6e42-4e1d-9374-67db311e98ff" (UID: "c9a28c10-6e42-4e1d-9374-67db311e98ff"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:44:43 crc kubenswrapper[4755]: I1210 15:44:43.057871 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9a28c10-6e42-4e1d-9374-67db311e98ff-config-data\") pod \"c9a28c10-6e42-4e1d-9374-67db311e98ff\" (UID: \"c9a28c10-6e42-4e1d-9374-67db311e98ff\") " Dec 10 15:44:43 crc kubenswrapper[4755]: I1210 15:44:43.057904 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c9a28c10-6e42-4e1d-9374-67db311e98ff-config-data-custom\") pod \"c9a28c10-6e42-4e1d-9374-67db311e98ff\" (UID: \"c9a28c10-6e42-4e1d-9374-67db311e98ff\") " Dec 10 15:44:43 crc kubenswrapper[4755]: I1210 15:44:43.058752 4755 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9a28c10-6e42-4e1d-9374-67db311e98ff-logs\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:43 crc kubenswrapper[4755]: I1210 15:44:43.058767 4755 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c9a28c10-6e42-4e1d-9374-67db311e98ff-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:43 crc kubenswrapper[4755]: I1210 15:44:43.063582 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9a28c10-6e42-4e1d-9374-67db311e98ff-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c9a28c10-6e42-4e1d-9374-67db311e98ff" (UID: "c9a28c10-6e42-4e1d-9374-67db311e98ff"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:44:43 crc kubenswrapper[4755]: I1210 15:44:43.064783 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9a28c10-6e42-4e1d-9374-67db311e98ff-scripts" (OuterVolumeSpecName: "scripts") pod "c9a28c10-6e42-4e1d-9374-67db311e98ff" (UID: "c9a28c10-6e42-4e1d-9374-67db311e98ff"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:44:43 crc kubenswrapper[4755]: I1210 15:44:43.085806 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9a28c10-6e42-4e1d-9374-67db311e98ff-kube-api-access-plxs8" (OuterVolumeSpecName: "kube-api-access-plxs8") pod "c9a28c10-6e42-4e1d-9374-67db311e98ff" (UID: "c9a28c10-6e42-4e1d-9374-67db311e98ff"). InnerVolumeSpecName "kube-api-access-plxs8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:44:43 crc kubenswrapper[4755]: I1210 15:44:43.156753 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9a28c10-6e42-4e1d-9374-67db311e98ff-config-data" (OuterVolumeSpecName: "config-data") pod "c9a28c10-6e42-4e1d-9374-67db311e98ff" (UID: "c9a28c10-6e42-4e1d-9374-67db311e98ff"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:44:43 crc kubenswrapper[4755]: I1210 15:44:43.161752 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plxs8\" (UniqueName: \"kubernetes.io/projected/c9a28c10-6e42-4e1d-9374-67db311e98ff-kube-api-access-plxs8\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:43 crc kubenswrapper[4755]: I1210 15:44:43.161970 4755 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9a28c10-6e42-4e1d-9374-67db311e98ff-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:43 crc kubenswrapper[4755]: I1210 15:44:43.162034 4755 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9a28c10-6e42-4e1d-9374-67db311e98ff-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:43 crc kubenswrapper[4755]: I1210 15:44:43.162098 4755 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c9a28c10-6e42-4e1d-9374-67db311e98ff-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:43 crc kubenswrapper[4755]: I1210 15:44:43.186043 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9a28c10-6e42-4e1d-9374-67db311e98ff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c9a28c10-6e42-4e1d-9374-67db311e98ff" (UID: "c9a28c10-6e42-4e1d-9374-67db311e98ff"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:44:43 crc kubenswrapper[4755]: I1210 15:44:43.264071 4755 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9a28c10-6e42-4e1d-9374-67db311e98ff-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:43 crc kubenswrapper[4755]: I1210 15:44:43.323838 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 10 15:44:43 crc kubenswrapper[4755]: I1210 15:44:43.324205 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 10 15:44:43 crc kubenswrapper[4755]: I1210 15:44:43.435848 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-storageinit-796tf" Dec 10 15:44:43 crc kubenswrapper[4755]: I1210 15:44:43.569584 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50f8ada4-b157-4f73-ae6b-876844b71ced-config-data\") pod \"50f8ada4-b157-4f73-ae6b-876844b71ced\" (UID: \"50f8ada4-b157-4f73-ae6b-876844b71ced\") " Dec 10 15:44:43 crc kubenswrapper[4755]: I1210 15:44:43.569711 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50f8ada4-b157-4f73-ae6b-876844b71ced-combined-ca-bundle\") pod \"50f8ada4-b157-4f73-ae6b-876844b71ced\" (UID: \"50f8ada4-b157-4f73-ae6b-876844b71ced\") " Dec 10 15:44:43 crc kubenswrapper[4755]: I1210 15:44:43.569752 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50f8ada4-b157-4f73-ae6b-876844b71ced-scripts\") pod \"50f8ada4-b157-4f73-ae6b-876844b71ced\" (UID: \"50f8ada4-b157-4f73-ae6b-876844b71ced\") " Dec 10 15:44:43 crc kubenswrapper[4755]: I1210 15:44:43.570008 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdtfc\" (UniqueName: \"kubernetes.io/projected/50f8ada4-b157-4f73-ae6b-876844b71ced-kube-api-access-kdtfc\") pod \"50f8ada4-b157-4f73-ae6b-876844b71ced\" (UID: \"50f8ada4-b157-4f73-ae6b-876844b71ced\") " Dec 10 15:44:43 crc kubenswrapper[4755]: I1210 15:44:43.570050 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/50f8ada4-b157-4f73-ae6b-876844b71ced-certs\") pod \"50f8ada4-b157-4f73-ae6b-876844b71ced\" (UID: \"50f8ada4-b157-4f73-ae6b-876844b71ced\") " Dec 10 15:44:43 crc kubenswrapper[4755]: I1210 15:44:43.586670 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50f8ada4-b157-4f73-ae6b-876844b71ced-kube-api-access-kdtfc" (OuterVolumeSpecName: "kube-api-access-kdtfc") pod "50f8ada4-b157-4f73-ae6b-876844b71ced" (UID: "50f8ada4-b157-4f73-ae6b-876844b71ced"). InnerVolumeSpecName "kube-api-access-kdtfc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:44:43 crc kubenswrapper[4755]: I1210 15:44:43.587207 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50f8ada4-b157-4f73-ae6b-876844b71ced-certs" (OuterVolumeSpecName: "certs") pod "50f8ada4-b157-4f73-ae6b-876844b71ced" (UID: "50f8ada4-b157-4f73-ae6b-876844b71ced"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:44:43 crc kubenswrapper[4755]: I1210 15:44:43.590630 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50f8ada4-b157-4f73-ae6b-876844b71ced-scripts" (OuterVolumeSpecName: "scripts") pod "50f8ada4-b157-4f73-ae6b-876844b71ced" (UID: "50f8ada4-b157-4f73-ae6b-876844b71ced"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:44:43 crc kubenswrapper[4755]: I1210 15:44:43.639620 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50f8ada4-b157-4f73-ae6b-876844b71ced-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "50f8ada4-b157-4f73-ae6b-876844b71ced" (UID: "50f8ada4-b157-4f73-ae6b-876844b71ced"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:44:43 crc kubenswrapper[4755]: I1210 15:44:43.669930 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50f8ada4-b157-4f73-ae6b-876844b71ced-config-data" (OuterVolumeSpecName: "config-data") pod "50f8ada4-b157-4f73-ae6b-876844b71ced" (UID: "50f8ada4-b157-4f73-ae6b-876844b71ced"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:44:43 crc kubenswrapper[4755]: I1210 15:44:43.673448 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kdtfc\" (UniqueName: \"kubernetes.io/projected/50f8ada4-b157-4f73-ae6b-876844b71ced-kube-api-access-kdtfc\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:43 crc kubenswrapper[4755]: I1210 15:44:43.673510 4755 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/50f8ada4-b157-4f73-ae6b-876844b71ced-certs\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:43 crc kubenswrapper[4755]: I1210 15:44:43.673524 4755 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50f8ada4-b157-4f73-ae6b-876844b71ced-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:43 crc kubenswrapper[4755]: I1210 15:44:43.673536 4755 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50f8ada4-b157-4f73-ae6b-876844b71ced-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:43 crc kubenswrapper[4755]: I1210 15:44:43.673547 4755 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50f8ada4-b157-4f73-ae6b-876844b71ced-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:43 crc kubenswrapper[4755]: I1210 15:44:43.751421 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-796tf" event={"ID":"50f8ada4-b157-4f73-ae6b-876844b71ced","Type":"ContainerDied","Data":"44314b74c359571c218e7d3848a745124d6be274ed6547220617742f731e7ef7"} Dec 10 15:44:43 crc kubenswrapper[4755]: I1210 15:44:43.751585 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-storageinit-796tf" Dec 10 15:44:43 crc kubenswrapper[4755]: I1210 15:44:43.751657 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44314b74c359571c218e7d3848a745124d6be274ed6547220617742f731e7ef7" Dec 10 15:44:43 crc kubenswrapper[4755]: I1210 15:44:43.786120 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 10 15:44:43 crc kubenswrapper[4755]: I1210 15:44:43.797763 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c9a28c10-6e42-4e1d-9374-67db311e98ff","Type":"ContainerDied","Data":"cdcb00ac2f2f750a53dbd36a17db15a0afaefd2cc86aa3145e6b06b21ed19390"} Dec 10 15:44:43 crc kubenswrapper[4755]: I1210 15:44:43.797823 4755 scope.go:117] "RemoveContainer" containerID="ae29916b6da2467ac30f6ddbdfabba6845c58c4a172da1a2e6cdb24c3c8c2e5f" Dec 10 15:44:43 crc kubenswrapper[4755]: I1210 15:44:43.853103 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 10 15:44:43 crc kubenswrapper[4755]: I1210 15:44:43.853229 4755 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 10 15:44:43 crc kubenswrapper[4755]: I1210 15:44:43.872598 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 10 15:44:43 crc kubenswrapper[4755]: I1210 15:44:43.876202 4755 scope.go:117] "RemoveContainer" containerID="ae1e42b42c40edab8487c9c507623fe4fef32be4b6b12116b8279a194a6aaf79" Dec 10 15:44:43 crc kubenswrapper[4755]: I1210 15:44:43.910181 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Dec 10 15:44:43 crc kubenswrapper[4755]: I1210 15:44:43.934786 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-75b8ff9576-fcxhh" Dec 10 15:44:43 crc kubenswrapper[4755]: I1210 15:44:43.945912 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-75b8ff9576-fcxhh" Dec 10 15:44:43 crc kubenswrapper[4755]: I1210 15:44:43.953043 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 10 15:44:43 crc kubenswrapper[4755]: E1210 15:44:43.953456 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9a28c10-6e42-4e1d-9374-67db311e98ff" containerName="cinder-api-log" Dec 10 15:44:43 crc kubenswrapper[4755]: I1210 15:44:43.953489 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9a28c10-6e42-4e1d-9374-67db311e98ff" containerName="cinder-api-log" Dec 10 15:44:43 crc kubenswrapper[4755]: E1210 15:44:43.953510 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9a28c10-6e42-4e1d-9374-67db311e98ff" containerName="cinder-api" Dec 10 15:44:43 crc kubenswrapper[4755]: I1210 15:44:43.953517 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9a28c10-6e42-4e1d-9374-67db311e98ff" containerName="cinder-api" Dec 10 15:44:43 crc kubenswrapper[4755]: E1210 15:44:43.953537 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50f8ada4-b157-4f73-ae6b-876844b71ced" containerName="cloudkitty-storageinit" Dec 10 15:44:43 crc kubenswrapper[4755]: I1210 15:44:43.953543 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="50f8ada4-b157-4f73-ae6b-876844b71ced" containerName="cloudkitty-storageinit" Dec 10 15:44:43 crc kubenswrapper[4755]: I1210 15:44:43.953739 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="50f8ada4-b157-4f73-ae6b-876844b71ced" containerName="cloudkitty-storageinit" Dec 10 15:44:43 crc kubenswrapper[4755]: I1210 15:44:43.953762 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9a28c10-6e42-4e1d-9374-67db311e98ff" containerName="cinder-api-log" Dec 10 15:44:43 crc kubenswrapper[4755]: I1210 15:44:43.953771 4755 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="c9a28c10-6e42-4e1d-9374-67db311e98ff" containerName="cinder-api" Dec 10 15:44:43 crc kubenswrapper[4755]: I1210 15:44:43.955682 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 10 15:44:43 crc kubenswrapper[4755]: I1210 15:44:43.970681 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 10 15:44:43 crc kubenswrapper[4755]: I1210 15:44:43.970883 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Dec 10 15:44:43 crc kubenswrapper[4755]: I1210 15:44:43.970983 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Dec 10 15:44:43 crc kubenswrapper[4755]: I1210 15:44:43.976931 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.083894 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/900a05ac-78b6-44d0-9499-2dbfb52fcdfc-config-data-custom\") pod \"cinder-api-0\" (UID: \"900a05ac-78b6-44d0-9499-2dbfb52fcdfc\") " pod="openstack/cinder-api-0" Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.083949 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/900a05ac-78b6-44d0-9499-2dbfb52fcdfc-etc-machine-id\") pod \"cinder-api-0\" (UID: \"900a05ac-78b6-44d0-9499-2dbfb52fcdfc\") " pod="openstack/cinder-api-0" Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.083972 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/900a05ac-78b6-44d0-9499-2dbfb52fcdfc-config-data\") pod \"cinder-api-0\" (UID: \"900a05ac-78b6-44d0-9499-2dbfb52fcdfc\") " pod="openstack/cinder-api-0" Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.084004 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/900a05ac-78b6-44d0-9499-2dbfb52fcdfc-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"900a05ac-78b6-44d0-9499-2dbfb52fcdfc\") " pod="openstack/cinder-api-0" Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.084344 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/900a05ac-78b6-44d0-9499-2dbfb52fcdfc-public-tls-certs\") pod \"cinder-api-0\" (UID: \"900a05ac-78b6-44d0-9499-2dbfb52fcdfc\") " pod="openstack/cinder-api-0" Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.084412 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/900a05ac-78b6-44d0-9499-2dbfb52fcdfc-logs\") pod \"cinder-api-0\" (UID: \"900a05ac-78b6-44d0-9499-2dbfb52fcdfc\") " pod="openstack/cinder-api-0" Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.084580 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/900a05ac-78b6-44d0-9499-2dbfb52fcdfc-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"900a05ac-78b6-44d0-9499-2dbfb52fcdfc\") " pod="openstack/cinder-api-0" Dec 10 15:44:44 crc 
kubenswrapper[4755]: I1210 15:44:44.084615 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwp9n\" (UniqueName: \"kubernetes.io/projected/900a05ac-78b6-44d0-9499-2dbfb52fcdfc-kube-api-access-rwp9n\") pod \"cinder-api-0\" (UID: \"900a05ac-78b6-44d0-9499-2dbfb52fcdfc\") " pod="openstack/cinder-api-0" Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.084663 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/900a05ac-78b6-44d0-9499-2dbfb52fcdfc-scripts\") pod \"cinder-api-0\" (UID: \"900a05ac-78b6-44d0-9499-2dbfb52fcdfc\") " pod="openstack/cinder-api-0" Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.166072 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-proc-0"] Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.182340 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0" Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.190757 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/900a05ac-78b6-44d0-9499-2dbfb52fcdfc-config-data-custom\") pod \"cinder-api-0\" (UID: \"900a05ac-78b6-44d0-9499-2dbfb52fcdfc\") " pod="openstack/cinder-api-0" Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.190833 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/900a05ac-78b6-44d0-9499-2dbfb52fcdfc-etc-machine-id\") pod \"cinder-api-0\" (UID: \"900a05ac-78b6-44d0-9499-2dbfb52fcdfc\") " pod="openstack/cinder-api-0" Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.190873 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/900a05ac-78b6-44d0-9499-2dbfb52fcdfc-config-data\") pod \"cinder-api-0\" (UID: \"900a05ac-78b6-44d0-9499-2dbfb52fcdfc\") " pod="openstack/cinder-api-0" Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.190919 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/900a05ac-78b6-44d0-9499-2dbfb52fcdfc-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"900a05ac-78b6-44d0-9499-2dbfb52fcdfc\") " pod="openstack/cinder-api-0" Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.192991 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/900a05ac-78b6-44d0-9499-2dbfb52fcdfc-etc-machine-id\") pod \"cinder-api-0\" (UID: \"900a05ac-78b6-44d0-9499-2dbfb52fcdfc\") " pod="openstack/cinder-api-0" Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.198996 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/900a05ac-78b6-44d0-9499-2dbfb52fcdfc-public-tls-certs\") pod \"cinder-api-0\" (UID: \"900a05ac-78b6-44d0-9499-2dbfb52fcdfc\") " pod="openstack/cinder-api-0" Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.199073 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/900a05ac-78b6-44d0-9499-2dbfb52fcdfc-logs\") pod \"cinder-api-0\" (UID: \"900a05ac-78b6-44d0-9499-2dbfb52fcdfc\") " pod="openstack/cinder-api-0" Dec 10 15:44:44 crc 
kubenswrapper[4755]: I1210 15:44:44.199188 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/900a05ac-78b6-44d0-9499-2dbfb52fcdfc-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"900a05ac-78b6-44d0-9499-2dbfb52fcdfc\") " pod="openstack/cinder-api-0" Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.199212 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwp9n\" (UniqueName: \"kubernetes.io/projected/900a05ac-78b6-44d0-9499-2dbfb52fcdfc-kube-api-access-rwp9n\") pod \"cinder-api-0\" (UID: \"900a05ac-78b6-44d0-9499-2dbfb52fcdfc\") " pod="openstack/cinder-api-0" Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.199252 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/900a05ac-78b6-44d0-9499-2dbfb52fcdfc-scripts\") pod \"cinder-api-0\" (UID: \"900a05ac-78b6-44d0-9499-2dbfb52fcdfc\") " pod="openstack/cinder-api-0" Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.204545 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/900a05ac-78b6-44d0-9499-2dbfb52fcdfc-logs\") pod \"cinder-api-0\" (UID: \"900a05ac-78b6-44d0-9499-2dbfb52fcdfc\") " pod="openstack/cinder-api-0" Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.209678 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-scripts" Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.209991 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-client-internal" Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.210243 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-cloudkitty-dockercfg-6f74p" Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.210080 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-config-data" Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.210128 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-proc-config-data" Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.217617 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/900a05ac-78b6-44d0-9499-2dbfb52fcdfc-config-data-custom\") pod \"cinder-api-0\" (UID: \"900a05ac-78b6-44d0-9499-2dbfb52fcdfc\") " pod="openstack/cinder-api-0" Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.222018 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/900a05ac-78b6-44d0-9499-2dbfb52fcdfc-scripts\") pod \"cinder-api-0\" (UID: \"900a05ac-78b6-44d0-9499-2dbfb52fcdfc\") " pod="openstack/cinder-api-0" Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.223063 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/900a05ac-78b6-44d0-9499-2dbfb52fcdfc-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"900a05ac-78b6-44d0-9499-2dbfb52fcdfc\") " pod="openstack/cinder-api-0" Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.223956 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/900a05ac-78b6-44d0-9499-2dbfb52fcdfc-public-tls-certs\") pod \"cinder-api-0\" 
(UID: \"900a05ac-78b6-44d0-9499-2dbfb52fcdfc\") " pod="openstack/cinder-api-0" Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.224782 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"] Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.228388 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/900a05ac-78b6-44d0-9499-2dbfb52fcdfc-config-data\") pod \"cinder-api-0\" (UID: \"900a05ac-78b6-44d0-9499-2dbfb52fcdfc\") " pod="openstack/cinder-api-0" Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.233843 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/900a05ac-78b6-44d0-9499-2dbfb52fcdfc-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"900a05ac-78b6-44d0-9499-2dbfb52fcdfc\") " pod="openstack/cinder-api-0" Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.250270 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwp9n\" (UniqueName: \"kubernetes.io/projected/900a05ac-78b6-44d0-9499-2dbfb52fcdfc-kube-api-access-rwp9n\") pod \"cinder-api-0\" (UID: \"900a05ac-78b6-44d0-9499-2dbfb52fcdfc\") " pod="openstack/cinder-api-0" Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.294205 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.295537 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-zmh5b"] Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.298297 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6578955fd5-zmh5b" podUID="a49be472-21ec-4f59-811e-9e1196ebaa14" containerName="dnsmasq-dns" containerID="cri-o://2780f449c877d41f07a88f85b00e84d23ba1e0e2a9549e7d1e10297cf18f367c" gracePeriod=10 Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.300728 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9ts5\" (UniqueName: \"kubernetes.io/projected/d5843012-5395-4dac-9506-b2e080cdc229-kube-api-access-k9ts5\") pod \"cloudkitty-proc-0\" (UID: \"d5843012-5395-4dac-9506-b2e080cdc229\") " pod="openstack/cloudkitty-proc-0" Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.300820 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d5843012-5395-4dac-9506-b2e080cdc229-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"d5843012-5395-4dac-9506-b2e080cdc229\") " pod="openstack/cloudkitty-proc-0" Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.300905 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/d5843012-5395-4dac-9506-b2e080cdc229-certs\") pod \"cloudkitty-proc-0\" (UID: \"d5843012-5395-4dac-9506-b2e080cdc229\") " pod="openstack/cloudkitty-proc-0" Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.301049 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5843012-5395-4dac-9506-b2e080cdc229-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"d5843012-5395-4dac-9506-b2e080cdc229\") " pod="openstack/cloudkitty-proc-0" Dec 10 15:44:44 crc 
kubenswrapper[4755]: I1210 15:44:44.301076 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5843012-5395-4dac-9506-b2e080cdc229-config-data\") pod \"cloudkitty-proc-0\" (UID: \"d5843012-5395-4dac-9506-b2e080cdc229\") " pod="openstack/cloudkitty-proc-0" Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.301098 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5843012-5395-4dac-9506-b2e080cdc229-scripts\") pod \"cloudkitty-proc-0\" (UID: \"d5843012-5395-4dac-9506-b2e080cdc229\") " pod="openstack/cloudkitty-proc-0" Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.305458 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6578955fd5-zmh5b" Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.359343 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58bd69657f-n5hkt"] Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.372960 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58bd69657f-n5hkt" Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.402885 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9ts5\" (UniqueName: \"kubernetes.io/projected/d5843012-5395-4dac-9506-b2e080cdc229-kube-api-access-k9ts5\") pod \"cloudkitty-proc-0\" (UID: \"d5843012-5395-4dac-9506-b2e080cdc229\") " pod="openstack/cloudkitty-proc-0" Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.403312 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d5843012-5395-4dac-9506-b2e080cdc229-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"d5843012-5395-4dac-9506-b2e080cdc229\") " pod="openstack/cloudkitty-proc-0" Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.403422 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/d5843012-5395-4dac-9506-b2e080cdc229-certs\") pod \"cloudkitty-proc-0\" (UID: \"d5843012-5395-4dac-9506-b2e080cdc229\") " pod="openstack/cloudkitty-proc-0" Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.403607 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5843012-5395-4dac-9506-b2e080cdc229-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"d5843012-5395-4dac-9506-b2e080cdc229\") " pod="openstack/cloudkitty-proc-0" Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.403646 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5843012-5395-4dac-9506-b2e080cdc229-config-data\") pod \"cloudkitty-proc-0\" (UID: \"d5843012-5395-4dac-9506-b2e080cdc229\") " pod="openstack/cloudkitty-proc-0" Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.403708 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5843012-5395-4dac-9506-b2e080cdc229-scripts\") pod \"cloudkitty-proc-0\" (UID: \"d5843012-5395-4dac-9506-b2e080cdc229\") " pod="openstack/cloudkitty-proc-0" Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.448509 4755 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5843012-5395-4dac-9506-b2e080cdc229-scripts\") pod \"cloudkitty-proc-0\" (UID: \"d5843012-5395-4dac-9506-b2e080cdc229\") " pod="openstack/cloudkitty-proc-0" Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.450122 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d5843012-5395-4dac-9506-b2e080cdc229-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"d5843012-5395-4dac-9506-b2e080cdc229\") " pod="openstack/cloudkitty-proc-0" Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.454066 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/d5843012-5395-4dac-9506-b2e080cdc229-certs\") pod \"cloudkitty-proc-0\" (UID: \"d5843012-5395-4dac-9506-b2e080cdc229\") " pod="openstack/cloudkitty-proc-0" Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.454292 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58bd69657f-n5hkt"] Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.454943 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5843012-5395-4dac-9506-b2e080cdc229-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"d5843012-5395-4dac-9506-b2e080cdc229\") " pod="openstack/cloudkitty-proc-0" Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.472147 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9ts5\" (UniqueName: \"kubernetes.io/projected/d5843012-5395-4dac-9506-b2e080cdc229-kube-api-access-k9ts5\") pod \"cloudkitty-proc-0\" (UID: \"d5843012-5395-4dac-9506-b2e080cdc229\") " pod="openstack/cloudkitty-proc-0" Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.499892 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5843012-5395-4dac-9506-b2e080cdc229-config-data\") pod \"cloudkitty-proc-0\" (UID: \"d5843012-5395-4dac-9506-b2e080cdc229\") " pod="openstack/cloudkitty-proc-0" Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.505549 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/45648d9c-bd22-443a-bf3f-8c08998388ec-ovsdbserver-sb\") pod \"dnsmasq-dns-58bd69657f-n5hkt\" (UID: \"45648d9c-bd22-443a-bf3f-8c08998388ec\") " pod="openstack/dnsmasq-dns-58bd69657f-n5hkt" Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.505640 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45648d9c-bd22-443a-bf3f-8c08998388ec-config\") pod \"dnsmasq-dns-58bd69657f-n5hkt\" (UID: \"45648d9c-bd22-443a-bf3f-8c08998388ec\") " pod="openstack/dnsmasq-dns-58bd69657f-n5hkt" Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.505705 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/45648d9c-bd22-443a-bf3f-8c08998388ec-dns-swift-storage-0\") pod \"dnsmasq-dns-58bd69657f-n5hkt\" (UID: \"45648d9c-bd22-443a-bf3f-8c08998388ec\") " pod="openstack/dnsmasq-dns-58bd69657f-n5hkt" Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.505767 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45648d9c-bd22-443a-bf3f-8c08998388ec-dns-svc\") pod \"dnsmasq-dns-58bd69657f-n5hkt\" (UID: \"45648d9c-bd22-443a-bf3f-8c08998388ec\") " pod="openstack/dnsmasq-dns-58bd69657f-n5hkt" Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.505784 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/45648d9c-bd22-443a-bf3f-8c08998388ec-ovsdbserver-nb\") pod \"dnsmasq-dns-58bd69657f-n5hkt\" (UID: \"45648d9c-bd22-443a-bf3f-8c08998388ec\") " pod="openstack/dnsmasq-dns-58bd69657f-n5hkt" Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.505820 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmr6g\" (UniqueName: \"kubernetes.io/projected/45648d9c-bd22-443a-bf3f-8c08998388ec-kube-api-access-zmr6g\") pod \"dnsmasq-dns-58bd69657f-n5hkt\" (UID: \"45648d9c-bd22-443a-bf3f-8c08998388ec\") " pod="openstack/dnsmasq-dns-58bd69657f-n5hkt" Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.612737 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/45648d9c-bd22-443a-bf3f-8c08998388ec-dns-swift-storage-0\") pod \"dnsmasq-dns-58bd69657f-n5hkt\" (UID: \"45648d9c-bd22-443a-bf3f-8c08998388ec\") " pod="openstack/dnsmasq-dns-58bd69657f-n5hkt" Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.612827 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45648d9c-bd22-443a-bf3f-8c08998388ec-dns-svc\") pod \"dnsmasq-dns-58bd69657f-n5hkt\" (UID: \"45648d9c-bd22-443a-bf3f-8c08998388ec\") " pod="openstack/dnsmasq-dns-58bd69657f-n5hkt" Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.612843 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/45648d9c-bd22-443a-bf3f-8c08998388ec-ovsdbserver-nb\") pod \"dnsmasq-dns-58bd69657f-n5hkt\" (UID: \"45648d9c-bd22-443a-bf3f-8c08998388ec\") " pod="openstack/dnsmasq-dns-58bd69657f-n5hkt" Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.612879 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmr6g\" (UniqueName: \"kubernetes.io/projected/45648d9c-bd22-443a-bf3f-8c08998388ec-kube-api-access-zmr6g\") pod \"dnsmasq-dns-58bd69657f-n5hkt\" (UID: \"45648d9c-bd22-443a-bf3f-8c08998388ec\") " pod="openstack/dnsmasq-dns-58bd69657f-n5hkt" Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.612912 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/45648d9c-bd22-443a-bf3f-8c08998388ec-ovsdbserver-sb\") pod \"dnsmasq-dns-58bd69657f-n5hkt\" (UID: \"45648d9c-bd22-443a-bf3f-8c08998388ec\") " pod="openstack/dnsmasq-dns-58bd69657f-n5hkt" Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.612979 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45648d9c-bd22-443a-bf3f-8c08998388ec-config\") pod \"dnsmasq-dns-58bd69657f-n5hkt\" (UID: \"45648d9c-bd22-443a-bf3f-8c08998388ec\") " pod="openstack/dnsmasq-dns-58bd69657f-n5hkt" Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.614513 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/45648d9c-bd22-443a-bf3f-8c08998388ec-dns-swift-storage-0\") pod \"dnsmasq-dns-58bd69657f-n5hkt\" (UID: \"45648d9c-bd22-443a-bf3f-8c08998388ec\") " pod="openstack/dnsmasq-dns-58bd69657f-n5hkt" Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.615284 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/45648d9c-bd22-443a-bf3f-8c08998388ec-ovsdbserver-sb\") pod \"dnsmasq-dns-58bd69657f-n5hkt\" (UID: \"45648d9c-bd22-443a-bf3f-8c08998388ec\") " pod="openstack/dnsmasq-dns-58bd69657f-n5hkt" Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.615453 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45648d9c-bd22-443a-bf3f-8c08998388ec-dns-svc\") pod \"dnsmasq-dns-58bd69657f-n5hkt\" (UID: \"45648d9c-bd22-443a-bf3f-8c08998388ec\") " pod="openstack/dnsmasq-dns-58bd69657f-n5hkt" Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.618015 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45648d9c-bd22-443a-bf3f-8c08998388ec-config\") pod \"dnsmasq-dns-58bd69657f-n5hkt\" (UID: \"45648d9c-bd22-443a-bf3f-8c08998388ec\") " pod="openstack/dnsmasq-dns-58bd69657f-n5hkt" Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.626027 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/45648d9c-bd22-443a-bf3f-8c08998388ec-ovsdbserver-nb\") pod \"dnsmasq-dns-58bd69657f-n5hkt\" (UID: \"45648d9c-bd22-443a-bf3f-8c08998388ec\") " pod="openstack/dnsmasq-dns-58bd69657f-n5hkt" Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.667332 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0" Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.670233 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmr6g\" (UniqueName: \"kubernetes.io/projected/45648d9c-bd22-443a-bf3f-8c08998388ec-kube-api-access-zmr6g\") pod \"dnsmasq-dns-58bd69657f-n5hkt\" (UID: \"45648d9c-bd22-443a-bf3f-8c08998388ec\") " pod="openstack/dnsmasq-dns-58bd69657f-n5hkt" Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.674217 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-api-0"] Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.742971 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0" Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.766754 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-api-config-data" Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.830436 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58bd69657f-n5hkt" Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.871572 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"] Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.921622 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5676daa1-d125-4278-9766-c9fa314e5d77-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"5676daa1-d125-4278-9766-c9fa314e5d77\") " pod="openstack/cloudkitty-api-0" Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.921960 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/5676daa1-d125-4278-9766-c9fa314e5d77-certs\") pod \"cloudkitty-api-0\" (UID: \"5676daa1-d125-4278-9766-c9fa314e5d77\") " pod="openstack/cloudkitty-api-0" Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.922121 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5676daa1-d125-4278-9766-c9fa314e5d77-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"5676daa1-d125-4278-9766-c9fa314e5d77\") " pod="openstack/cloudkitty-api-0" Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.922423 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5676daa1-d125-4278-9766-c9fa314e5d77-scripts\") pod \"cloudkitty-api-0\" (UID: \"5676daa1-d125-4278-9766-c9fa314e5d77\") " pod="openstack/cloudkitty-api-0" Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.922548 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5676daa1-d125-4278-9766-c9fa314e5d77-config-data\") pod \"cloudkitty-api-0\" (UID: \"5676daa1-d125-4278-9766-c9fa314e5d77\") " pod="openstack/cloudkitty-api-0" Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.927091 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5676daa1-d125-4278-9766-c9fa314e5d77-logs\") pod \"cloudkitty-api-0\" (UID: \"5676daa1-d125-4278-9766-c9fa314e5d77\") " pod="openstack/cloudkitty-api-0" Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.927535 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqjt4\" (UniqueName: \"kubernetes.io/projected/5676daa1-d125-4278-9766-c9fa314e5d77-kube-api-access-vqjt4\") pod \"cloudkitty-api-0\" (UID: \"5676daa1-d125-4278-9766-c9fa314e5d77\") " pod="openstack/cloudkitty-api-0" Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.931325 4755 generic.go:334] "Generic (PLEG): container finished" podID="7d152fd0-fbbf-4c7b-874a-169860ee9075" containerID="61de7b2c15e59d0bd635c02e1e4985b8bfd2694a7ded31026d0d4a36cc326f17" exitCode=0 Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.931379 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6d5449c6d4-5rw2f" event={"ID":"7d152fd0-fbbf-4c7b-874a-169860ee9075","Type":"ContainerDied","Data":"61de7b2c15e59d0bd635c02e1e4985b8bfd2694a7ded31026d0d4a36cc326f17"} Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.935771 4755 generic.go:334] "Generic (PLEG): container finished" 
podID="a49be472-21ec-4f59-811e-9e1196ebaa14" containerID="2780f449c877d41f07a88f85b00e84d23ba1e0e2a9549e7d1e10297cf18f367c" exitCode=0 Dec 10 15:44:44 crc kubenswrapper[4755]: I1210 15:44:44.935826 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-zmh5b" event={"ID":"a49be472-21ec-4f59-811e-9e1196ebaa14","Type":"ContainerDied","Data":"2780f449c877d41f07a88f85b00e84d23ba1e0e2a9549e7d1e10297cf18f367c"} Dec 10 15:44:44 crc kubenswrapper[4755]: E1210 15:44:44.996237 4755 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda49be472_21ec_4f59_811e_9e1196ebaa14.slice/crio-2780f449c877d41f07a88f85b00e84d23ba1e0e2a9549e7d1e10297cf18f367c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d152fd0_fbbf_4c7b_874a_169860ee9075.slice/crio-61de7b2c15e59d0bd635c02e1e4985b8bfd2694a7ded31026d0d4a36cc326f17.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda49be472_21ec_4f59_811e_9e1196ebaa14.slice/crio-conmon-2780f449c877d41f07a88f85b00e84d23ba1e0e2a9549e7d1e10297cf18f367c.scope\": RecentStats: unable to find data in memory cache]" Dec 10 15:44:45 crc kubenswrapper[4755]: I1210 15:44:45.034893 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5676daa1-d125-4278-9766-c9fa314e5d77-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"5676daa1-d125-4278-9766-c9fa314e5d77\") " pod="openstack/cloudkitty-api-0" Dec 10 15:44:45 crc kubenswrapper[4755]: I1210 15:44:45.035196 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/5676daa1-d125-4278-9766-c9fa314e5d77-certs\") pod \"cloudkitty-api-0\" (UID: \"5676daa1-d125-4278-9766-c9fa314e5d77\") " pod="openstack/cloudkitty-api-0" Dec 10 15:44:45 crc kubenswrapper[4755]: I1210 15:44:45.035258 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5676daa1-d125-4278-9766-c9fa314e5d77-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"5676daa1-d125-4278-9766-c9fa314e5d77\") " pod="openstack/cloudkitty-api-0" Dec 10 15:44:45 crc kubenswrapper[4755]: I1210 15:44:45.035393 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5676daa1-d125-4278-9766-c9fa314e5d77-scripts\") pod \"cloudkitty-api-0\" (UID: \"5676daa1-d125-4278-9766-c9fa314e5d77\") " pod="openstack/cloudkitty-api-0" Dec 10 15:44:45 crc kubenswrapper[4755]: I1210 15:44:45.035413 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5676daa1-d125-4278-9766-c9fa314e5d77-config-data\") pod \"cloudkitty-api-0\" (UID: \"5676daa1-d125-4278-9766-c9fa314e5d77\") " pod="openstack/cloudkitty-api-0" Dec 10 15:44:45 crc kubenswrapper[4755]: I1210 15:44:45.035441 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5676daa1-d125-4278-9766-c9fa314e5d77-logs\") pod \"cloudkitty-api-0\" (UID: \"5676daa1-d125-4278-9766-c9fa314e5d77\") " pod="openstack/cloudkitty-api-0" Dec 10 15:44:45 crc kubenswrapper[4755]: I1210 15:44:45.035504 4755 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqjt4\" (UniqueName: \"kubernetes.io/projected/5676daa1-d125-4278-9766-c9fa314e5d77-kube-api-access-vqjt4\") pod \"cloudkitty-api-0\" (UID: \"5676daa1-d125-4278-9766-c9fa314e5d77\") " pod="openstack/cloudkitty-api-0" Dec 10 15:44:45 crc kubenswrapper[4755]: I1210 15:44:45.043528 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5676daa1-d125-4278-9766-c9fa314e5d77-logs\") pod \"cloudkitty-api-0\" (UID: \"5676daa1-d125-4278-9766-c9fa314e5d77\") " pod="openstack/cloudkitty-api-0" Dec 10 15:44:45 crc kubenswrapper[4755]: I1210 15:44:45.050674 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5676daa1-d125-4278-9766-c9fa314e5d77-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"5676daa1-d125-4278-9766-c9fa314e5d77\") " pod="openstack/cloudkitty-api-0" Dec 10 15:44:45 crc kubenswrapper[4755]: I1210 15:44:45.069936 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5676daa1-d125-4278-9766-c9fa314e5d77-scripts\") pod \"cloudkitty-api-0\" (UID: \"5676daa1-d125-4278-9766-c9fa314e5d77\") " pod="openstack/cloudkitty-api-0" Dec 10 15:44:45 crc kubenswrapper[4755]: I1210 15:44:45.071437 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5676daa1-d125-4278-9766-c9fa314e5d77-config-data\") pod \"cloudkitty-api-0\" (UID: \"5676daa1-d125-4278-9766-c9fa314e5d77\") " pod="openstack/cloudkitty-api-0" Dec 10 15:44:45 crc kubenswrapper[4755]: I1210 15:44:45.077858 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/5676daa1-d125-4278-9766-c9fa314e5d77-certs\") pod \"cloudkitty-api-0\" (UID: \"5676daa1-d125-4278-9766-c9fa314e5d77\") " pod="openstack/cloudkitty-api-0" Dec 10 15:44:45 crc kubenswrapper[4755]: I1210 15:44:45.078501 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5676daa1-d125-4278-9766-c9fa314e5d77-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"5676daa1-d125-4278-9766-c9fa314e5d77\") " pod="openstack/cloudkitty-api-0" Dec 10 15:44:45 crc kubenswrapper[4755]: I1210 15:44:45.080090 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqjt4\" (UniqueName: \"kubernetes.io/projected/5676daa1-d125-4278-9766-c9fa314e5d77-kube-api-access-vqjt4\") pod \"cloudkitty-api-0\" (UID: \"5676daa1-d125-4278-9766-c9fa314e5d77\") " pod="openstack/cloudkitty-api-0" Dec 10 15:44:45 crc kubenswrapper[4755]: I1210 15:44:45.187835 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0" Dec 10 15:44:45 crc kubenswrapper[4755]: I1210 15:44:45.532559 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-zmh5b" Dec 10 15:44:45 crc kubenswrapper[4755]: I1210 15:44:45.671328 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a49be472-21ec-4f59-811e-9e1196ebaa14-ovsdbserver-nb\") pod \"a49be472-21ec-4f59-811e-9e1196ebaa14\" (UID: \"a49be472-21ec-4f59-811e-9e1196ebaa14\") " Dec 10 15:44:45 crc kubenswrapper[4755]: I1210 15:44:45.672628 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a49be472-21ec-4f59-811e-9e1196ebaa14-dns-swift-storage-0\") pod \"a49be472-21ec-4f59-811e-9e1196ebaa14\" (UID: \"a49be472-21ec-4f59-811e-9e1196ebaa14\") " Dec 10 15:44:45 crc kubenswrapper[4755]: I1210 15:44:45.672756 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a49be472-21ec-4f59-811e-9e1196ebaa14-config\") pod \"a49be472-21ec-4f59-811e-9e1196ebaa14\" (UID: \"a49be472-21ec-4f59-811e-9e1196ebaa14\") " Dec 10 15:44:45 crc kubenswrapper[4755]: I1210 15:44:45.672799 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a49be472-21ec-4f59-811e-9e1196ebaa14-dns-svc\") pod \"a49be472-21ec-4f59-811e-9e1196ebaa14\" (UID: \"a49be472-21ec-4f59-811e-9e1196ebaa14\") " Dec 10 15:44:45 crc kubenswrapper[4755]: I1210 15:44:45.672941 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a49be472-21ec-4f59-811e-9e1196ebaa14-ovsdbserver-sb\") pod \"a49be472-21ec-4f59-811e-9e1196ebaa14\" (UID: \"a49be472-21ec-4f59-811e-9e1196ebaa14\") " Dec 10 15:44:45 crc kubenswrapper[4755]: I1210 15:44:45.673033 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dnjrd\" (UniqueName: \"kubernetes.io/projected/a49be472-21ec-4f59-811e-9e1196ebaa14-kube-api-access-dnjrd\") pod \"a49be472-21ec-4f59-811e-9e1196ebaa14\" (UID: \"a49be472-21ec-4f59-811e-9e1196ebaa14\") " Dec 10 15:44:45 crc kubenswrapper[4755]: I1210 15:44:45.682106 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a49be472-21ec-4f59-811e-9e1196ebaa14-kube-api-access-dnjrd" (OuterVolumeSpecName: "kube-api-access-dnjrd") pod "a49be472-21ec-4f59-811e-9e1196ebaa14" (UID: "a49be472-21ec-4f59-811e-9e1196ebaa14"). InnerVolumeSpecName "kube-api-access-dnjrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:44:45 crc kubenswrapper[4755]: I1210 15:44:45.709374 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6d5449c6d4-5rw2f" Dec 10 15:44:45 crc kubenswrapper[4755]: I1210 15:44:45.776015 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dnjrd\" (UniqueName: \"kubernetes.io/projected/a49be472-21ec-4f59-811e-9e1196ebaa14-kube-api-access-dnjrd\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:45 crc kubenswrapper[4755]: I1210 15:44:45.793544 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a49be472-21ec-4f59-811e-9e1196ebaa14-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a49be472-21ec-4f59-811e-9e1196ebaa14" (UID: "a49be472-21ec-4f59-811e-9e1196ebaa14"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:44:45 crc kubenswrapper[4755]: I1210 15:44:45.815000 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a49be472-21ec-4f59-811e-9e1196ebaa14-config" (OuterVolumeSpecName: "config") pod "a49be472-21ec-4f59-811e-9e1196ebaa14" (UID: "a49be472-21ec-4f59-811e-9e1196ebaa14"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:44:45 crc kubenswrapper[4755]: I1210 15:44:45.828440 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9a28c10-6e42-4e1d-9374-67db311e98ff" path="/var/lib/kubelet/pods/c9a28c10-6e42-4e1d-9374-67db311e98ff/volumes" Dec 10 15:44:45 crc kubenswrapper[4755]: W1210 15:44:45.842650 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod900a05ac_78b6_44d0_9499_2dbfb52fcdfc.slice/crio-039ce724f7f6d9fb215652a576654875e4ef688647572ecb5941dbab188636c0 WatchSource:0}: Error finding container 039ce724f7f6d9fb215652a576654875e4ef688647572ecb5941dbab188636c0: Status 404 returned error can't find the container with id 039ce724f7f6d9fb215652a576654875e4ef688647572ecb5941dbab188636c0 Dec 10 15:44:45 crc kubenswrapper[4755]: I1210 15:44:45.879754 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a49be472-21ec-4f59-811e-9e1196ebaa14-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a49be472-21ec-4f59-811e-9e1196ebaa14" (UID: "a49be472-21ec-4f59-811e-9e1196ebaa14"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:44:45 crc kubenswrapper[4755]: I1210 15:44:45.882428 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 10 15:44:45 crc kubenswrapper[4755]: I1210 15:44:45.884124 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 10 15:44:45 crc kubenswrapper[4755]: I1210 15:44:45.894549 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a49be472-21ec-4f59-811e-9e1196ebaa14-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a49be472-21ec-4f59-811e-9e1196ebaa14" (UID: "a49be472-21ec-4f59-811e-9e1196ebaa14"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:44:45 crc kubenswrapper[4755]: I1210 15:44:45.894715 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7d152fd0-fbbf-4c7b-874a-169860ee9075-config\") pod \"7d152fd0-fbbf-4c7b-874a-169860ee9075\" (UID: \"7d152fd0-fbbf-4c7b-874a-169860ee9075\") " Dec 10 15:44:45 crc kubenswrapper[4755]: I1210 15:44:45.894837 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d152fd0-fbbf-4c7b-874a-169860ee9075-combined-ca-bundle\") pod \"7d152fd0-fbbf-4c7b-874a-169860ee9075\" (UID: \"7d152fd0-fbbf-4c7b-874a-169860ee9075\") " Dec 10 15:44:45 crc kubenswrapper[4755]: I1210 15:44:45.894907 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d152fd0-fbbf-4c7b-874a-169860ee9075-ovndb-tls-certs\") pod \"7d152fd0-fbbf-4c7b-874a-169860ee9075\" (UID: \"7d152fd0-fbbf-4c7b-874a-169860ee9075\") " Dec 10 15:44:45 crc kubenswrapper[4755]: I1210 15:44:45.895044 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7d152fd0-fbbf-4c7b-874a-169860ee9075-httpd-config\") pod \"7d152fd0-fbbf-4c7b-874a-169860ee9075\" (UID: \"7d152fd0-fbbf-4c7b-874a-169860ee9075\") " Dec 10 15:44:45 crc kubenswrapper[4755]: I1210 15:44:45.895137 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4r8t\" (UniqueName: \"kubernetes.io/projected/7d152fd0-fbbf-4c7b-874a-169860ee9075-kube-api-access-z4r8t\") pod \"7d152fd0-fbbf-4c7b-874a-169860ee9075\" (UID: \"7d152fd0-fbbf-4c7b-874a-169860ee9075\") " Dec 10 15:44:45 crc kubenswrapper[4755]: I1210 15:44:45.895961 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a49be472-21ec-4f59-811e-9e1196ebaa14-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a49be472-21ec-4f59-811e-9e1196ebaa14" (UID: "a49be472-21ec-4f59-811e-9e1196ebaa14"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:44:45 crc kubenswrapper[4755]: I1210 15:44:45.896650 4755 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a49be472-21ec-4f59-811e-9e1196ebaa14-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:45 crc kubenswrapper[4755]: I1210 15:44:45.896690 4755 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a49be472-21ec-4f59-811e-9e1196ebaa14-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:45 crc kubenswrapper[4755]: I1210 15:44:45.896701 4755 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a49be472-21ec-4f59-811e-9e1196ebaa14-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:45 crc kubenswrapper[4755]: I1210 15:44:45.896712 4755 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a49be472-21ec-4f59-811e-9e1196ebaa14-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:45 crc kubenswrapper[4755]: I1210 15:44:45.896722 4755 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a49be472-21ec-4f59-811e-9e1196ebaa14-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:45 crc kubenswrapper[4755]: I1210 15:44:45.918705 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d152fd0-fbbf-4c7b-874a-169860ee9075-kube-api-access-z4r8t" (OuterVolumeSpecName: "kube-api-access-z4r8t") pod "7d152fd0-fbbf-4c7b-874a-169860ee9075" (UID: "7d152fd0-fbbf-4c7b-874a-169860ee9075"). InnerVolumeSpecName "kube-api-access-z4r8t". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:44:45 crc kubenswrapper[4755]: I1210 15:44:45.922654 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d152fd0-fbbf-4c7b-874a-169860ee9075-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "7d152fd0-fbbf-4c7b-874a-169860ee9075" (UID: "7d152fd0-fbbf-4c7b-874a-169860ee9075"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:44:45 crc kubenswrapper[4755]: I1210 15:44:45.939231 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-764c845c6f-lbzq2" Dec 10 15:44:45 crc kubenswrapper[4755]: I1210 15:44:45.985648 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d152fd0-fbbf-4c7b-874a-169860ee9075-config" (OuterVolumeSpecName: "config") pod "7d152fd0-fbbf-4c7b-874a-169860ee9075" (UID: "7d152fd0-fbbf-4c7b-874a-169860ee9075"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:44:45 crc kubenswrapper[4755]: I1210 15:44:45.991572 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d152fd0-fbbf-4c7b-874a-169860ee9075-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7d152fd0-fbbf-4c7b-874a-169860ee9075" (UID: "7d152fd0-fbbf-4c7b-874a-169860ee9075"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:44:45 crc kubenswrapper[4755]: I1210 15:44:45.998353 4755 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7d152fd0-fbbf-4c7b-874a-169860ee9075-httpd-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:45 crc kubenswrapper[4755]: I1210 15:44:45.998384 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4r8t\" (UniqueName: \"kubernetes.io/projected/7d152fd0-fbbf-4c7b-874a-169860ee9075-kube-api-access-z4r8t\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:45 crc kubenswrapper[4755]: I1210 15:44:45.998395 4755 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/7d152fd0-fbbf-4c7b-874a-169860ee9075-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:45 crc kubenswrapper[4755]: I1210 15:44:45.998406 4755 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d152fd0-fbbf-4c7b-874a-169860ee9075-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:46 crc kubenswrapper[4755]: I1210 15:44:46.022285 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"900a05ac-78b6-44d0-9499-2dbfb52fcdfc","Type":"ContainerStarted","Data":"039ce724f7f6d9fb215652a576654875e4ef688647572ecb5941dbab188636c0"} Dec 10 15:44:46 crc kubenswrapper[4755]: I1210 15:44:46.061756 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6d5449c6d4-5rw2f" event={"ID":"7d152fd0-fbbf-4c7b-874a-169860ee9075","Type":"ContainerDied","Data":"31d1afa3cb59c5ad381b33b84d5f4d3650540c48cfb95df7de723424328cbe05"} Dec 10 15:44:46 crc kubenswrapper[4755]: I1210 15:44:46.061891 4755 scope.go:117] "RemoveContainer" containerID="4e05778ed82b900074c8fe7e096c3ce7cd0b6492a587cbaa2f1c4f064a7c250e" Dec 10 15:44:46 crc kubenswrapper[4755]: I1210 15:44:46.062030 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6d5449c6d4-5rw2f" Dec 10 15:44:46 crc kubenswrapper[4755]: I1210 15:44:46.097991 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-zmh5b" event={"ID":"a49be472-21ec-4f59-811e-9e1196ebaa14","Type":"ContainerDied","Data":"89768e0ba76d72ffa84a0d0d826e4b7946c5e3af3c50b3dff41f2fffd2b7ca24"} Dec 10 15:44:46 crc kubenswrapper[4755]: I1210 15:44:46.098090 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-zmh5b" Dec 10 15:44:46 crc kubenswrapper[4755]: I1210 15:44:46.146603 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58bd69657f-n5hkt"] Dec 10 15:44:46 crc kubenswrapper[4755]: I1210 15:44:46.161625 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d152fd0-fbbf-4c7b-874a-169860ee9075-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "7d152fd0-fbbf-4c7b-874a-169860ee9075" (UID: "7d152fd0-fbbf-4c7b-874a-169860ee9075"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:44:46 crc kubenswrapper[4755]: I1210 15:44:46.184658 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"] Dec 10 15:44:46 crc kubenswrapper[4755]: I1210 15:44:46.205099 4755 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d152fd0-fbbf-4c7b-874a-169860ee9075-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:46 crc kubenswrapper[4755]: I1210 15:44:46.224413 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-zmh5b"] Dec 10 15:44:46 crc kubenswrapper[4755]: I1210 15:44:46.239899 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-zmh5b"] Dec 10 15:44:46 crc kubenswrapper[4755]: I1210 15:44:46.248943 4755 scope.go:117] "RemoveContainer" containerID="61de7b2c15e59d0bd635c02e1e4985b8bfd2694a7ded31026d0d4a36cc326f17" Dec 10 15:44:46 crc kubenswrapper[4755]: W1210 15:44:46.289001 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5843012_5395_4dac_9506_b2e080cdc229.slice/crio-6f2db4715219749af04d614269699ea1dd1d7acfffcf2650477d497df92b77cf WatchSource:0}: Error finding container 6f2db4715219749af04d614269699ea1dd1d7acfffcf2650477d497df92b77cf: Status 404 returned error can't find the container with id 6f2db4715219749af04d614269699ea1dd1d7acfffcf2650477d497df92b77cf Dec 10 15:44:46 crc kubenswrapper[4755]: I1210 15:44:46.303535 4755 scope.go:117] "RemoveContainer" containerID="2780f449c877d41f07a88f85b00e84d23ba1e0e2a9549e7d1e10297cf18f367c" Dec 10 15:44:46 crc kubenswrapper[4755]: I1210 15:44:46.403142 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"] Dec 10 15:44:46 crc kubenswrapper[4755]: I1210 15:44:46.553521 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6d5449c6d4-5rw2f"] Dec 10 15:44:46 crc kubenswrapper[4755]: I1210 15:44:46.600392 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6d5449c6d4-5rw2f"] Dec 10 15:44:46 crc kubenswrapper[4755]: I1210 15:44:46.626732 4755 scope.go:117] "RemoveContainer" containerID="87e888cccd3dd0f2689de44f9c5df08b9309506c2534ac642fdf8c6c73559346" Dec 10 15:44:46 crc kubenswrapper[4755]: I1210 15:44:46.700319 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Dec 10 15:44:46 crc kubenswrapper[4755]: E1210 15:44:46.700744 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d152fd0-fbbf-4c7b-874a-169860ee9075" containerName="neutron-httpd" Dec 10 15:44:46 crc kubenswrapper[4755]: I1210 15:44:46.700756 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d152fd0-fbbf-4c7b-874a-169860ee9075" containerName="neutron-httpd" Dec 10 15:44:46 crc kubenswrapper[4755]: E1210 15:44:46.700771 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d152fd0-fbbf-4c7b-874a-169860ee9075" containerName="neutron-api" Dec 10 15:44:46 crc kubenswrapper[4755]: I1210 15:44:46.700776 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d152fd0-fbbf-4c7b-874a-169860ee9075" containerName="neutron-api" Dec 10 15:44:46 crc kubenswrapper[4755]: E1210 15:44:46.700788 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a49be472-21ec-4f59-811e-9e1196ebaa14" containerName="init" Dec 10 15:44:46 crc kubenswrapper[4755]: I1210 15:44:46.700794 4755 
state_mem.go:107] "Deleted CPUSet assignment" podUID="a49be472-21ec-4f59-811e-9e1196ebaa14" containerName="init" Dec 10 15:44:46 crc kubenswrapper[4755]: E1210 15:44:46.700822 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a49be472-21ec-4f59-811e-9e1196ebaa14" containerName="dnsmasq-dns" Dec 10 15:44:46 crc kubenswrapper[4755]: I1210 15:44:46.700827 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="a49be472-21ec-4f59-811e-9e1196ebaa14" containerName="dnsmasq-dns" Dec 10 15:44:46 crc kubenswrapper[4755]: I1210 15:44:46.701019 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="a49be472-21ec-4f59-811e-9e1196ebaa14" containerName="dnsmasq-dns" Dec 10 15:44:46 crc kubenswrapper[4755]: I1210 15:44:46.701031 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d152fd0-fbbf-4c7b-874a-169860ee9075" containerName="neutron-httpd" Dec 10 15:44:46 crc kubenswrapper[4755]: I1210 15:44:46.701040 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d152fd0-fbbf-4c7b-874a-169860ee9075" containerName="neutron-api" Dec 10 15:44:46 crc kubenswrapper[4755]: I1210 15:44:46.701716 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 10 15:44:46 crc kubenswrapper[4755]: I1210 15:44:46.707991 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-qjkzd" Dec 10 15:44:46 crc kubenswrapper[4755]: I1210 15:44:46.708206 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Dec 10 15:44:46 crc kubenswrapper[4755]: I1210 15:44:46.708366 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Dec 10 15:44:46 crc kubenswrapper[4755]: I1210 15:44:46.732249 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 10 15:44:46 crc kubenswrapper[4755]: I1210 15:44:46.834700 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88028196-de0c-42f7-a781-1b37d35c91f0-combined-ca-bundle\") pod \"openstackclient\" (UID: \"88028196-de0c-42f7-a781-1b37d35c91f0\") " pod="openstack/openstackclient" Dec 10 15:44:46 crc kubenswrapper[4755]: I1210 15:44:46.834757 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/88028196-de0c-42f7-a781-1b37d35c91f0-openstack-config-secret\") pod \"openstackclient\" (UID: \"88028196-de0c-42f7-a781-1b37d35c91f0\") " pod="openstack/openstackclient" Dec 10 15:44:46 crc kubenswrapper[4755]: I1210 15:44:46.834842 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/88028196-de0c-42f7-a781-1b37d35c91f0-openstack-config\") pod \"openstackclient\" (UID: \"88028196-de0c-42f7-a781-1b37d35c91f0\") " pod="openstack/openstackclient" Dec 10 15:44:46 crc kubenswrapper[4755]: I1210 15:44:46.834894 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7c85\" (UniqueName: \"kubernetes.io/projected/88028196-de0c-42f7-a781-1b37d35c91f0-kube-api-access-m7c85\") pod \"openstackclient\" (UID: \"88028196-de0c-42f7-a781-1b37d35c91f0\") " pod="openstack/openstackclient" Dec 10 15:44:46 crc kubenswrapper[4755]: 
I1210 15:44:46.937444 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88028196-de0c-42f7-a781-1b37d35c91f0-combined-ca-bundle\") pod \"openstackclient\" (UID: \"88028196-de0c-42f7-a781-1b37d35c91f0\") " pod="openstack/openstackclient" Dec 10 15:44:46 crc kubenswrapper[4755]: I1210 15:44:46.937508 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/88028196-de0c-42f7-a781-1b37d35c91f0-openstack-config-secret\") pod \"openstackclient\" (UID: \"88028196-de0c-42f7-a781-1b37d35c91f0\") " pod="openstack/openstackclient" Dec 10 15:44:46 crc kubenswrapper[4755]: I1210 15:44:46.937558 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/88028196-de0c-42f7-a781-1b37d35c91f0-openstack-config\") pod \"openstackclient\" (UID: \"88028196-de0c-42f7-a781-1b37d35c91f0\") " pod="openstack/openstackclient" Dec 10 15:44:46 crc kubenswrapper[4755]: I1210 15:44:46.937870 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7c85\" (UniqueName: \"kubernetes.io/projected/88028196-de0c-42f7-a781-1b37d35c91f0-kube-api-access-m7c85\") pod \"openstackclient\" (UID: \"88028196-de0c-42f7-a781-1b37d35c91f0\") " pod="openstack/openstackclient" Dec 10 15:44:46 crc kubenswrapper[4755]: I1210 15:44:46.938293 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/88028196-de0c-42f7-a781-1b37d35c91f0-openstack-config\") pod \"openstackclient\" (UID: \"88028196-de0c-42f7-a781-1b37d35c91f0\") " pod="openstack/openstackclient" Dec 10 15:44:46 crc kubenswrapper[4755]: I1210 15:44:46.943498 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/88028196-de0c-42f7-a781-1b37d35c91f0-openstack-config-secret\") pod \"openstackclient\" (UID: \"88028196-de0c-42f7-a781-1b37d35c91f0\") " pod="openstack/openstackclient" Dec 10 15:44:46 crc kubenswrapper[4755]: I1210 15:44:46.944014 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88028196-de0c-42f7-a781-1b37d35c91f0-combined-ca-bundle\") pod \"openstackclient\" (UID: \"88028196-de0c-42f7-a781-1b37d35c91f0\") " pod="openstack/openstackclient" Dec 10 15:44:47 crc kubenswrapper[4755]: I1210 15:44:47.002123 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7c85\" (UniqueName: \"kubernetes.io/projected/88028196-de0c-42f7-a781-1b37d35c91f0-kube-api-access-m7c85\") pod \"openstackclient\" (UID: \"88028196-de0c-42f7-a781-1b37d35c91f0\") " pod="openstack/openstackclient" Dec 10 15:44:47 crc kubenswrapper[4755]: I1210 15:44:47.062944 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Dec 10 15:44:47 crc kubenswrapper[4755]: I1210 15:44:47.064671 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Dec 10 15:44:47 crc kubenswrapper[4755]: I1210 15:44:47.092491 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Dec 10 15:44:47 crc kubenswrapper[4755]: I1210 15:44:47.161764 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Dec 10 15:44:47 crc kubenswrapper[4755]: I1210 15:44:47.164041 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 10 15:44:47 crc kubenswrapper[4755]: I1210 15:44:47.169234 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"d5843012-5395-4dac-9506-b2e080cdc229","Type":"ContainerStarted","Data":"6f2db4715219749af04d614269699ea1dd1d7acfffcf2650477d497df92b77cf"} Dec 10 15:44:47 crc kubenswrapper[4755]: I1210 15:44:47.232373 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58bd69657f-n5hkt" event={"ID":"45648d9c-bd22-443a-bf3f-8c08998388ec","Type":"ContainerStarted","Data":"88ba78b8d8c407d0e75953a4eb3d11b94ef799a5f0e8ecbe484ff97d35f0577b"} Dec 10 15:44:47 crc kubenswrapper[4755]: I1210 15:44:47.246709 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9d4b4ae-84b2-4971-aafc-6f5bdad0b69d-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a9d4b4ae-84b2-4971-aafc-6f5bdad0b69d\") " pod="openstack/openstackclient" Dec 10 15:44:47 crc kubenswrapper[4755]: I1210 15:44:47.246792 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a9d4b4ae-84b2-4971-aafc-6f5bdad0b69d-openstack-config\") pod \"openstackclient\" (UID: \"a9d4b4ae-84b2-4971-aafc-6f5bdad0b69d\") " pod="openstack/openstackclient" Dec 10 15:44:47 crc kubenswrapper[4755]: I1210 15:44:47.246975 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ss5sh\" (UniqueName: \"kubernetes.io/projected/a9d4b4ae-84b2-4971-aafc-6f5bdad0b69d-kube-api-access-ss5sh\") pod \"openstackclient\" (UID: \"a9d4b4ae-84b2-4971-aafc-6f5bdad0b69d\") " pod="openstack/openstackclient" Dec 10 15:44:47 crc kubenswrapper[4755]: I1210 15:44:47.247029 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a9d4b4ae-84b2-4971-aafc-6f5bdad0b69d-openstack-config-secret\") pod \"openstackclient\" (UID: \"a9d4b4ae-84b2-4971-aafc-6f5bdad0b69d\") " pod="openstack/openstackclient" Dec 10 15:44:47 crc kubenswrapper[4755]: I1210 15:44:47.252215 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 10 15:44:47 crc kubenswrapper[4755]: I1210 15:44:47.266952 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"5676daa1-d125-4278-9766-c9fa314e5d77","Type":"ContainerStarted","Data":"be8d9069a5c380a95e16846d44db4723edfdb61ab6ce4a2f3babe8186b586a17"} Dec 10 15:44:47 crc kubenswrapper[4755]: I1210 15:44:47.364990 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ss5sh\" (UniqueName: \"kubernetes.io/projected/a9d4b4ae-84b2-4971-aafc-6f5bdad0b69d-kube-api-access-ss5sh\") pod \"openstackclient\" (UID: \"a9d4b4ae-84b2-4971-aafc-6f5bdad0b69d\") " pod="openstack/openstackclient" Dec 10 
15:44:47 crc kubenswrapper[4755]: I1210 15:44:47.365293 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a9d4b4ae-84b2-4971-aafc-6f5bdad0b69d-openstack-config-secret\") pod \"openstackclient\" (UID: \"a9d4b4ae-84b2-4971-aafc-6f5bdad0b69d\") " pod="openstack/openstackclient" Dec 10 15:44:47 crc kubenswrapper[4755]: I1210 15:44:47.365517 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9d4b4ae-84b2-4971-aafc-6f5bdad0b69d-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a9d4b4ae-84b2-4971-aafc-6f5bdad0b69d\") " pod="openstack/openstackclient" Dec 10 15:44:47 crc kubenswrapper[4755]: I1210 15:44:47.365647 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a9d4b4ae-84b2-4971-aafc-6f5bdad0b69d-openstack-config\") pod \"openstackclient\" (UID: \"a9d4b4ae-84b2-4971-aafc-6f5bdad0b69d\") " pod="openstack/openstackclient" Dec 10 15:44:47 crc kubenswrapper[4755]: I1210 15:44:47.366707 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a9d4b4ae-84b2-4971-aafc-6f5bdad0b69d-openstack-config\") pod \"openstackclient\" (UID: \"a9d4b4ae-84b2-4971-aafc-6f5bdad0b69d\") " pod="openstack/openstackclient" Dec 10 15:44:47 crc kubenswrapper[4755]: I1210 15:44:47.400287 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a9d4b4ae-84b2-4971-aafc-6f5bdad0b69d-openstack-config-secret\") pod \"openstackclient\" (UID: \"a9d4b4ae-84b2-4971-aafc-6f5bdad0b69d\") " pod="openstack/openstackclient" Dec 10 15:44:47 crc kubenswrapper[4755]: I1210 15:44:47.408192 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9d4b4ae-84b2-4971-aafc-6f5bdad0b69d-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a9d4b4ae-84b2-4971-aafc-6f5bdad0b69d\") " pod="openstack/openstackclient" Dec 10 15:44:47 crc kubenswrapper[4755]: I1210 15:44:47.434353 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ss5sh\" (UniqueName: \"kubernetes.io/projected/a9d4b4ae-84b2-4971-aafc-6f5bdad0b69d-kube-api-access-ss5sh\") pod \"openstackclient\" (UID: \"a9d4b4ae-84b2-4971-aafc-6f5bdad0b69d\") " pod="openstack/openstackclient" Dec 10 15:44:47 crc kubenswrapper[4755]: I1210 15:44:47.551884 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Dec 10 15:44:47 crc kubenswrapper[4755]: E1210 15:44:47.748038 4755 log.go:32] "RunPodSandbox from runtime service failed" err=< Dec 10 15:44:47 crc kubenswrapper[4755]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_88028196-de0c-42f7-a781-1b37d35c91f0_0(c8a8ca0568b6a19069d3f262bf11d37f924d3a4fb7237e804a80e4c9faf2f002): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"c8a8ca0568b6a19069d3f262bf11d37f924d3a4fb7237e804a80e4c9faf2f002" Netns:"/var/run/netns/b2c7259f-1621-4cf2-8b60-310ae0cc728c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=c8a8ca0568b6a19069d3f262bf11d37f924d3a4fb7237e804a80e4c9faf2f002;K8S_POD_UID=88028196-de0c-42f7-a781-1b37d35c91f0" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/88028196-de0c-42f7-a781-1b37d35c91f0]: expected pod UID "88028196-de0c-42f7-a781-1b37d35c91f0" but got "a9d4b4ae-84b2-4971-aafc-6f5bdad0b69d" from Kube API Dec 10 15:44:47 crc kubenswrapper[4755]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Dec 10 15:44:47 crc kubenswrapper[4755]: > Dec 10 15:44:47 crc kubenswrapper[4755]: E1210 15:44:47.748114 4755 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Dec 10 15:44:47 crc kubenswrapper[4755]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_88028196-de0c-42f7-a781-1b37d35c91f0_0(c8a8ca0568b6a19069d3f262bf11d37f924d3a4fb7237e804a80e4c9faf2f002): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"c8a8ca0568b6a19069d3f262bf11d37f924d3a4fb7237e804a80e4c9faf2f002" Netns:"/var/run/netns/b2c7259f-1621-4cf2-8b60-310ae0cc728c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=c8a8ca0568b6a19069d3f262bf11d37f924d3a4fb7237e804a80e4c9faf2f002;K8S_POD_UID=88028196-de0c-42f7-a781-1b37d35c91f0" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/88028196-de0c-42f7-a781-1b37d35c91f0]: expected pod UID "88028196-de0c-42f7-a781-1b37d35c91f0" but got "a9d4b4ae-84b2-4971-aafc-6f5bdad0b69d" from Kube API Dec 10 15:44:47 crc kubenswrapper[4755]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Dec 10 15:44:47 crc kubenswrapper[4755]: > pod="openstack/openstackclient" Dec 10 15:44:47 crc kubenswrapper[4755]: I1210 15:44:47.791330 4755 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="7d152fd0-fbbf-4c7b-874a-169860ee9075" path="/var/lib/kubelet/pods/7d152fd0-fbbf-4c7b-874a-169860ee9075/volumes" Dec 10 15:44:47 crc kubenswrapper[4755]: I1210 15:44:47.792788 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a49be472-21ec-4f59-811e-9e1196ebaa14" path="/var/lib/kubelet/pods/a49be472-21ec-4f59-811e-9e1196ebaa14/volumes" Dec 10 15:44:48 crc kubenswrapper[4755]: I1210 15:44:48.022388 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Dec 10 15:44:48 crc kubenswrapper[4755]: I1210 15:44:48.094577 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 10 15:44:48 crc kubenswrapper[4755]: I1210 15:44:48.372105 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"900a05ac-78b6-44d0-9499-2dbfb52fcdfc","Type":"ContainerStarted","Data":"edfe56996a103d8723eae15684aef831997b0cf1fac7bf6fc8fecb48eba7164f"} Dec 10 15:44:48 crc kubenswrapper[4755]: I1210 15:44:48.398590 4755 generic.go:334] "Generic (PLEG): container finished" podID="45648d9c-bd22-443a-bf3f-8c08998388ec" containerID="dd434e659137b34209d185951bfbc9ce56017c06fa6214497a254d196c1b779e" exitCode=0 Dec 10 15:44:48 crc kubenswrapper[4755]: I1210 15:44:48.398702 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58bd69657f-n5hkt" event={"ID":"45648d9c-bd22-443a-bf3f-8c08998388ec","Type":"ContainerDied","Data":"dd434e659137b34209d185951bfbc9ce56017c06fa6214497a254d196c1b779e"} Dec 10 15:44:48 crc kubenswrapper[4755]: I1210 15:44:48.423362 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 10 15:44:48 crc kubenswrapper[4755]: I1210 15:44:48.424868 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"5676daa1-d125-4278-9766-c9fa314e5d77","Type":"ContainerStarted","Data":"079b08419c820dcedd7e988101fd55b4f12254c3cc355aea4451e1d12e70500a"} Dec 10 15:44:48 crc kubenswrapper[4755]: I1210 15:44:48.424902 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"5676daa1-d125-4278-9766-c9fa314e5d77","Type":"ContainerStarted","Data":"05aacfbcedb1b4fb6b1c7503bfe4daa5df3eb327df5c78895b6f340933785fe4"} Dec 10 15:44:48 crc kubenswrapper[4755]: I1210 15:44:48.424917 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-api-0" Dec 10 15:44:48 crc kubenswrapper[4755]: I1210 15:44:48.425066 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="976d7784-c4ed-4590-a34c-a3f79aedf471" containerName="cinder-scheduler" containerID="cri-o://a3261f341319fb5c64ca2ea75a8547cd8d042c49634738b1ced0204857c1d6a3" gracePeriod=30 Dec 10 15:44:48 crc kubenswrapper[4755]: I1210 15:44:48.425204 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="976d7784-c4ed-4590-a34c-a3f79aedf471" containerName="probe" containerID="cri-o://261710c1e620e43715c5a98b322a0ac2cde54d540ff31a515329fe134358cde0" gracePeriod=30 Dec 10 15:44:48 crc kubenswrapper[4755]: W1210 15:44:48.465139 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9d4b4ae_84b2_4971_aafc_6f5bdad0b69d.slice/crio-c1274440098022b581a7dc1a885ae1c0e214a267e8a229e07b59d5039ca242cb 
WatchSource:0}: Error finding container c1274440098022b581a7dc1a885ae1c0e214a267e8a229e07b59d5039ca242cb: Status 404 returned error can't find the container with id c1274440098022b581a7dc1a885ae1c0e214a267e8a229e07b59d5039ca242cb Dec 10 15:44:48 crc kubenswrapper[4755]: I1210 15:44:48.473204 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 10 15:44:48 crc kubenswrapper[4755]: I1210 15:44:48.500222 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-api-0" podStartSLOduration=4.5002025119999995 podStartE2EDuration="4.500202512s" podCreationTimestamp="2025-12-10 15:44:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:44:48.487794785 +0000 UTC m=+1285.088678417" watchObservedRunningTime="2025-12-10 15:44:48.500202512 +0000 UTC m=+1285.101086154" Dec 10 15:44:48 crc kubenswrapper[4755]: I1210 15:44:48.634501 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 10 15:44:48 crc kubenswrapper[4755]: I1210 15:44:48.639863 4755 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="88028196-de0c-42f7-a781-1b37d35c91f0" podUID="a9d4b4ae-84b2-4971-aafc-6f5bdad0b69d" Dec 10 15:44:48 crc kubenswrapper[4755]: I1210 15:44:48.822787 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/88028196-de0c-42f7-a781-1b37d35c91f0-openstack-config\") pod \"88028196-de0c-42f7-a781-1b37d35c91f0\" (UID: \"88028196-de0c-42f7-a781-1b37d35c91f0\") " Dec 10 15:44:48 crc kubenswrapper[4755]: I1210 15:44:48.822832 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88028196-de0c-42f7-a781-1b37d35c91f0-combined-ca-bundle\") pod \"88028196-de0c-42f7-a781-1b37d35c91f0\" (UID: \"88028196-de0c-42f7-a781-1b37d35c91f0\") " Dec 10 15:44:48 crc kubenswrapper[4755]: I1210 15:44:48.822882 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/88028196-de0c-42f7-a781-1b37d35c91f0-openstack-config-secret\") pod \"88028196-de0c-42f7-a781-1b37d35c91f0\" (UID: \"88028196-de0c-42f7-a781-1b37d35c91f0\") " Dec 10 15:44:48 crc kubenswrapper[4755]: I1210 15:44:48.822930 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7c85\" (UniqueName: \"kubernetes.io/projected/88028196-de0c-42f7-a781-1b37d35c91f0-kube-api-access-m7c85\") pod \"88028196-de0c-42f7-a781-1b37d35c91f0\" (UID: \"88028196-de0c-42f7-a781-1b37d35c91f0\") " Dec 10 15:44:48 crc kubenswrapper[4755]: I1210 15:44:48.827033 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88028196-de0c-42f7-a781-1b37d35c91f0-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "88028196-de0c-42f7-a781-1b37d35c91f0" (UID: "88028196-de0c-42f7-a781-1b37d35c91f0"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:44:48 crc kubenswrapper[4755]: I1210 15:44:48.841886 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88028196-de0c-42f7-a781-1b37d35c91f0-kube-api-access-m7c85" (OuterVolumeSpecName: "kube-api-access-m7c85") pod "88028196-de0c-42f7-a781-1b37d35c91f0" (UID: "88028196-de0c-42f7-a781-1b37d35c91f0"). InnerVolumeSpecName "kube-api-access-m7c85". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:44:48 crc kubenswrapper[4755]: I1210 15:44:48.841944 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88028196-de0c-42f7-a781-1b37d35c91f0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "88028196-de0c-42f7-a781-1b37d35c91f0" (UID: "88028196-de0c-42f7-a781-1b37d35c91f0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:44:48 crc kubenswrapper[4755]: I1210 15:44:48.843202 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88028196-de0c-42f7-a781-1b37d35c91f0-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "88028196-de0c-42f7-a781-1b37d35c91f0" (UID: "88028196-de0c-42f7-a781-1b37d35c91f0"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:44:48 crc kubenswrapper[4755]: I1210 15:44:48.927426 4755 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/88028196-de0c-42f7-a781-1b37d35c91f0-openstack-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:48 crc kubenswrapper[4755]: I1210 15:44:48.927788 4755 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88028196-de0c-42f7-a781-1b37d35c91f0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:48 crc kubenswrapper[4755]: I1210 15:44:48.927801 4755 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/88028196-de0c-42f7-a781-1b37d35c91f0-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:48 crc kubenswrapper[4755]: I1210 15:44:48.927814 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m7c85\" (UniqueName: \"kubernetes.io/projected/88028196-de0c-42f7-a781-1b37d35c91f0-kube-api-access-m7c85\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:49 crc kubenswrapper[4755]: I1210 15:44:49.075996 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-api-0"] Dec 10 15:44:49 crc kubenswrapper[4755]: I1210 15:44:49.436165 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"900a05ac-78b6-44d0-9499-2dbfb52fcdfc","Type":"ContainerStarted","Data":"eadba9128440ebe24d5bd8dfabd26ba72d064231efd8cc8793395e961080720c"} Dec 10 15:44:49 crc kubenswrapper[4755]: I1210 15:44:49.436666 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Dec 10 15:44:49 crc kubenswrapper[4755]: I1210 15:44:49.443702 4755 generic.go:334] "Generic (PLEG): container finished" podID="976d7784-c4ed-4590-a34c-a3f79aedf471" containerID="261710c1e620e43715c5a98b322a0ac2cde54d540ff31a515329fe134358cde0" exitCode=0 Dec 10 15:44:49 crc kubenswrapper[4755]: I1210 15:44:49.443772 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-scheduler-0" event={"ID":"976d7784-c4ed-4590-a34c-a3f79aedf471","Type":"ContainerDied","Data":"261710c1e620e43715c5a98b322a0ac2cde54d540ff31a515329fe134358cde0"} Dec 10 15:44:49 crc kubenswrapper[4755]: I1210 15:44:49.446139 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58bd69657f-n5hkt" event={"ID":"45648d9c-bd22-443a-bf3f-8c08998388ec","Type":"ContainerStarted","Data":"20930194f79f2f7e19899ac550e4647fc661cb704ee91cde2a2d2036c5cc0cd8"} Dec 10 15:44:49 crc kubenswrapper[4755]: I1210 15:44:49.446299 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58bd69657f-n5hkt" Dec 10 15:44:49 crc kubenswrapper[4755]: I1210 15:44:49.448380 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"a9d4b4ae-84b2-4971-aafc-6f5bdad0b69d","Type":"ContainerStarted","Data":"c1274440098022b581a7dc1a885ae1c0e214a267e8a229e07b59d5039ca242cb"} Dec 10 15:44:49 crc kubenswrapper[4755]: I1210 15:44:49.448396 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 10 15:44:49 crc kubenswrapper[4755]: I1210 15:44:49.470415 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=6.470389537 podStartE2EDuration="6.470389537s" podCreationTimestamp="2025-12-10 15:44:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:44:49.458119334 +0000 UTC m=+1286.059002976" watchObservedRunningTime="2025-12-10 15:44:49.470389537 +0000 UTC m=+1286.071273179" Dec 10 15:44:49 crc kubenswrapper[4755]: I1210 15:44:49.483513 4755 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="88028196-de0c-42f7-a781-1b37d35c91f0" podUID="a9d4b4ae-84b2-4971-aafc-6f5bdad0b69d" Dec 10 15:44:49 crc kubenswrapper[4755]: I1210 15:44:49.485495 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58bd69657f-n5hkt" podStartSLOduration=5.485458097 podStartE2EDuration="5.485458097s" podCreationTimestamp="2025-12-10 15:44:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:44:49.480259445 +0000 UTC m=+1286.081143077" watchObservedRunningTime="2025-12-10 15:44:49.485458097 +0000 UTC m=+1286.086341729" Dec 10 15:44:49 crc kubenswrapper[4755]: I1210 15:44:49.775394 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88028196-de0c-42f7-a781-1b37d35c91f0" path="/var/lib/kubelet/pods/88028196-de0c-42f7-a781-1b37d35c91f0/volumes" Dec 10 15:44:50 crc kubenswrapper[4755]: I1210 15:44:50.465911 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"d5843012-5395-4dac-9506-b2e080cdc229","Type":"ContainerStarted","Data":"4923bc3b4e1ca7e189ac6c5d0cf80cc1de3c2a26b6e033b555fdd137c7170f7a"} Dec 10 15:44:50 crc kubenswrapper[4755]: I1210 15:44:50.466387 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-api-0" podUID="5676daa1-d125-4278-9766-c9fa314e5d77" containerName="cloudkitty-api-log" containerID="cri-o://05aacfbcedb1b4fb6b1c7503bfe4daa5df3eb327df5c78895b6f340933785fe4" gracePeriod=30 Dec 10 15:44:50 crc kubenswrapper[4755]: I1210 15:44:50.466441 4755 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-api-0" podUID="5676daa1-d125-4278-9766-c9fa314e5d77" containerName="cloudkitty-api" containerID="cri-o://079b08419c820dcedd7e988101fd55b4f12254c3cc355aea4451e1d12e70500a" gracePeriod=30 Dec 10 15:44:50 crc kubenswrapper[4755]: I1210 15:44:50.497929 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-proc-0" podStartSLOduration=2.656796445 podStartE2EDuration="6.497910968s" podCreationTimestamp="2025-12-10 15:44:44 +0000 UTC" firstStartedPulling="2025-12-10 15:44:46.310439193 +0000 UTC m=+1282.911322825" lastFinishedPulling="2025-12-10 15:44:50.151553716 +0000 UTC m=+1286.752437348" observedRunningTime="2025-12-10 15:44:50.496786167 +0000 UTC m=+1287.097669799" watchObservedRunningTime="2025-12-10 15:44:50.497910968 +0000 UTC m=+1287.098794590" Dec 10 15:44:50 crc kubenswrapper[4755]: I1210 15:44:50.564117 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-proc-0"] Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.303448 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.313777 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0" Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.421146 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/976d7784-c4ed-4590-a34c-a3f79aedf471-scripts\") pod \"976d7784-c4ed-4590-a34c-a3f79aedf471\" (UID: \"976d7784-c4ed-4590-a34c-a3f79aedf471\") " Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.421186 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5676daa1-d125-4278-9766-c9fa314e5d77-combined-ca-bundle\") pod \"5676daa1-d125-4278-9766-c9fa314e5d77\" (UID: \"5676daa1-d125-4278-9766-c9fa314e5d77\") " Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.421244 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/976d7784-c4ed-4590-a34c-a3f79aedf471-config-data-custom\") pod \"976d7784-c4ed-4590-a34c-a3f79aedf471\" (UID: \"976d7784-c4ed-4590-a34c-a3f79aedf471\") " Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.421282 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5676daa1-d125-4278-9766-c9fa314e5d77-logs\") pod \"5676daa1-d125-4278-9766-c9fa314e5d77\" (UID: \"5676daa1-d125-4278-9766-c9fa314e5d77\") " Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.421316 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5676daa1-d125-4278-9766-c9fa314e5d77-scripts\") pod \"5676daa1-d125-4278-9766-c9fa314e5d77\" (UID: \"5676daa1-d125-4278-9766-c9fa314e5d77\") " Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.421340 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5676daa1-d125-4278-9766-c9fa314e5d77-config-data\") pod \"5676daa1-d125-4278-9766-c9fa314e5d77\" (UID: \"5676daa1-d125-4278-9766-c9fa314e5d77\") " Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.421362 4755 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/976d7784-c4ed-4590-a34c-a3f79aedf471-combined-ca-bundle\") pod \"976d7784-c4ed-4590-a34c-a3f79aedf471\" (UID: \"976d7784-c4ed-4590-a34c-a3f79aedf471\") " Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.421421 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5676daa1-d125-4278-9766-c9fa314e5d77-config-data-custom\") pod \"5676daa1-d125-4278-9766-c9fa314e5d77\" (UID: \"5676daa1-d125-4278-9766-c9fa314e5d77\") " Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.421458 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/5676daa1-d125-4278-9766-c9fa314e5d77-certs\") pod \"5676daa1-d125-4278-9766-c9fa314e5d77\" (UID: \"5676daa1-d125-4278-9766-c9fa314e5d77\") " Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.421527 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/976d7784-c4ed-4590-a34c-a3f79aedf471-config-data\") pod \"976d7784-c4ed-4590-a34c-a3f79aedf471\" (UID: \"976d7784-c4ed-4590-a34c-a3f79aedf471\") " Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.421559 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vqjt4\" (UniqueName: \"kubernetes.io/projected/5676daa1-d125-4278-9766-c9fa314e5d77-kube-api-access-vqjt4\") pod \"5676daa1-d125-4278-9766-c9fa314e5d77\" (UID: \"5676daa1-d125-4278-9766-c9fa314e5d77\") " Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.421579 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpjdh\" (UniqueName: \"kubernetes.io/projected/976d7784-c4ed-4590-a34c-a3f79aedf471-kube-api-access-zpjdh\") pod \"976d7784-c4ed-4590-a34c-a3f79aedf471\" (UID: \"976d7784-c4ed-4590-a34c-a3f79aedf471\") " Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.421612 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/976d7784-c4ed-4590-a34c-a3f79aedf471-etc-machine-id\") pod \"976d7784-c4ed-4590-a34c-a3f79aedf471\" (UID: \"976d7784-c4ed-4590-a34c-a3f79aedf471\") " Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.422063 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/976d7784-c4ed-4590-a34c-a3f79aedf471-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "976d7784-c4ed-4590-a34c-a3f79aedf471" (UID: "976d7784-c4ed-4590-a34c-a3f79aedf471"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.429031 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5676daa1-d125-4278-9766-c9fa314e5d77-logs" (OuterVolumeSpecName: "logs") pod "5676daa1-d125-4278-9766-c9fa314e5d77" (UID: "5676daa1-d125-4278-9766-c9fa314e5d77"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.437721 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5676daa1-d125-4278-9766-c9fa314e5d77-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "5676daa1-d125-4278-9766-c9fa314e5d77" (UID: "5676daa1-d125-4278-9766-c9fa314e5d77"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.439451 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/976d7784-c4ed-4590-a34c-a3f79aedf471-kube-api-access-zpjdh" (OuterVolumeSpecName: "kube-api-access-zpjdh") pod "976d7784-c4ed-4590-a34c-a3f79aedf471" (UID: "976d7784-c4ed-4590-a34c-a3f79aedf471"). InnerVolumeSpecName "kube-api-access-zpjdh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.440750 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5676daa1-d125-4278-9766-c9fa314e5d77-certs" (OuterVolumeSpecName: "certs") pod "5676daa1-d125-4278-9766-c9fa314e5d77" (UID: "5676daa1-d125-4278-9766-c9fa314e5d77"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.441634 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5676daa1-d125-4278-9766-c9fa314e5d77-scripts" (OuterVolumeSpecName: "scripts") pod "5676daa1-d125-4278-9766-c9fa314e5d77" (UID: "5676daa1-d125-4278-9766-c9fa314e5d77"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.442203 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/976d7784-c4ed-4590-a34c-a3f79aedf471-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "976d7784-c4ed-4590-a34c-a3f79aedf471" (UID: "976d7784-c4ed-4590-a34c-a3f79aedf471"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.442878 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/976d7784-c4ed-4590-a34c-a3f79aedf471-scripts" (OuterVolumeSpecName: "scripts") pod "976d7784-c4ed-4590-a34c-a3f79aedf471" (UID: "976d7784-c4ed-4590-a34c-a3f79aedf471"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.444603 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5676daa1-d125-4278-9766-c9fa314e5d77-kube-api-access-vqjt4" (OuterVolumeSpecName: "kube-api-access-vqjt4") pod "5676daa1-d125-4278-9766-c9fa314e5d77" (UID: "5676daa1-d125-4278-9766-c9fa314e5d77"). InnerVolumeSpecName "kube-api-access-vqjt4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.484672 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5676daa1-d125-4278-9766-c9fa314e5d77-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5676daa1-d125-4278-9766-c9fa314e5d77" (UID: "5676daa1-d125-4278-9766-c9fa314e5d77"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.491043 4755 generic.go:334] "Generic (PLEG): container finished" podID="976d7784-c4ed-4590-a34c-a3f79aedf471" containerID="a3261f341319fb5c64ca2ea75a8547cd8d042c49634738b1ced0204857c1d6a3" exitCode=0 Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.491125 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"976d7784-c4ed-4590-a34c-a3f79aedf471","Type":"ContainerDied","Data":"a3261f341319fb5c64ca2ea75a8547cd8d042c49634738b1ced0204857c1d6a3"} Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.491156 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"976d7784-c4ed-4590-a34c-a3f79aedf471","Type":"ContainerDied","Data":"1ddc4b210869be273b81a506441c8aedc392f5a680e9aab389eaf5cc30e81518"} Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.491171 4755 scope.go:117] "RemoveContainer" containerID="261710c1e620e43715c5a98b322a0ac2cde54d540ff31a515329fe134358cde0" Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.491306 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.495518 4755 generic.go:334] "Generic (PLEG): container finished" podID="5676daa1-d125-4278-9766-c9fa314e5d77" containerID="079b08419c820dcedd7e988101fd55b4f12254c3cc355aea4451e1d12e70500a" exitCode=0 Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.495539 4755 generic.go:334] "Generic (PLEG): container finished" podID="5676daa1-d125-4278-9766-c9fa314e5d77" containerID="05aacfbcedb1b4fb6b1c7503bfe4daa5df3eb327df5c78895b6f340933785fe4" exitCode=143 Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.496364 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0" Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.498607 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"5676daa1-d125-4278-9766-c9fa314e5d77","Type":"ContainerDied","Data":"079b08419c820dcedd7e988101fd55b4f12254c3cc355aea4451e1d12e70500a"} Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.498671 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"5676daa1-d125-4278-9766-c9fa314e5d77","Type":"ContainerDied","Data":"05aacfbcedb1b4fb6b1c7503bfe4daa5df3eb327df5c78895b6f340933785fe4"} Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.498683 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"5676daa1-d125-4278-9766-c9fa314e5d77","Type":"ContainerDied","Data":"be8d9069a5c380a95e16846d44db4723edfdb61ab6ce4a2f3babe8186b586a17"} Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.505391 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5676daa1-d125-4278-9766-c9fa314e5d77-config-data" (OuterVolumeSpecName: "config-data") pod "5676daa1-d125-4278-9766-c9fa314e5d77" (UID: "5676daa1-d125-4278-9766-c9fa314e5d77"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.523824 4755 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/976d7784-c4ed-4590-a34c-a3f79aedf471-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.523864 4755 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5676daa1-d125-4278-9766-c9fa314e5d77-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.523875 4755 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/976d7784-c4ed-4590-a34c-a3f79aedf471-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.523885 4755 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5676daa1-d125-4278-9766-c9fa314e5d77-logs\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.523893 4755 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5676daa1-d125-4278-9766-c9fa314e5d77-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.523903 4755 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5676daa1-d125-4278-9766-c9fa314e5d77-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.523911 4755 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5676daa1-d125-4278-9766-c9fa314e5d77-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.523919 4755 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/5676daa1-d125-4278-9766-c9fa314e5d77-certs\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.523928 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vqjt4\" (UniqueName: \"kubernetes.io/projected/5676daa1-d125-4278-9766-c9fa314e5d77-kube-api-access-vqjt4\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.523936 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpjdh\" (UniqueName: \"kubernetes.io/projected/976d7784-c4ed-4590-a34c-a3f79aedf471-kube-api-access-zpjdh\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.523944 4755 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/976d7784-c4ed-4590-a34c-a3f79aedf471-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.531631 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/976d7784-c4ed-4590-a34c-a3f79aedf471-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "976d7784-c4ed-4590-a34c-a3f79aedf471" (UID: "976d7784-c4ed-4590-a34c-a3f79aedf471"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.534261 4755 scope.go:117] "RemoveContainer" containerID="a3261f341319fb5c64ca2ea75a8547cd8d042c49634738b1ced0204857c1d6a3" Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.567391 4755 scope.go:117] "RemoveContainer" containerID="261710c1e620e43715c5a98b322a0ac2cde54d540ff31a515329fe134358cde0" Dec 10 15:44:51 crc kubenswrapper[4755]: E1210 15:44:51.570831 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"261710c1e620e43715c5a98b322a0ac2cde54d540ff31a515329fe134358cde0\": container with ID starting with 261710c1e620e43715c5a98b322a0ac2cde54d540ff31a515329fe134358cde0 not found: ID does not exist" containerID="261710c1e620e43715c5a98b322a0ac2cde54d540ff31a515329fe134358cde0" Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.570882 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"261710c1e620e43715c5a98b322a0ac2cde54d540ff31a515329fe134358cde0"} err="failed to get container status \"261710c1e620e43715c5a98b322a0ac2cde54d540ff31a515329fe134358cde0\": rpc error: code = NotFound desc = could not find container \"261710c1e620e43715c5a98b322a0ac2cde54d540ff31a515329fe134358cde0\": container with ID starting with 261710c1e620e43715c5a98b322a0ac2cde54d540ff31a515329fe134358cde0 not found: ID does not exist" Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.570906 4755 scope.go:117] "RemoveContainer" containerID="a3261f341319fb5c64ca2ea75a8547cd8d042c49634738b1ced0204857c1d6a3" Dec 10 15:44:51 crc kubenswrapper[4755]: E1210 15:44:51.571228 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3261f341319fb5c64ca2ea75a8547cd8d042c49634738b1ced0204857c1d6a3\": container with ID starting with a3261f341319fb5c64ca2ea75a8547cd8d042c49634738b1ced0204857c1d6a3 not found: ID does not exist" containerID="a3261f341319fb5c64ca2ea75a8547cd8d042c49634738b1ced0204857c1d6a3" Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.571268 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3261f341319fb5c64ca2ea75a8547cd8d042c49634738b1ced0204857c1d6a3"} err="failed to get container status \"a3261f341319fb5c64ca2ea75a8547cd8d042c49634738b1ced0204857c1d6a3\": rpc error: code = NotFound desc = could not find container \"a3261f341319fb5c64ca2ea75a8547cd8d042c49634738b1ced0204857c1d6a3\": container with ID starting with a3261f341319fb5c64ca2ea75a8547cd8d042c49634738b1ced0204857c1d6a3 not found: ID does not exist" Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.571291 4755 scope.go:117] "RemoveContainer" containerID="079b08419c820dcedd7e988101fd55b4f12254c3cc355aea4451e1d12e70500a" Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.609766 4755 scope.go:117] "RemoveContainer" containerID="05aacfbcedb1b4fb6b1c7503bfe4daa5df3eb327df5c78895b6f340933785fe4" Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.619138 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/976d7784-c4ed-4590-a34c-a3f79aedf471-config-data" (OuterVolumeSpecName: "config-data") pod "976d7784-c4ed-4590-a34c-a3f79aedf471" (UID: "976d7784-c4ed-4590-a34c-a3f79aedf471"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.630944 4755 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/976d7784-c4ed-4590-a34c-a3f79aedf471-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.630983 4755 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/976d7784-c4ed-4590-a34c-a3f79aedf471-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.746125 4755 scope.go:117] "RemoveContainer" containerID="079b08419c820dcedd7e988101fd55b4f12254c3cc355aea4451e1d12e70500a" Dec 10 15:44:51 crc kubenswrapper[4755]: E1210 15:44:51.746790 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"079b08419c820dcedd7e988101fd55b4f12254c3cc355aea4451e1d12e70500a\": container with ID starting with 079b08419c820dcedd7e988101fd55b4f12254c3cc355aea4451e1d12e70500a not found: ID does not exist" containerID="079b08419c820dcedd7e988101fd55b4f12254c3cc355aea4451e1d12e70500a" Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.746828 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"079b08419c820dcedd7e988101fd55b4f12254c3cc355aea4451e1d12e70500a"} err="failed to get container status \"079b08419c820dcedd7e988101fd55b4f12254c3cc355aea4451e1d12e70500a\": rpc error: code = NotFound desc = could not find container \"079b08419c820dcedd7e988101fd55b4f12254c3cc355aea4451e1d12e70500a\": container with ID starting with 079b08419c820dcedd7e988101fd55b4f12254c3cc355aea4451e1d12e70500a not found: ID does not exist" Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.746855 4755 scope.go:117] "RemoveContainer" containerID="05aacfbcedb1b4fb6b1c7503bfe4daa5df3eb327df5c78895b6f340933785fe4" Dec 10 15:44:51 crc kubenswrapper[4755]: E1210 15:44:51.747203 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05aacfbcedb1b4fb6b1c7503bfe4daa5df3eb327df5c78895b6f340933785fe4\": container with ID starting with 05aacfbcedb1b4fb6b1c7503bfe4daa5df3eb327df5c78895b6f340933785fe4 not found: ID does not exist" containerID="05aacfbcedb1b4fb6b1c7503bfe4daa5df3eb327df5c78895b6f340933785fe4" Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.747229 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05aacfbcedb1b4fb6b1c7503bfe4daa5df3eb327df5c78895b6f340933785fe4"} err="failed to get container status \"05aacfbcedb1b4fb6b1c7503bfe4daa5df3eb327df5c78895b6f340933785fe4\": rpc error: code = NotFound desc = could not find container \"05aacfbcedb1b4fb6b1c7503bfe4daa5df3eb327df5c78895b6f340933785fe4\": container with ID starting with 05aacfbcedb1b4fb6b1c7503bfe4daa5df3eb327df5c78895b6f340933785fe4 not found: ID does not exist" Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.747246 4755 scope.go:117] "RemoveContainer" containerID="079b08419c820dcedd7e988101fd55b4f12254c3cc355aea4451e1d12e70500a" Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.747547 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"079b08419c820dcedd7e988101fd55b4f12254c3cc355aea4451e1d12e70500a"} err="failed to get container status 
\"079b08419c820dcedd7e988101fd55b4f12254c3cc355aea4451e1d12e70500a\": rpc error: code = NotFound desc = could not find container \"079b08419c820dcedd7e988101fd55b4f12254c3cc355aea4451e1d12e70500a\": container with ID starting with 079b08419c820dcedd7e988101fd55b4f12254c3cc355aea4451e1d12e70500a not found: ID does not exist" Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.747576 4755 scope.go:117] "RemoveContainer" containerID="05aacfbcedb1b4fb6b1c7503bfe4daa5df3eb327df5c78895b6f340933785fe4" Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.747898 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05aacfbcedb1b4fb6b1c7503bfe4daa5df3eb327df5c78895b6f340933785fe4"} err="failed to get container status \"05aacfbcedb1b4fb6b1c7503bfe4daa5df3eb327df5c78895b6f340933785fe4\": rpc error: code = NotFound desc = could not find container \"05aacfbcedb1b4fb6b1c7503bfe4daa5df3eb327df5c78895b6f340933785fe4\": container with ID starting with 05aacfbcedb1b4fb6b1c7503bfe4daa5df3eb327df5c78895b6f340933785fe4 not found: ID does not exist" Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.843604 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.862697 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.877815 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-api-0"] Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.895739 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-api-0"] Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.910552 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 10 15:44:51 crc kubenswrapper[4755]: E1210 15:44:51.911088 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="976d7784-c4ed-4590-a34c-a3f79aedf471" containerName="probe" Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.911107 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="976d7784-c4ed-4590-a34c-a3f79aedf471" containerName="probe" Dec 10 15:44:51 crc kubenswrapper[4755]: E1210 15:44:51.911132 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5676daa1-d125-4278-9766-c9fa314e5d77" containerName="cloudkitty-api" Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.911140 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="5676daa1-d125-4278-9766-c9fa314e5d77" containerName="cloudkitty-api" Dec 10 15:44:51 crc kubenswrapper[4755]: E1210 15:44:51.911167 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="976d7784-c4ed-4590-a34c-a3f79aedf471" containerName="cinder-scheduler" Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.911175 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="976d7784-c4ed-4590-a34c-a3f79aedf471" containerName="cinder-scheduler" Dec 10 15:44:51 crc kubenswrapper[4755]: E1210 15:44:51.911192 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5676daa1-d125-4278-9766-c9fa314e5d77" containerName="cloudkitty-api-log" Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.911200 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="5676daa1-d125-4278-9766-c9fa314e5d77" containerName="cloudkitty-api-log" Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.911415 4755 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="5676daa1-d125-4278-9766-c9fa314e5d77" containerName="cloudkitty-api" Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.911439 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="5676daa1-d125-4278-9766-c9fa314e5d77" containerName="cloudkitty-api-log" Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.911454 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="976d7784-c4ed-4590-a34c-a3f79aedf471" containerName="probe" Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.911481 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="976d7784-c4ed-4590-a34c-a3f79aedf471" containerName="cinder-scheduler" Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.912957 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.918591 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.926526 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-api-0"] Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.928650 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0" Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.931596 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.933598 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-public-svc" Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.933781 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-api-config-data" Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.933905 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-internal-svc" Dec 10 15:44:51 crc kubenswrapper[4755]: I1210 15:44:51.959932 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"] Dec 10 15:44:52 crc kubenswrapper[4755]: I1210 15:44:52.039753 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd0b3d98-9eda-4a87-8540-4a15ec2c174d-scripts\") pod \"cinder-scheduler-0\" (UID: \"fd0b3d98-9eda-4a87-8540-4a15ec2c174d\") " pod="openstack/cinder-scheduler-0" Dec 10 15:44:52 crc kubenswrapper[4755]: I1210 15:44:52.039821 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0051924b-bff8-4934-92b8-f787e29c758e-config-data\") pod \"cloudkitty-api-0\" (UID: \"0051924b-bff8-4934-92b8-f787e29c758e\") " pod="openstack/cloudkitty-api-0" Dec 10 15:44:52 crc kubenswrapper[4755]: I1210 15:44:52.039852 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0051924b-bff8-4934-92b8-f787e29c758e-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"0051924b-bff8-4934-92b8-f787e29c758e\") " pod="openstack/cloudkitty-api-0" Dec 10 15:44:52 crc kubenswrapper[4755]: I1210 15:44:52.039881 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/0051924b-bff8-4934-92b8-f787e29c758e-scripts\") pod \"cloudkitty-api-0\" (UID: \"0051924b-bff8-4934-92b8-f787e29c758e\") " pod="openstack/cloudkitty-api-0" Dec 10 15:44:52 crc kubenswrapper[4755]: I1210 15:44:52.039912 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhf9p\" (UniqueName: \"kubernetes.io/projected/0051924b-bff8-4934-92b8-f787e29c758e-kube-api-access-rhf9p\") pod \"cloudkitty-api-0\" (UID: \"0051924b-bff8-4934-92b8-f787e29c758e\") " pod="openstack/cloudkitty-api-0" Dec 10 15:44:52 crc kubenswrapper[4755]: I1210 15:44:52.039941 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r89gg\" (UniqueName: \"kubernetes.io/projected/fd0b3d98-9eda-4a87-8540-4a15ec2c174d-kube-api-access-r89gg\") pod \"cinder-scheduler-0\" (UID: \"fd0b3d98-9eda-4a87-8540-4a15ec2c174d\") " pod="openstack/cinder-scheduler-0" Dec 10 15:44:52 crc kubenswrapper[4755]: I1210 15:44:52.039967 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0051924b-bff8-4934-92b8-f787e29c758e-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"0051924b-bff8-4934-92b8-f787e29c758e\") " pod="openstack/cloudkitty-api-0" Dec 10 15:44:52 crc kubenswrapper[4755]: I1210 15:44:52.039992 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd0b3d98-9eda-4a87-8540-4a15ec2c174d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"fd0b3d98-9eda-4a87-8540-4a15ec2c174d\") " pod="openstack/cinder-scheduler-0" Dec 10 15:44:52 crc kubenswrapper[4755]: I1210 15:44:52.040019 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0051924b-bff8-4934-92b8-f787e29c758e-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"0051924b-bff8-4934-92b8-f787e29c758e\") " pod="openstack/cloudkitty-api-0" Dec 10 15:44:52 crc kubenswrapper[4755]: I1210 15:44:52.040062 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0051924b-bff8-4934-92b8-f787e29c758e-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"0051924b-bff8-4934-92b8-f787e29c758e\") " pod="openstack/cloudkitty-api-0" Dec 10 15:44:52 crc kubenswrapper[4755]: I1210 15:44:52.040143 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fd0b3d98-9eda-4a87-8540-4a15ec2c174d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"fd0b3d98-9eda-4a87-8540-4a15ec2c174d\") " pod="openstack/cinder-scheduler-0" Dec 10 15:44:52 crc kubenswrapper[4755]: I1210 15:44:52.040166 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0051924b-bff8-4934-92b8-f787e29c758e-logs\") pod \"cloudkitty-api-0\" (UID: \"0051924b-bff8-4934-92b8-f787e29c758e\") " pod="openstack/cloudkitty-api-0" Dec 10 15:44:52 crc kubenswrapper[4755]: I1210 15:44:52.040209 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/fd0b3d98-9eda-4a87-8540-4a15ec2c174d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"fd0b3d98-9eda-4a87-8540-4a15ec2c174d\") " pod="openstack/cinder-scheduler-0" Dec 10 15:44:52 crc kubenswrapper[4755]: I1210 15:44:52.040239 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd0b3d98-9eda-4a87-8540-4a15ec2c174d-config-data\") pod \"cinder-scheduler-0\" (UID: \"fd0b3d98-9eda-4a87-8540-4a15ec2c174d\") " pod="openstack/cinder-scheduler-0" Dec 10 15:44:52 crc kubenswrapper[4755]: I1210 15:44:52.040286 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/0051924b-bff8-4934-92b8-f787e29c758e-certs\") pod \"cloudkitty-api-0\" (UID: \"0051924b-bff8-4934-92b8-f787e29c758e\") " pod="openstack/cloudkitty-api-0" Dec 10 15:44:52 crc kubenswrapper[4755]: I1210 15:44:52.142201 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fd0b3d98-9eda-4a87-8540-4a15ec2c174d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"fd0b3d98-9eda-4a87-8540-4a15ec2c174d\") " pod="openstack/cinder-scheduler-0" Dec 10 15:44:52 crc kubenswrapper[4755]: I1210 15:44:52.142242 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0051924b-bff8-4934-92b8-f787e29c758e-logs\") pod \"cloudkitty-api-0\" (UID: \"0051924b-bff8-4934-92b8-f787e29c758e\") " pod="openstack/cloudkitty-api-0" Dec 10 15:44:52 crc kubenswrapper[4755]: I1210 15:44:52.142289 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fd0b3d98-9eda-4a87-8540-4a15ec2c174d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"fd0b3d98-9eda-4a87-8540-4a15ec2c174d\") " pod="openstack/cinder-scheduler-0" Dec 10 15:44:52 crc kubenswrapper[4755]: I1210 15:44:52.142311 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd0b3d98-9eda-4a87-8540-4a15ec2c174d-config-data\") pod \"cinder-scheduler-0\" (UID: \"fd0b3d98-9eda-4a87-8540-4a15ec2c174d\") " pod="openstack/cinder-scheduler-0" Dec 10 15:44:52 crc kubenswrapper[4755]: I1210 15:44:52.142363 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/0051924b-bff8-4934-92b8-f787e29c758e-certs\") pod \"cloudkitty-api-0\" (UID: \"0051924b-bff8-4934-92b8-f787e29c758e\") " pod="openstack/cloudkitty-api-0" Dec 10 15:44:52 crc kubenswrapper[4755]: I1210 15:44:52.142444 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd0b3d98-9eda-4a87-8540-4a15ec2c174d-scripts\") pod \"cinder-scheduler-0\" (UID: \"fd0b3d98-9eda-4a87-8540-4a15ec2c174d\") " pod="openstack/cinder-scheduler-0" Dec 10 15:44:52 crc kubenswrapper[4755]: I1210 15:44:52.142517 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0051924b-bff8-4934-92b8-f787e29c758e-config-data\") pod \"cloudkitty-api-0\" (UID: \"0051924b-bff8-4934-92b8-f787e29c758e\") " pod="openstack/cloudkitty-api-0" Dec 10 15:44:52 crc kubenswrapper[4755]: I1210 15:44:52.142539 4755 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0051924b-bff8-4934-92b8-f787e29c758e-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"0051924b-bff8-4934-92b8-f787e29c758e\") " pod="openstack/cloudkitty-api-0" Dec 10 15:44:52 crc kubenswrapper[4755]: I1210 15:44:52.142575 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0051924b-bff8-4934-92b8-f787e29c758e-scripts\") pod \"cloudkitty-api-0\" (UID: \"0051924b-bff8-4934-92b8-f787e29c758e\") " pod="openstack/cloudkitty-api-0" Dec 10 15:44:52 crc kubenswrapper[4755]: I1210 15:44:52.142596 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhf9p\" (UniqueName: \"kubernetes.io/projected/0051924b-bff8-4934-92b8-f787e29c758e-kube-api-access-rhf9p\") pod \"cloudkitty-api-0\" (UID: \"0051924b-bff8-4934-92b8-f787e29c758e\") " pod="openstack/cloudkitty-api-0" Dec 10 15:44:52 crc kubenswrapper[4755]: I1210 15:44:52.142613 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r89gg\" (UniqueName: \"kubernetes.io/projected/fd0b3d98-9eda-4a87-8540-4a15ec2c174d-kube-api-access-r89gg\") pod \"cinder-scheduler-0\" (UID: \"fd0b3d98-9eda-4a87-8540-4a15ec2c174d\") " pod="openstack/cinder-scheduler-0" Dec 10 15:44:52 crc kubenswrapper[4755]: I1210 15:44:52.142650 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0051924b-bff8-4934-92b8-f787e29c758e-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"0051924b-bff8-4934-92b8-f787e29c758e\") " pod="openstack/cloudkitty-api-0" Dec 10 15:44:52 crc kubenswrapper[4755]: I1210 15:44:52.142667 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd0b3d98-9eda-4a87-8540-4a15ec2c174d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"fd0b3d98-9eda-4a87-8540-4a15ec2c174d\") " pod="openstack/cinder-scheduler-0" Dec 10 15:44:52 crc kubenswrapper[4755]: I1210 15:44:52.142686 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0051924b-bff8-4934-92b8-f787e29c758e-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"0051924b-bff8-4934-92b8-f787e29c758e\") " pod="openstack/cloudkitty-api-0" Dec 10 15:44:52 crc kubenswrapper[4755]: I1210 15:44:52.142727 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0051924b-bff8-4934-92b8-f787e29c758e-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"0051924b-bff8-4934-92b8-f787e29c758e\") " pod="openstack/cloudkitty-api-0" Dec 10 15:44:52 crc kubenswrapper[4755]: I1210 15:44:52.143425 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fd0b3d98-9eda-4a87-8540-4a15ec2c174d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"fd0b3d98-9eda-4a87-8540-4a15ec2c174d\") " pod="openstack/cinder-scheduler-0" Dec 10 15:44:52 crc kubenswrapper[4755]: I1210 15:44:52.143709 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0051924b-bff8-4934-92b8-f787e29c758e-logs\") pod \"cloudkitty-api-0\" (UID: \"0051924b-bff8-4934-92b8-f787e29c758e\") " 
pod="openstack/cloudkitty-api-0" Dec 10 15:44:52 crc kubenswrapper[4755]: I1210 15:44:52.151139 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0051924b-bff8-4934-92b8-f787e29c758e-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"0051924b-bff8-4934-92b8-f787e29c758e\") " pod="openstack/cloudkitty-api-0" Dec 10 15:44:52 crc kubenswrapper[4755]: I1210 15:44:52.151561 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0051924b-bff8-4934-92b8-f787e29c758e-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"0051924b-bff8-4934-92b8-f787e29c758e\") " pod="openstack/cloudkitty-api-0" Dec 10 15:44:52 crc kubenswrapper[4755]: I1210 15:44:52.157193 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0051924b-bff8-4934-92b8-f787e29c758e-config-data\") pod \"cloudkitty-api-0\" (UID: \"0051924b-bff8-4934-92b8-f787e29c758e\") " pod="openstack/cloudkitty-api-0" Dec 10 15:44:52 crc kubenswrapper[4755]: I1210 15:44:52.157910 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fd0b3d98-9eda-4a87-8540-4a15ec2c174d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"fd0b3d98-9eda-4a87-8540-4a15ec2c174d\") " pod="openstack/cinder-scheduler-0" Dec 10 15:44:52 crc kubenswrapper[4755]: I1210 15:44:52.169323 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/0051924b-bff8-4934-92b8-f787e29c758e-certs\") pod \"cloudkitty-api-0\" (UID: \"0051924b-bff8-4934-92b8-f787e29c758e\") " pod="openstack/cloudkitty-api-0" Dec 10 15:44:52 crc kubenswrapper[4755]: I1210 15:44:52.169351 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd0b3d98-9eda-4a87-8540-4a15ec2c174d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"fd0b3d98-9eda-4a87-8540-4a15ec2c174d\") " pod="openstack/cinder-scheduler-0" Dec 10 15:44:52 crc kubenswrapper[4755]: I1210 15:44:52.171999 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0051924b-bff8-4934-92b8-f787e29c758e-scripts\") pod \"cloudkitty-api-0\" (UID: \"0051924b-bff8-4934-92b8-f787e29c758e\") " pod="openstack/cloudkitty-api-0" Dec 10 15:44:52 crc kubenswrapper[4755]: I1210 15:44:52.172362 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0051924b-bff8-4934-92b8-f787e29c758e-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"0051924b-bff8-4934-92b8-f787e29c758e\") " pod="openstack/cloudkitty-api-0" Dec 10 15:44:52 crc kubenswrapper[4755]: I1210 15:44:52.172370 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd0b3d98-9eda-4a87-8540-4a15ec2c174d-config-data\") pod \"cinder-scheduler-0\" (UID: \"fd0b3d98-9eda-4a87-8540-4a15ec2c174d\") " pod="openstack/cinder-scheduler-0" Dec 10 15:44:52 crc kubenswrapper[4755]: I1210 15:44:52.178913 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0051924b-bff8-4934-92b8-f787e29c758e-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"0051924b-bff8-4934-92b8-f787e29c758e\") " 
pod="openstack/cloudkitty-api-0" Dec 10 15:44:52 crc kubenswrapper[4755]: I1210 15:44:52.181105 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd0b3d98-9eda-4a87-8540-4a15ec2c174d-scripts\") pod \"cinder-scheduler-0\" (UID: \"fd0b3d98-9eda-4a87-8540-4a15ec2c174d\") " pod="openstack/cinder-scheduler-0" Dec 10 15:44:52 crc kubenswrapper[4755]: I1210 15:44:52.187197 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhf9p\" (UniqueName: \"kubernetes.io/projected/0051924b-bff8-4934-92b8-f787e29c758e-kube-api-access-rhf9p\") pod \"cloudkitty-api-0\" (UID: \"0051924b-bff8-4934-92b8-f787e29c758e\") " pod="openstack/cloudkitty-api-0" Dec 10 15:44:52 crc kubenswrapper[4755]: I1210 15:44:52.194981 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r89gg\" (UniqueName: \"kubernetes.io/projected/fd0b3d98-9eda-4a87-8540-4a15ec2c174d-kube-api-access-r89gg\") pod \"cinder-scheduler-0\" (UID: \"fd0b3d98-9eda-4a87-8540-4a15ec2c174d\") " pod="openstack/cinder-scheduler-0" Dec 10 15:44:52 crc kubenswrapper[4755]: I1210 15:44:52.259395 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 10 15:44:52 crc kubenswrapper[4755]: I1210 15:44:52.266581 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0" Dec 10 15:44:52 crc kubenswrapper[4755]: I1210 15:44:52.525843 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-proc-0" podUID="d5843012-5395-4dac-9506-b2e080cdc229" containerName="cloudkitty-proc" containerID="cri-o://4923bc3b4e1ca7e189ac6c5d0cf80cc1de3c2a26b6e033b555fdd137c7170f7a" gracePeriod=30 Dec 10 15:44:53 crc kubenswrapper[4755]: I1210 15:44:53.169814 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 10 15:44:53 crc kubenswrapper[4755]: I1210 15:44:53.542679 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"fd0b3d98-9eda-4a87-8540-4a15ec2c174d","Type":"ContainerStarted","Data":"552cd3b894d1e2ccd5e783055702a0f52dc7937e4ef6b92139cd49ed402e0aa2"} Dec 10 15:44:53 crc kubenswrapper[4755]: I1210 15:44:53.625428 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"] Dec 10 15:44:53 crc kubenswrapper[4755]: I1210 15:44:53.780892 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5676daa1-d125-4278-9766-c9fa314e5d77" path="/var/lib/kubelet/pods/5676daa1-d125-4278-9766-c9fa314e5d77/volumes" Dec 10 15:44:53 crc kubenswrapper[4755]: I1210 15:44:53.781964 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="976d7784-c4ed-4590-a34c-a3f79aedf471" path="/var/lib/kubelet/pods/976d7784-c4ed-4590-a34c-a3f79aedf471/volumes" Dec 10 15:44:54 crc kubenswrapper[4755]: I1210 15:44:54.554152 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"0051924b-bff8-4934-92b8-f787e29c758e","Type":"ContainerStarted","Data":"59619a06643efabc2935abba8da3f7016a1f4dd3c72371fc5ac552269ec194d5"} Dec 10 15:44:54 crc kubenswrapper[4755]: I1210 15:44:54.554525 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-api-0" Dec 10 15:44:54 crc kubenswrapper[4755]: I1210 15:44:54.554542 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" 
event={"ID":"0051924b-bff8-4934-92b8-f787e29c758e","Type":"ContainerStarted","Data":"4103e4b2f069b4f2bc9a6575d879f8de044a07ef8738ca637803f0513c1899e7"} Dec 10 15:44:54 crc kubenswrapper[4755]: I1210 15:44:54.554555 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"0051924b-bff8-4934-92b8-f787e29c758e","Type":"ContainerStarted","Data":"8dfeb9217f0a851602cc274352c84bf665c5881bd77398b207056d60566b0ac8"} Dec 10 15:44:54 crc kubenswrapper[4755]: I1210 15:44:54.558625 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"fd0b3d98-9eda-4a87-8540-4a15ec2c174d","Type":"ContainerStarted","Data":"1ef137e51a5c4d036ec7a8d76be4d865f157d62bdc428a1aa40e9802873b5ee4"} Dec 10 15:44:54 crc kubenswrapper[4755]: I1210 15:44:54.589211 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-api-0" podStartSLOduration=3.589194452 podStartE2EDuration="3.589194452s" podCreationTimestamp="2025-12-10 15:44:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:44:54.584169106 +0000 UTC m=+1291.185052738" watchObservedRunningTime="2025-12-10 15:44:54.589194452 +0000 UTC m=+1291.190078084" Dec 10 15:44:54 crc kubenswrapper[4755]: I1210 15:44:54.832759 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-58bd69657f-n5hkt" Dec 10 15:44:54 crc kubenswrapper[4755]: I1210 15:44:54.910034 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-mff22"] Dec 10 15:44:54 crc kubenswrapper[4755]: I1210 15:44:54.911134 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-848cf88cfc-mff22" podUID="16a3f983-2b37-4dfc-944d-c959d5824b69" containerName="dnsmasq-dns" containerID="cri-o://6ee145ba3a7a44476a716e8d6e06467a204b1431c0f3807d9e82d3b83eb1b5e5" gracePeriod=10 Dec 10 15:44:55 crc kubenswrapper[4755]: I1210 15:44:55.662073 4755 generic.go:334] "Generic (PLEG): container finished" podID="16a3f983-2b37-4dfc-944d-c959d5824b69" containerID="6ee145ba3a7a44476a716e8d6e06467a204b1431c0f3807d9e82d3b83eb1b5e5" exitCode=0 Dec 10 15:44:55 crc kubenswrapper[4755]: I1210 15:44:55.662480 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-mff22" event={"ID":"16a3f983-2b37-4dfc-944d-c959d5824b69","Type":"ContainerDied","Data":"6ee145ba3a7a44476a716e8d6e06467a204b1431c0f3807d9e82d3b83eb1b5e5"} Dec 10 15:44:55 crc kubenswrapper[4755]: I1210 15:44:55.662512 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-mff22" event={"ID":"16a3f983-2b37-4dfc-944d-c959d5824b69","Type":"ContainerDied","Data":"d5aa303176b20a2ffe0de18e62f812a232e0f2e1ddce37044d6564e75a9449ca"} Dec 10 15:44:55 crc kubenswrapper[4755]: I1210 15:44:55.662525 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5aa303176b20a2ffe0de18e62f812a232e0f2e1ddce37044d6564e75a9449ca" Dec 10 15:44:55 crc kubenswrapper[4755]: I1210 15:44:55.685960 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"fd0b3d98-9eda-4a87-8540-4a15ec2c174d","Type":"ContainerStarted","Data":"cd49f2baadb616f1353d69a6130b8e6e3d90ac0d5bc790988fdf49280d33ff13"} Dec 10 15:44:55 crc kubenswrapper[4755]: I1210 15:44:55.686191 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-mff22" Dec 10 15:44:55 crc kubenswrapper[4755]: I1210 15:44:55.731325 4755 generic.go:334] "Generic (PLEG): container finished" podID="d5843012-5395-4dac-9506-b2e080cdc229" containerID="4923bc3b4e1ca7e189ac6c5d0cf80cc1de3c2a26b6e033b555fdd137c7170f7a" exitCode=0 Dec 10 15:44:55 crc kubenswrapper[4755]: I1210 15:44:55.731416 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"d5843012-5395-4dac-9506-b2e080cdc229","Type":"ContainerDied","Data":"4923bc3b4e1ca7e189ac6c5d0cf80cc1de3c2a26b6e033b555fdd137c7170f7a"} Dec 10 15:44:55 crc kubenswrapper[4755]: I1210 15:44:55.764570 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.764549016 podStartE2EDuration="4.764549016s" podCreationTimestamp="2025-12-10 15:44:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:44:55.75215762 +0000 UTC m=+1292.353041252" watchObservedRunningTime="2025-12-10 15:44:55.764549016 +0000 UTC m=+1292.365432648" Dec 10 15:44:55 crc kubenswrapper[4755]: I1210 15:44:55.811749 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/16a3f983-2b37-4dfc-944d-c959d5824b69-ovsdbserver-sb\") pod \"16a3f983-2b37-4dfc-944d-c959d5824b69\" (UID: \"16a3f983-2b37-4dfc-944d-c959d5824b69\") " Dec 10 15:44:55 crc kubenswrapper[4755]: I1210 15:44:55.811806 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpg72\" (UniqueName: \"kubernetes.io/projected/16a3f983-2b37-4dfc-944d-c959d5824b69-kube-api-access-bpg72\") pod \"16a3f983-2b37-4dfc-944d-c959d5824b69\" (UID: \"16a3f983-2b37-4dfc-944d-c959d5824b69\") " Dec 10 15:44:55 crc kubenswrapper[4755]: I1210 15:44:55.811869 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16a3f983-2b37-4dfc-944d-c959d5824b69-dns-svc\") pod \"16a3f983-2b37-4dfc-944d-c959d5824b69\" (UID: \"16a3f983-2b37-4dfc-944d-c959d5824b69\") " Dec 10 15:44:55 crc kubenswrapper[4755]: I1210 15:44:55.811938 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/16a3f983-2b37-4dfc-944d-c959d5824b69-dns-swift-storage-0\") pod \"16a3f983-2b37-4dfc-944d-c959d5824b69\" (UID: \"16a3f983-2b37-4dfc-944d-c959d5824b69\") " Dec 10 15:44:55 crc kubenswrapper[4755]: I1210 15:44:55.811980 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/16a3f983-2b37-4dfc-944d-c959d5824b69-ovsdbserver-nb\") pod \"16a3f983-2b37-4dfc-944d-c959d5824b69\" (UID: \"16a3f983-2b37-4dfc-944d-c959d5824b69\") " Dec 10 15:44:55 crc kubenswrapper[4755]: I1210 15:44:55.812145 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16a3f983-2b37-4dfc-944d-c959d5824b69-config\") pod \"16a3f983-2b37-4dfc-944d-c959d5824b69\" (UID: \"16a3f983-2b37-4dfc-944d-c959d5824b69\") " Dec 10 15:44:55 crc kubenswrapper[4755]: I1210 15:44:55.844454 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16a3f983-2b37-4dfc-944d-c959d5824b69-kube-api-access-bpg72" (OuterVolumeSpecName: 
"kube-api-access-bpg72") pod "16a3f983-2b37-4dfc-944d-c959d5824b69" (UID: "16a3f983-2b37-4dfc-944d-c959d5824b69"). InnerVolumeSpecName "kube-api-access-bpg72". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:44:55 crc kubenswrapper[4755]: I1210 15:44:55.919196 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bpg72\" (UniqueName: \"kubernetes.io/projected/16a3f983-2b37-4dfc-944d-c959d5824b69-kube-api-access-bpg72\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:55 crc kubenswrapper[4755]: I1210 15:44:55.937312 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16a3f983-2b37-4dfc-944d-c959d5824b69-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "16a3f983-2b37-4dfc-944d-c959d5824b69" (UID: "16a3f983-2b37-4dfc-944d-c959d5824b69"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:44:55 crc kubenswrapper[4755]: I1210 15:44:55.937824 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16a3f983-2b37-4dfc-944d-c959d5824b69-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "16a3f983-2b37-4dfc-944d-c959d5824b69" (UID: "16a3f983-2b37-4dfc-944d-c959d5824b69"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:44:55 crc kubenswrapper[4755]: I1210 15:44:55.957647 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16a3f983-2b37-4dfc-944d-c959d5824b69-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "16a3f983-2b37-4dfc-944d-c959d5824b69" (UID: "16a3f983-2b37-4dfc-944d-c959d5824b69"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:44:56 crc kubenswrapper[4755]: I1210 15:44:56.020750 4755 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/16a3f983-2b37-4dfc-944d-c959d5824b69-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:56 crc kubenswrapper[4755]: I1210 15:44:56.020787 4755 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16a3f983-2b37-4dfc-944d-c959d5824b69-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:56 crc kubenswrapper[4755]: I1210 15:44:56.020796 4755 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/16a3f983-2b37-4dfc-944d-c959d5824b69-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:56 crc kubenswrapper[4755]: I1210 15:44:56.024404 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16a3f983-2b37-4dfc-944d-c959d5824b69-config" (OuterVolumeSpecName: "config") pod "16a3f983-2b37-4dfc-944d-c959d5824b69" (UID: "16a3f983-2b37-4dfc-944d-c959d5824b69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:44:56 crc kubenswrapper[4755]: I1210 15:44:56.025877 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16a3f983-2b37-4dfc-944d-c959d5824b69-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "16a3f983-2b37-4dfc-944d-c959d5824b69" (UID: "16a3f983-2b37-4dfc-944d-c959d5824b69"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:44:56 crc kubenswrapper[4755]: I1210 15:44:56.124009 4755 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16a3f983-2b37-4dfc-944d-c959d5824b69-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:56 crc kubenswrapper[4755]: I1210 15:44:56.124051 4755 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/16a3f983-2b37-4dfc-944d-c959d5824b69-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:56 crc kubenswrapper[4755]: I1210 15:44:56.336560 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0" Dec 10 15:44:56 crc kubenswrapper[4755]: I1210 15:44:56.429878 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5843012-5395-4dac-9506-b2e080cdc229-combined-ca-bundle\") pod \"d5843012-5395-4dac-9506-b2e080cdc229\" (UID: \"d5843012-5395-4dac-9506-b2e080cdc229\") " Dec 10 15:44:56 crc kubenswrapper[4755]: I1210 15:44:56.429958 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5843012-5395-4dac-9506-b2e080cdc229-scripts\") pod \"d5843012-5395-4dac-9506-b2e080cdc229\" (UID: \"d5843012-5395-4dac-9506-b2e080cdc229\") " Dec 10 15:44:56 crc kubenswrapper[4755]: I1210 15:44:56.430054 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d5843012-5395-4dac-9506-b2e080cdc229-config-data-custom\") pod \"d5843012-5395-4dac-9506-b2e080cdc229\" (UID: \"d5843012-5395-4dac-9506-b2e080cdc229\") " Dec 10 15:44:56 crc kubenswrapper[4755]: I1210 15:44:56.430088 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9ts5\" (UniqueName: \"kubernetes.io/projected/d5843012-5395-4dac-9506-b2e080cdc229-kube-api-access-k9ts5\") pod \"d5843012-5395-4dac-9506-b2e080cdc229\" (UID: \"d5843012-5395-4dac-9506-b2e080cdc229\") " Dec 10 15:44:56 crc kubenswrapper[4755]: I1210 15:44:56.430110 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5843012-5395-4dac-9506-b2e080cdc229-config-data\") pod \"d5843012-5395-4dac-9506-b2e080cdc229\" (UID: \"d5843012-5395-4dac-9506-b2e080cdc229\") " Dec 10 15:44:56 crc kubenswrapper[4755]: I1210 15:44:56.430213 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/d5843012-5395-4dac-9506-b2e080cdc229-certs\") pod \"d5843012-5395-4dac-9506-b2e080cdc229\" (UID: \"d5843012-5395-4dac-9506-b2e080cdc229\") " Dec 10 15:44:56 crc kubenswrapper[4755]: I1210 15:44:56.439696 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5843012-5395-4dac-9506-b2e080cdc229-kube-api-access-k9ts5" (OuterVolumeSpecName: "kube-api-access-k9ts5") pod "d5843012-5395-4dac-9506-b2e080cdc229" (UID: "d5843012-5395-4dac-9506-b2e080cdc229"). InnerVolumeSpecName "kube-api-access-k9ts5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:44:56 crc kubenswrapper[4755]: I1210 15:44:56.439750 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5843012-5395-4dac-9506-b2e080cdc229-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d5843012-5395-4dac-9506-b2e080cdc229" (UID: "d5843012-5395-4dac-9506-b2e080cdc229"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:44:56 crc kubenswrapper[4755]: I1210 15:44:56.439814 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5843012-5395-4dac-9506-b2e080cdc229-certs" (OuterVolumeSpecName: "certs") pod "d5843012-5395-4dac-9506-b2e080cdc229" (UID: "d5843012-5395-4dac-9506-b2e080cdc229"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:44:56 crc kubenswrapper[4755]: I1210 15:44:56.442632 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5843012-5395-4dac-9506-b2e080cdc229-scripts" (OuterVolumeSpecName: "scripts") pod "d5843012-5395-4dac-9506-b2e080cdc229" (UID: "d5843012-5395-4dac-9506-b2e080cdc229"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:44:56 crc kubenswrapper[4755]: I1210 15:44:56.465138 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5843012-5395-4dac-9506-b2e080cdc229-config-data" (OuterVolumeSpecName: "config-data") pod "d5843012-5395-4dac-9506-b2e080cdc229" (UID: "d5843012-5395-4dac-9506-b2e080cdc229"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:44:56 crc kubenswrapper[4755]: I1210 15:44:56.471803 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5843012-5395-4dac-9506-b2e080cdc229-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d5843012-5395-4dac-9506-b2e080cdc229" (UID: "d5843012-5395-4dac-9506-b2e080cdc229"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:44:56 crc kubenswrapper[4755]: I1210 15:44:56.533093 4755 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5843012-5395-4dac-9506-b2e080cdc229-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:56 crc kubenswrapper[4755]: I1210 15:44:56.533143 4755 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5843012-5395-4dac-9506-b2e080cdc229-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:56 crc kubenswrapper[4755]: I1210 15:44:56.533155 4755 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d5843012-5395-4dac-9506-b2e080cdc229-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:56 crc kubenswrapper[4755]: I1210 15:44:56.533182 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9ts5\" (UniqueName: \"kubernetes.io/projected/d5843012-5395-4dac-9506-b2e080cdc229-kube-api-access-k9ts5\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:56 crc kubenswrapper[4755]: I1210 15:44:56.533195 4755 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5843012-5395-4dac-9506-b2e080cdc229-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:56 crc kubenswrapper[4755]: I1210 15:44:56.533206 4755 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/d5843012-5395-4dac-9506-b2e080cdc229-certs\") on node \"crc\" DevicePath \"\"" Dec 10 15:44:56 crc kubenswrapper[4755]: I1210 15:44:56.744053 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-mff22" Dec 10 15:44:56 crc kubenswrapper[4755]: I1210 15:44:56.745078 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-proc-0" Dec 10 15:44:56 crc kubenswrapper[4755]: I1210 15:44:56.745949 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"d5843012-5395-4dac-9506-b2e080cdc229","Type":"ContainerDied","Data":"6f2db4715219749af04d614269699ea1dd1d7acfffcf2650477d497df92b77cf"} Dec 10 15:44:56 crc kubenswrapper[4755]: I1210 15:44:56.746017 4755 scope.go:117] "RemoveContainer" containerID="4923bc3b4e1ca7e189ac6c5d0cf80cc1de3c2a26b6e033b555fdd137c7170f7a" Dec 10 15:44:56 crc kubenswrapper[4755]: I1210 15:44:56.822910 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-proc-0"] Dec 10 15:44:56 crc kubenswrapper[4755]: I1210 15:44:56.839530 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-proc-0"] Dec 10 15:44:56 crc kubenswrapper[4755]: I1210 15:44:56.851798 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-mff22"] Dec 10 15:44:56 crc kubenswrapper[4755]: I1210 15:44:56.887423 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-mff22"] Dec 10 15:44:56 crc kubenswrapper[4755]: I1210 15:44:56.900547 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-proc-0"] Dec 10 15:44:56 crc kubenswrapper[4755]: E1210 15:44:56.901030 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16a3f983-2b37-4dfc-944d-c959d5824b69" containerName="init" Dec 10 15:44:56 crc kubenswrapper[4755]: I1210 15:44:56.901058 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="16a3f983-2b37-4dfc-944d-c959d5824b69" containerName="init" Dec 10 15:44:56 crc kubenswrapper[4755]: E1210 15:44:56.901091 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16a3f983-2b37-4dfc-944d-c959d5824b69" containerName="dnsmasq-dns" Dec 10 15:44:56 crc kubenswrapper[4755]: I1210 15:44:56.901100 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="16a3f983-2b37-4dfc-944d-c959d5824b69" containerName="dnsmasq-dns" Dec 10 15:44:56 crc kubenswrapper[4755]: E1210 15:44:56.901119 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5843012-5395-4dac-9506-b2e080cdc229" containerName="cloudkitty-proc" Dec 10 15:44:56 crc kubenswrapper[4755]: I1210 15:44:56.901126 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5843012-5395-4dac-9506-b2e080cdc229" containerName="cloudkitty-proc" Dec 10 15:44:56 crc kubenswrapper[4755]: I1210 15:44:56.901351 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="16a3f983-2b37-4dfc-944d-c959d5824b69" containerName="dnsmasq-dns" Dec 10 15:44:56 crc kubenswrapper[4755]: I1210 15:44:56.901375 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5843012-5395-4dac-9506-b2e080cdc229" containerName="cloudkitty-proc" Dec 10 15:44:56 crc kubenswrapper[4755]: I1210 15:44:56.902813 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-proc-0" Dec 10 15:44:56 crc kubenswrapper[4755]: I1210 15:44:56.905461 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-proc-config-data" Dec 10 15:44:56 crc kubenswrapper[4755]: I1210 15:44:56.930533 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"] Dec 10 15:44:56 crc kubenswrapper[4755]: I1210 15:44:56.995032 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 10 15:44:56 crc kubenswrapper[4755]: I1210 15:44:56.995353 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="05ad143b-bb62-4f04-94da-b4473be95da2" containerName="ceilometer-central-agent" containerID="cri-o://1317119dfc28922ac22eb17eeb3e7b438ff12a3a3dcf1a08f48d30a64c9de0b5" gracePeriod=30 Dec 10 15:44:56 crc kubenswrapper[4755]: I1210 15:44:56.996587 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="05ad143b-bb62-4f04-94da-b4473be95da2" containerName="sg-core" containerID="cri-o://6755b231d887b112762de4edbe10406e7553667e317f41eb6232b913bb47796b" gracePeriod=30 Dec 10 15:44:56 crc kubenswrapper[4755]: I1210 15:44:56.996742 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="05ad143b-bb62-4f04-94da-b4473be95da2" containerName="proxy-httpd" containerID="cri-o://f6915ec4e6022634899246154151fac515dd017f2f725a58c7c31fd0f5c66d3d" gracePeriod=30 Dec 10 15:44:56 crc kubenswrapper[4755]: I1210 15:44:56.996798 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="05ad143b-bb62-4f04-94da-b4473be95da2" containerName="ceilometer-notification-agent" containerID="cri-o://8ec378a30c21ccb7364ac5b12909974c507cb0542be73687d497ffd56696ec80" gracePeriod=30 Dec 10 15:44:57 crc kubenswrapper[4755]: I1210 15:44:57.003976 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 10 15:44:57 crc kubenswrapper[4755]: I1210 15:44:57.045870 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/83ca4bf3-4811-4418-af2b-0fdc5e299a00-certs\") pod \"cloudkitty-proc-0\" (UID: \"83ca4bf3-4811-4418-af2b-0fdc5e299a00\") " pod="openstack/cloudkitty-proc-0" Dec 10 15:44:57 crc kubenswrapper[4755]: I1210 15:44:57.045968 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83ca4bf3-4811-4418-af2b-0fdc5e299a00-config-data\") pod \"cloudkitty-proc-0\" (UID: \"83ca4bf3-4811-4418-af2b-0fdc5e299a00\") " pod="openstack/cloudkitty-proc-0" Dec 10 15:44:57 crc kubenswrapper[4755]: I1210 15:44:57.046012 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kn6bl\" (UniqueName: \"kubernetes.io/projected/83ca4bf3-4811-4418-af2b-0fdc5e299a00-kube-api-access-kn6bl\") pod \"cloudkitty-proc-0\" (UID: \"83ca4bf3-4811-4418-af2b-0fdc5e299a00\") " pod="openstack/cloudkitty-proc-0" Dec 10 15:44:57 crc kubenswrapper[4755]: I1210 15:44:57.046049 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83ca4bf3-4811-4418-af2b-0fdc5e299a00-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: 
\"83ca4bf3-4811-4418-af2b-0fdc5e299a00\") " pod="openstack/cloudkitty-proc-0" Dec 10 15:44:57 crc kubenswrapper[4755]: I1210 15:44:57.046182 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83ca4bf3-4811-4418-af2b-0fdc5e299a00-scripts\") pod \"cloudkitty-proc-0\" (UID: \"83ca4bf3-4811-4418-af2b-0fdc5e299a00\") " pod="openstack/cloudkitty-proc-0" Dec 10 15:44:57 crc kubenswrapper[4755]: I1210 15:44:57.046286 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/83ca4bf3-4811-4418-af2b-0fdc5e299a00-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"83ca4bf3-4811-4418-af2b-0fdc5e299a00\") " pod="openstack/cloudkitty-proc-0" Dec 10 15:44:57 crc kubenswrapper[4755]: I1210 15:44:57.148301 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/83ca4bf3-4811-4418-af2b-0fdc5e299a00-certs\") pod \"cloudkitty-proc-0\" (UID: \"83ca4bf3-4811-4418-af2b-0fdc5e299a00\") " pod="openstack/cloudkitty-proc-0" Dec 10 15:44:57 crc kubenswrapper[4755]: I1210 15:44:57.148419 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83ca4bf3-4811-4418-af2b-0fdc5e299a00-config-data\") pod \"cloudkitty-proc-0\" (UID: \"83ca4bf3-4811-4418-af2b-0fdc5e299a00\") " pod="openstack/cloudkitty-proc-0" Dec 10 15:44:57 crc kubenswrapper[4755]: I1210 15:44:57.148488 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kn6bl\" (UniqueName: \"kubernetes.io/projected/83ca4bf3-4811-4418-af2b-0fdc5e299a00-kube-api-access-kn6bl\") pod \"cloudkitty-proc-0\" (UID: \"83ca4bf3-4811-4418-af2b-0fdc5e299a00\") " pod="openstack/cloudkitty-proc-0" Dec 10 15:44:57 crc kubenswrapper[4755]: I1210 15:44:57.148527 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83ca4bf3-4811-4418-af2b-0fdc5e299a00-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"83ca4bf3-4811-4418-af2b-0fdc5e299a00\") " pod="openstack/cloudkitty-proc-0" Dec 10 15:44:57 crc kubenswrapper[4755]: I1210 15:44:57.148564 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83ca4bf3-4811-4418-af2b-0fdc5e299a00-scripts\") pod \"cloudkitty-proc-0\" (UID: \"83ca4bf3-4811-4418-af2b-0fdc5e299a00\") " pod="openstack/cloudkitty-proc-0" Dec 10 15:44:57 crc kubenswrapper[4755]: I1210 15:44:57.148609 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/83ca4bf3-4811-4418-af2b-0fdc5e299a00-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"83ca4bf3-4811-4418-af2b-0fdc5e299a00\") " pod="openstack/cloudkitty-proc-0" Dec 10 15:44:57 crc kubenswrapper[4755]: I1210 15:44:57.155578 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83ca4bf3-4811-4418-af2b-0fdc5e299a00-scripts\") pod \"cloudkitty-proc-0\" (UID: \"83ca4bf3-4811-4418-af2b-0fdc5e299a00\") " pod="openstack/cloudkitty-proc-0" Dec 10 15:44:57 crc kubenswrapper[4755]: I1210 15:44:57.157571 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/83ca4bf3-4811-4418-af2b-0fdc5e299a00-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"83ca4bf3-4811-4418-af2b-0fdc5e299a00\") " pod="openstack/cloudkitty-proc-0" Dec 10 15:44:57 crc kubenswrapper[4755]: I1210 15:44:57.157993 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/83ca4bf3-4811-4418-af2b-0fdc5e299a00-certs\") pod \"cloudkitty-proc-0\" (UID: \"83ca4bf3-4811-4418-af2b-0fdc5e299a00\") " pod="openstack/cloudkitty-proc-0" Dec 10 15:44:57 crc kubenswrapper[4755]: I1210 15:44:57.158065 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83ca4bf3-4811-4418-af2b-0fdc5e299a00-config-data\") pod \"cloudkitty-proc-0\" (UID: \"83ca4bf3-4811-4418-af2b-0fdc5e299a00\") " pod="openstack/cloudkitty-proc-0" Dec 10 15:44:57 crc kubenswrapper[4755]: I1210 15:44:57.158618 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/83ca4bf3-4811-4418-af2b-0fdc5e299a00-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"83ca4bf3-4811-4418-af2b-0fdc5e299a00\") " pod="openstack/cloudkitty-proc-0" Dec 10 15:44:57 crc kubenswrapper[4755]: I1210 15:44:57.168891 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kn6bl\" (UniqueName: \"kubernetes.io/projected/83ca4bf3-4811-4418-af2b-0fdc5e299a00-kube-api-access-kn6bl\") pod \"cloudkitty-proc-0\" (UID: \"83ca4bf3-4811-4418-af2b-0fdc5e299a00\") " pod="openstack/cloudkitty-proc-0" Dec 10 15:44:57 crc kubenswrapper[4755]: I1210 15:44:57.227280 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0" Dec 10 15:44:57 crc kubenswrapper[4755]: I1210 15:44:57.260169 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 10 15:44:57 crc kubenswrapper[4755]: I1210 15:44:57.607177 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-5b8c78b5dc-vl479"] Dec 10 15:44:57 crc kubenswrapper[4755]: I1210 15:44:57.609593 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-5b8c78b5dc-vl479" Dec 10 15:44:57 crc kubenswrapper[4755]: I1210 15:44:57.612406 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Dec 10 15:44:57 crc kubenswrapper[4755]: I1210 15:44:57.614094 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Dec 10 15:44:57 crc kubenswrapper[4755]: I1210 15:44:57.614331 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Dec 10 15:44:57 crc kubenswrapper[4755]: I1210 15:44:57.625675 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5b8c78b5dc-vl479"] Dec 10 15:44:57 crc kubenswrapper[4755]: I1210 15:44:57.769400 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/77a6e4fa-6291-41fc-a165-9fe6d6039810-run-httpd\") pod \"swift-proxy-5b8c78b5dc-vl479\" (UID: \"77a6e4fa-6291-41fc-a165-9fe6d6039810\") " pod="openstack/swift-proxy-5b8c78b5dc-vl479" Dec 10 15:44:57 crc kubenswrapper[4755]: I1210 15:44:57.770425 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/77a6e4fa-6291-41fc-a165-9fe6d6039810-etc-swift\") pod \"swift-proxy-5b8c78b5dc-vl479\" (UID: \"77a6e4fa-6291-41fc-a165-9fe6d6039810\") " pod="openstack/swift-proxy-5b8c78b5dc-vl479" Dec 10 15:44:57 crc kubenswrapper[4755]: I1210 15:44:57.770682 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdk5m\" (UniqueName: \"kubernetes.io/projected/77a6e4fa-6291-41fc-a165-9fe6d6039810-kube-api-access-kdk5m\") pod \"swift-proxy-5b8c78b5dc-vl479\" (UID: \"77a6e4fa-6291-41fc-a165-9fe6d6039810\") " pod="openstack/swift-proxy-5b8c78b5dc-vl479" Dec 10 15:44:57 crc kubenswrapper[4755]: I1210 15:44:57.770803 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/77a6e4fa-6291-41fc-a165-9fe6d6039810-log-httpd\") pod \"swift-proxy-5b8c78b5dc-vl479\" (UID: \"77a6e4fa-6291-41fc-a165-9fe6d6039810\") " pod="openstack/swift-proxy-5b8c78b5dc-vl479" Dec 10 15:44:57 crc kubenswrapper[4755]: I1210 15:44:57.770938 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77a6e4fa-6291-41fc-a165-9fe6d6039810-combined-ca-bundle\") pod \"swift-proxy-5b8c78b5dc-vl479\" (UID: \"77a6e4fa-6291-41fc-a165-9fe6d6039810\") " pod="openstack/swift-proxy-5b8c78b5dc-vl479" Dec 10 15:44:57 crc kubenswrapper[4755]: I1210 15:44:57.771132 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77a6e4fa-6291-41fc-a165-9fe6d6039810-config-data\") pod \"swift-proxy-5b8c78b5dc-vl479\" (UID: \"77a6e4fa-6291-41fc-a165-9fe6d6039810\") " pod="openstack/swift-proxy-5b8c78b5dc-vl479" Dec 10 15:44:57 crc kubenswrapper[4755]: I1210 15:44:57.771381 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/77a6e4fa-6291-41fc-a165-9fe6d6039810-public-tls-certs\") pod \"swift-proxy-5b8c78b5dc-vl479\" (UID: \"77a6e4fa-6291-41fc-a165-9fe6d6039810\") " 
pod="openstack/swift-proxy-5b8c78b5dc-vl479" Dec 10 15:44:57 crc kubenswrapper[4755]: I1210 15:44:57.771868 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/77a6e4fa-6291-41fc-a165-9fe6d6039810-internal-tls-certs\") pod \"swift-proxy-5b8c78b5dc-vl479\" (UID: \"77a6e4fa-6291-41fc-a165-9fe6d6039810\") " pod="openstack/swift-proxy-5b8c78b5dc-vl479" Dec 10 15:44:57 crc kubenswrapper[4755]: I1210 15:44:57.775301 4755 generic.go:334] "Generic (PLEG): container finished" podID="05ad143b-bb62-4f04-94da-b4473be95da2" containerID="f6915ec4e6022634899246154151fac515dd017f2f725a58c7c31fd0f5c66d3d" exitCode=0 Dec 10 15:44:57 crc kubenswrapper[4755]: I1210 15:44:57.775336 4755 generic.go:334] "Generic (PLEG): container finished" podID="05ad143b-bb62-4f04-94da-b4473be95da2" containerID="6755b231d887b112762de4edbe10406e7553667e317f41eb6232b913bb47796b" exitCode=2 Dec 10 15:44:57 crc kubenswrapper[4755]: I1210 15:44:57.775347 4755 generic.go:334] "Generic (PLEG): container finished" podID="05ad143b-bb62-4f04-94da-b4473be95da2" containerID="1317119dfc28922ac22eb17eeb3e7b438ff12a3a3dcf1a08f48d30a64c9de0b5" exitCode=0 Dec 10 15:44:57 crc kubenswrapper[4755]: I1210 15:44:57.784415 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16a3f983-2b37-4dfc-944d-c959d5824b69" path="/var/lib/kubelet/pods/16a3f983-2b37-4dfc-944d-c959d5824b69/volumes" Dec 10 15:44:57 crc kubenswrapper[4755]: I1210 15:44:57.785148 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5843012-5395-4dac-9506-b2e080cdc229" path="/var/lib/kubelet/pods/d5843012-5395-4dac-9506-b2e080cdc229/volumes" Dec 10 15:44:57 crc kubenswrapper[4755]: I1210 15:44:57.785741 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"05ad143b-bb62-4f04-94da-b4473be95da2","Type":"ContainerDied","Data":"f6915ec4e6022634899246154151fac515dd017f2f725a58c7c31fd0f5c66d3d"} Dec 10 15:44:57 crc kubenswrapper[4755]: I1210 15:44:57.785779 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"] Dec 10 15:44:57 crc kubenswrapper[4755]: I1210 15:44:57.785799 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"05ad143b-bb62-4f04-94da-b4473be95da2","Type":"ContainerDied","Data":"6755b231d887b112762de4edbe10406e7553667e317f41eb6232b913bb47796b"} Dec 10 15:44:57 crc kubenswrapper[4755]: I1210 15:44:57.785811 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"05ad143b-bb62-4f04-94da-b4473be95da2","Type":"ContainerDied","Data":"1317119dfc28922ac22eb17eeb3e7b438ff12a3a3dcf1a08f48d30a64c9de0b5"} Dec 10 15:44:57 crc kubenswrapper[4755]: I1210 15:44:57.873761 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/77a6e4fa-6291-41fc-a165-9fe6d6039810-public-tls-certs\") pod \"swift-proxy-5b8c78b5dc-vl479\" (UID: \"77a6e4fa-6291-41fc-a165-9fe6d6039810\") " pod="openstack/swift-proxy-5b8c78b5dc-vl479" Dec 10 15:44:57 crc kubenswrapper[4755]: I1210 15:44:57.874176 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/77a6e4fa-6291-41fc-a165-9fe6d6039810-internal-tls-certs\") pod \"swift-proxy-5b8c78b5dc-vl479\" (UID: \"77a6e4fa-6291-41fc-a165-9fe6d6039810\") " pod="openstack/swift-proxy-5b8c78b5dc-vl479" Dec 
10 15:44:57 crc kubenswrapper[4755]: I1210 15:44:57.874279 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/77a6e4fa-6291-41fc-a165-9fe6d6039810-run-httpd\") pod \"swift-proxy-5b8c78b5dc-vl479\" (UID: \"77a6e4fa-6291-41fc-a165-9fe6d6039810\") " pod="openstack/swift-proxy-5b8c78b5dc-vl479" Dec 10 15:44:57 crc kubenswrapper[4755]: I1210 15:44:57.874330 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/77a6e4fa-6291-41fc-a165-9fe6d6039810-etc-swift\") pod \"swift-proxy-5b8c78b5dc-vl479\" (UID: \"77a6e4fa-6291-41fc-a165-9fe6d6039810\") " pod="openstack/swift-proxy-5b8c78b5dc-vl479" Dec 10 15:44:57 crc kubenswrapper[4755]: I1210 15:44:57.874428 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdk5m\" (UniqueName: \"kubernetes.io/projected/77a6e4fa-6291-41fc-a165-9fe6d6039810-kube-api-access-kdk5m\") pod \"swift-proxy-5b8c78b5dc-vl479\" (UID: \"77a6e4fa-6291-41fc-a165-9fe6d6039810\") " pod="openstack/swift-proxy-5b8c78b5dc-vl479" Dec 10 15:44:57 crc kubenswrapper[4755]: I1210 15:44:57.874489 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/77a6e4fa-6291-41fc-a165-9fe6d6039810-log-httpd\") pod \"swift-proxy-5b8c78b5dc-vl479\" (UID: \"77a6e4fa-6291-41fc-a165-9fe6d6039810\") " pod="openstack/swift-proxy-5b8c78b5dc-vl479" Dec 10 15:44:57 crc kubenswrapper[4755]: I1210 15:44:57.874524 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77a6e4fa-6291-41fc-a165-9fe6d6039810-combined-ca-bundle\") pod \"swift-proxy-5b8c78b5dc-vl479\" (UID: \"77a6e4fa-6291-41fc-a165-9fe6d6039810\") " pod="openstack/swift-proxy-5b8c78b5dc-vl479" Dec 10 15:44:57 crc kubenswrapper[4755]: I1210 15:44:57.874584 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77a6e4fa-6291-41fc-a165-9fe6d6039810-config-data\") pod \"swift-proxy-5b8c78b5dc-vl479\" (UID: \"77a6e4fa-6291-41fc-a165-9fe6d6039810\") " pod="openstack/swift-proxy-5b8c78b5dc-vl479" Dec 10 15:44:57 crc kubenswrapper[4755]: I1210 15:44:57.877657 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/77a6e4fa-6291-41fc-a165-9fe6d6039810-run-httpd\") pod \"swift-proxy-5b8c78b5dc-vl479\" (UID: \"77a6e4fa-6291-41fc-a165-9fe6d6039810\") " pod="openstack/swift-proxy-5b8c78b5dc-vl479" Dec 10 15:44:57 crc kubenswrapper[4755]: I1210 15:44:57.883708 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/77a6e4fa-6291-41fc-a165-9fe6d6039810-internal-tls-certs\") pod \"swift-proxy-5b8c78b5dc-vl479\" (UID: \"77a6e4fa-6291-41fc-a165-9fe6d6039810\") " pod="openstack/swift-proxy-5b8c78b5dc-vl479" Dec 10 15:44:57 crc kubenswrapper[4755]: I1210 15:44:57.884025 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/77a6e4fa-6291-41fc-a165-9fe6d6039810-log-httpd\") pod \"swift-proxy-5b8c78b5dc-vl479\" (UID: \"77a6e4fa-6291-41fc-a165-9fe6d6039810\") " pod="openstack/swift-proxy-5b8c78b5dc-vl479" Dec 10 15:44:57 crc kubenswrapper[4755]: I1210 15:44:57.886994 4755 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77a6e4fa-6291-41fc-a165-9fe6d6039810-combined-ca-bundle\") pod \"swift-proxy-5b8c78b5dc-vl479\" (UID: \"77a6e4fa-6291-41fc-a165-9fe6d6039810\") " pod="openstack/swift-proxy-5b8c78b5dc-vl479" Dec 10 15:44:57 crc kubenswrapper[4755]: I1210 15:44:57.888404 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/77a6e4fa-6291-41fc-a165-9fe6d6039810-public-tls-certs\") pod \"swift-proxy-5b8c78b5dc-vl479\" (UID: \"77a6e4fa-6291-41fc-a165-9fe6d6039810\") " pod="openstack/swift-proxy-5b8c78b5dc-vl479" Dec 10 15:44:57 crc kubenswrapper[4755]: I1210 15:44:57.897635 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77a6e4fa-6291-41fc-a165-9fe6d6039810-config-data\") pod \"swift-proxy-5b8c78b5dc-vl479\" (UID: \"77a6e4fa-6291-41fc-a165-9fe6d6039810\") " pod="openstack/swift-proxy-5b8c78b5dc-vl479" Dec 10 15:44:57 crc kubenswrapper[4755]: I1210 15:44:57.900444 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdk5m\" (UniqueName: \"kubernetes.io/projected/77a6e4fa-6291-41fc-a165-9fe6d6039810-kube-api-access-kdk5m\") pod \"swift-proxy-5b8c78b5dc-vl479\" (UID: \"77a6e4fa-6291-41fc-a165-9fe6d6039810\") " pod="openstack/swift-proxy-5b8c78b5dc-vl479" Dec 10 15:44:57 crc kubenswrapper[4755]: I1210 15:44:57.905964 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/77a6e4fa-6291-41fc-a165-9fe6d6039810-etc-swift\") pod \"swift-proxy-5b8c78b5dc-vl479\" (UID: \"77a6e4fa-6291-41fc-a165-9fe6d6039810\") " pod="openstack/swift-proxy-5b8c78b5dc-vl479" Dec 10 15:44:57 crc kubenswrapper[4755]: I1210 15:44:57.933023 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-5b8c78b5dc-vl479" Dec 10 15:44:58 crc kubenswrapper[4755]: I1210 15:44:58.079248 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Dec 10 15:44:58 crc kubenswrapper[4755]: I1210 15:44:58.684131 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5b8c78b5dc-vl479"] Dec 10 15:44:58 crc kubenswrapper[4755]: W1210 15:44:58.703127 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod77a6e4fa_6291_41fc_a165_9fe6d6039810.slice/crio-6558cbe03d59fed2547bde36eee1e87ce247006d237e4c596d202e3601b72311 WatchSource:0}: Error finding container 6558cbe03d59fed2547bde36eee1e87ce247006d237e4c596d202e3601b72311: Status 404 returned error can't find the container with id 6558cbe03d59fed2547bde36eee1e87ce247006d237e4c596d202e3601b72311 Dec 10 15:44:58 crc kubenswrapper[4755]: I1210 15:44:58.815767 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5b8c78b5dc-vl479" event={"ID":"77a6e4fa-6291-41fc-a165-9fe6d6039810","Type":"ContainerStarted","Data":"6558cbe03d59fed2547bde36eee1e87ce247006d237e4c596d202e3601b72311"} Dec 10 15:44:58 crc kubenswrapper[4755]: I1210 15:44:58.817638 4755 generic.go:334] "Generic (PLEG): container finished" podID="05ad143b-bb62-4f04-94da-b4473be95da2" containerID="8ec378a30c21ccb7364ac5b12909974c507cb0542be73687d497ffd56696ec80" exitCode=0 Dec 10 15:44:58 crc kubenswrapper[4755]: I1210 15:44:58.817672 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"05ad143b-bb62-4f04-94da-b4473be95da2","Type":"ContainerDied","Data":"8ec378a30c21ccb7364ac5b12909974c507cb0542be73687d497ffd56696ec80"} Dec 10 15:44:58 crc kubenswrapper[4755]: I1210 15:44:58.820117 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"83ca4bf3-4811-4418-af2b-0fdc5e299a00","Type":"ContainerStarted","Data":"24ccadc9c45111f5cbe20607275ed9395aedb86da1f2b16db2d5ef6d0092b297"} Dec 10 15:44:58 crc kubenswrapper[4755]: I1210 15:44:58.821045 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"83ca4bf3-4811-4418-af2b-0fdc5e299a00","Type":"ContainerStarted","Data":"f27de51107600b19f9413beaee2a246fbc5c83c828b1a0466eabe4becaa3c894"} Dec 10 15:44:58 crc kubenswrapper[4755]: I1210 15:44:58.857715 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-proc-0" podStartSLOduration=2.857681806 podStartE2EDuration="2.857681806s" podCreationTimestamp="2025-12-10 15:44:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:44:58.852792753 +0000 UTC m=+1295.453676385" watchObservedRunningTime="2025-12-10 15:44:58.857681806 +0000 UTC m=+1295.458565438" Dec 10 15:44:59 crc kubenswrapper[4755]: I1210 15:44:59.541923 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-hdt77"] Dec 10 15:44:59 crc kubenswrapper[4755]: I1210 15:44:59.543427 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-hdt77" Dec 10 15:44:59 crc kubenswrapper[4755]: I1210 15:44:59.572724 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-hdt77"] Dec 10 15:44:59 crc kubenswrapper[4755]: I1210 15:44:59.635682 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7364f60-3c77-4234-9bed-d0e8f92d0bca-operator-scripts\") pod \"nova-api-db-create-hdt77\" (UID: \"c7364f60-3c77-4234-9bed-d0e8f92d0bca\") " pod="openstack/nova-api-db-create-hdt77" Dec 10 15:44:59 crc kubenswrapper[4755]: I1210 15:44:59.636134 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhm8p\" (UniqueName: \"kubernetes.io/projected/c7364f60-3c77-4234-9bed-d0e8f92d0bca-kube-api-access-hhm8p\") pod \"nova-api-db-create-hdt77\" (UID: \"c7364f60-3c77-4234-9bed-d0e8f92d0bca\") " pod="openstack/nova-api-db-create-hdt77" Dec 10 15:44:59 crc kubenswrapper[4755]: I1210 15:44:59.650177 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-625d-account-create-update-mn5ph"] Dec 10 15:44:59 crc kubenswrapper[4755]: I1210 15:44:59.651995 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-625d-account-create-update-mn5ph" Dec 10 15:44:59 crc kubenswrapper[4755]: I1210 15:44:59.657128 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Dec 10 15:44:59 crc kubenswrapper[4755]: I1210 15:44:59.676292 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-64rbl"] Dec 10 15:44:59 crc kubenswrapper[4755]: I1210 15:44:59.679821 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-64rbl" Dec 10 15:44:59 crc kubenswrapper[4755]: I1210 15:44:59.691190 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-625d-account-create-update-mn5ph"] Dec 10 15:44:59 crc kubenswrapper[4755]: I1210 15:44:59.714041 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-64rbl"] Dec 10 15:44:59 crc kubenswrapper[4755]: I1210 15:44:59.737382 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhm8p\" (UniqueName: \"kubernetes.io/projected/c7364f60-3c77-4234-9bed-d0e8f92d0bca-kube-api-access-hhm8p\") pod \"nova-api-db-create-hdt77\" (UID: \"c7364f60-3c77-4234-9bed-d0e8f92d0bca\") " pod="openstack/nova-api-db-create-hdt77" Dec 10 15:44:59 crc kubenswrapper[4755]: I1210 15:44:59.737429 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hht45\" (UniqueName: \"kubernetes.io/projected/3347b9d4-ec43-4f20-a896-4c3f26ecb892-kube-api-access-hht45\") pod \"nova-cell0-db-create-64rbl\" (UID: \"3347b9d4-ec43-4f20-a896-4c3f26ecb892\") " pod="openstack/nova-cell0-db-create-64rbl" Dec 10 15:44:59 crc kubenswrapper[4755]: I1210 15:44:59.737461 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7364f60-3c77-4234-9bed-d0e8f92d0bca-operator-scripts\") pod \"nova-api-db-create-hdt77\" (UID: \"c7364f60-3c77-4234-9bed-d0e8f92d0bca\") " pod="openstack/nova-api-db-create-hdt77" Dec 10 15:44:59 crc kubenswrapper[4755]: I1210 15:44:59.737509 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8twv\" (UniqueName: \"kubernetes.io/projected/5dbb4b56-2d08-40ba-8cce-70f548573384-kube-api-access-p8twv\") pod \"nova-api-625d-account-create-update-mn5ph\" (UID: \"5dbb4b56-2d08-40ba-8cce-70f548573384\") " pod="openstack/nova-api-625d-account-create-update-mn5ph" Dec 10 15:44:59 crc kubenswrapper[4755]: I1210 15:44:59.737710 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3347b9d4-ec43-4f20-a896-4c3f26ecb892-operator-scripts\") pod \"nova-cell0-db-create-64rbl\" (UID: \"3347b9d4-ec43-4f20-a896-4c3f26ecb892\") " pod="openstack/nova-cell0-db-create-64rbl" Dec 10 15:44:59 crc kubenswrapper[4755]: I1210 15:44:59.737773 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5dbb4b56-2d08-40ba-8cce-70f548573384-operator-scripts\") pod \"nova-api-625d-account-create-update-mn5ph\" (UID: \"5dbb4b56-2d08-40ba-8cce-70f548573384\") " pod="openstack/nova-api-625d-account-create-update-mn5ph" Dec 10 15:44:59 crc kubenswrapper[4755]: I1210 15:44:59.738520 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7364f60-3c77-4234-9bed-d0e8f92d0bca-operator-scripts\") pod \"nova-api-db-create-hdt77\" (UID: \"c7364f60-3c77-4234-9bed-d0e8f92d0bca\") " pod="openstack/nova-api-db-create-hdt77" Dec 10 15:44:59 crc kubenswrapper[4755]: I1210 15:44:59.760637 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhm8p\" (UniqueName: \"kubernetes.io/projected/c7364f60-3c77-4234-9bed-d0e8f92d0bca-kube-api-access-hhm8p\") pod 
\"nova-api-db-create-hdt77\" (UID: \"c7364f60-3c77-4234-9bed-d0e8f92d0bca\") " pod="openstack/nova-api-db-create-hdt77" Dec 10 15:44:59 crc kubenswrapper[4755]: I1210 15:44:59.842167 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hht45\" (UniqueName: \"kubernetes.io/projected/3347b9d4-ec43-4f20-a896-4c3f26ecb892-kube-api-access-hht45\") pod \"nova-cell0-db-create-64rbl\" (UID: \"3347b9d4-ec43-4f20-a896-4c3f26ecb892\") " pod="openstack/nova-cell0-db-create-64rbl" Dec 10 15:44:59 crc kubenswrapper[4755]: I1210 15:44:59.842285 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8twv\" (UniqueName: \"kubernetes.io/projected/5dbb4b56-2d08-40ba-8cce-70f548573384-kube-api-access-p8twv\") pod \"nova-api-625d-account-create-update-mn5ph\" (UID: \"5dbb4b56-2d08-40ba-8cce-70f548573384\") " pod="openstack/nova-api-625d-account-create-update-mn5ph" Dec 10 15:44:59 crc kubenswrapper[4755]: I1210 15:44:59.842370 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3347b9d4-ec43-4f20-a896-4c3f26ecb892-operator-scripts\") pod \"nova-cell0-db-create-64rbl\" (UID: \"3347b9d4-ec43-4f20-a896-4c3f26ecb892\") " pod="openstack/nova-cell0-db-create-64rbl" Dec 10 15:44:59 crc kubenswrapper[4755]: I1210 15:44:59.842411 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5dbb4b56-2d08-40ba-8cce-70f548573384-operator-scripts\") pod \"nova-api-625d-account-create-update-mn5ph\" (UID: \"5dbb4b56-2d08-40ba-8cce-70f548573384\") " pod="openstack/nova-api-625d-account-create-update-mn5ph" Dec 10 15:44:59 crc kubenswrapper[4755]: I1210 15:44:59.843144 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5dbb4b56-2d08-40ba-8cce-70f548573384-operator-scripts\") pod \"nova-api-625d-account-create-update-mn5ph\" (UID: \"5dbb4b56-2d08-40ba-8cce-70f548573384\") " pod="openstack/nova-api-625d-account-create-update-mn5ph" Dec 10 15:44:59 crc kubenswrapper[4755]: I1210 15:44:59.846513 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3347b9d4-ec43-4f20-a896-4c3f26ecb892-operator-scripts\") pod \"nova-cell0-db-create-64rbl\" (UID: \"3347b9d4-ec43-4f20-a896-4c3f26ecb892\") " pod="openstack/nova-cell0-db-create-64rbl" Dec 10 15:44:59 crc kubenswrapper[4755]: I1210 15:44:59.871539 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-jrbpt"] Dec 10 15:44:59 crc kubenswrapper[4755]: I1210 15:44:59.872891 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-jrbpt" Dec 10 15:44:59 crc kubenswrapper[4755]: I1210 15:44:59.882162 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5b8c78b5dc-vl479" event={"ID":"77a6e4fa-6291-41fc-a165-9fe6d6039810","Type":"ContainerStarted","Data":"76c5990ee420657a96cccf1bdfe516c838cc4bc2576fc3f588345e26d102e926"} Dec 10 15:44:59 crc kubenswrapper[4755]: I1210 15:44:59.882752 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-hdt77" Dec 10 15:44:59 crc kubenswrapper[4755]: I1210 15:44:59.893265 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-dca9-account-create-update-lw8s5"] Dec 10 15:44:59 crc kubenswrapper[4755]: I1210 15:44:59.934449 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8twv\" (UniqueName: \"kubernetes.io/projected/5dbb4b56-2d08-40ba-8cce-70f548573384-kube-api-access-p8twv\") pod \"nova-api-625d-account-create-update-mn5ph\" (UID: \"5dbb4b56-2d08-40ba-8cce-70f548573384\") " pod="openstack/nova-api-625d-account-create-update-mn5ph" Dec 10 15:44:59 crc kubenswrapper[4755]: I1210 15:44:59.939209 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hht45\" (UniqueName: \"kubernetes.io/projected/3347b9d4-ec43-4f20-a896-4c3f26ecb892-kube-api-access-hht45\") pod \"nova-cell0-db-create-64rbl\" (UID: \"3347b9d4-ec43-4f20-a896-4c3f26ecb892\") " pod="openstack/nova-cell0-db-create-64rbl" Dec 10 15:44:59 crc kubenswrapper[4755]: I1210 15:44:59.945283 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-jrbpt"] Dec 10 15:44:59 crc kubenswrapper[4755]: I1210 15:44:59.945393 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-dca9-account-create-update-lw8s5" Dec 10 15:44:59 crc kubenswrapper[4755]: I1210 15:44:59.969226 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Dec 10 15:44:59 crc kubenswrapper[4755]: I1210 15:44:59.992419 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-dca9-account-create-update-lw8s5"] Dec 10 15:45:00 crc kubenswrapper[4755]: I1210 15:45:00.016407 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-625d-account-create-update-mn5ph" Dec 10 15:45:00 crc kubenswrapper[4755]: I1210 15:45:00.062552 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvj86\" (UniqueName: \"kubernetes.io/projected/0ce71806-31d9-482e-860b-3fceb024e17f-kube-api-access-xvj86\") pod \"nova-cell1-db-create-jrbpt\" (UID: \"0ce71806-31d9-482e-860b-3fceb024e17f\") " pod="openstack/nova-cell1-db-create-jrbpt" Dec 10 15:45:00 crc kubenswrapper[4755]: I1210 15:45:00.063983 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-64rbl" Dec 10 15:45:00 crc kubenswrapper[4755]: I1210 15:45:00.065122 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdprk\" (UniqueName: \"kubernetes.io/projected/833228f4-cb63-4a39-aada-9481b9cdb3e5-kube-api-access-kdprk\") pod \"nova-cell0-dca9-account-create-update-lw8s5\" (UID: \"833228f4-cb63-4a39-aada-9481b9cdb3e5\") " pod="openstack/nova-cell0-dca9-account-create-update-lw8s5" Dec 10 15:45:00 crc kubenswrapper[4755]: I1210 15:45:00.089644 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/833228f4-cb63-4a39-aada-9481b9cdb3e5-operator-scripts\") pod \"nova-cell0-dca9-account-create-update-lw8s5\" (UID: \"833228f4-cb63-4a39-aada-9481b9cdb3e5\") " pod="openstack/nova-cell0-dca9-account-create-update-lw8s5" Dec 10 15:45:00 crc kubenswrapper[4755]: I1210 15:45:00.091818 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ce71806-31d9-482e-860b-3fceb024e17f-operator-scripts\") pod \"nova-cell1-db-create-jrbpt\" (UID: \"0ce71806-31d9-482e-860b-3fceb024e17f\") " pod="openstack/nova-cell1-db-create-jrbpt" Dec 10 15:45:00 crc kubenswrapper[4755]: I1210 15:45:00.108621 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-a9ee-account-create-update-zctzv"] Dec 10 15:45:00 crc kubenswrapper[4755]: I1210 15:45:00.118565 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-a9ee-account-create-update-zctzv" Dec 10 15:45:00 crc kubenswrapper[4755]: I1210 15:45:00.124611 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Dec 10 15:45:00 crc kubenswrapper[4755]: I1210 15:45:00.159609 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-a9ee-account-create-update-zctzv"] Dec 10 15:45:00 crc kubenswrapper[4755]: I1210 15:45:00.188283 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 10 15:45:00 crc kubenswrapper[4755]: I1210 15:45:00.190549 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29423025-mmtdq"] Dec 10 15:45:00 crc kubenswrapper[4755]: E1210 15:45:00.191028 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05ad143b-bb62-4f04-94da-b4473be95da2" containerName="ceilometer-central-agent" Dec 10 15:45:00 crc kubenswrapper[4755]: I1210 15:45:00.191049 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="05ad143b-bb62-4f04-94da-b4473be95da2" containerName="ceilometer-central-agent" Dec 10 15:45:00 crc kubenswrapper[4755]: E1210 15:45:00.191066 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05ad143b-bb62-4f04-94da-b4473be95da2" containerName="sg-core" Dec 10 15:45:00 crc kubenswrapper[4755]: I1210 15:45:00.191072 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="05ad143b-bb62-4f04-94da-b4473be95da2" containerName="sg-core" Dec 10 15:45:00 crc kubenswrapper[4755]: E1210 15:45:00.191108 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05ad143b-bb62-4f04-94da-b4473be95da2" containerName="proxy-httpd" Dec 10 15:45:00 crc kubenswrapper[4755]: I1210 15:45:00.191114 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="05ad143b-bb62-4f04-94da-b4473be95da2" containerName="proxy-httpd" Dec 10 15:45:00 crc kubenswrapper[4755]: E1210 15:45:00.191122 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05ad143b-bb62-4f04-94da-b4473be95da2" containerName="ceilometer-notification-agent" Dec 10 15:45:00 crc kubenswrapper[4755]: I1210 15:45:00.191127 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="05ad143b-bb62-4f04-94da-b4473be95da2" containerName="ceilometer-notification-agent" Dec 10 15:45:00 crc kubenswrapper[4755]: I1210 15:45:00.191335 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="05ad143b-bb62-4f04-94da-b4473be95da2" containerName="proxy-httpd" Dec 10 15:45:00 crc kubenswrapper[4755]: I1210 15:45:00.191354 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="05ad143b-bb62-4f04-94da-b4473be95da2" containerName="ceilometer-central-agent" Dec 10 15:45:00 crc kubenswrapper[4755]: I1210 15:45:00.191363 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="05ad143b-bb62-4f04-94da-b4473be95da2" containerName="ceilometer-notification-agent" Dec 10 15:45:00 crc kubenswrapper[4755]: I1210 15:45:00.191380 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="05ad143b-bb62-4f04-94da-b4473be95da2" containerName="sg-core" Dec 10 15:45:00 crc kubenswrapper[4755]: I1210 15:45:00.192248 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29423025-mmtdq" Dec 10 15:45:00 crc kubenswrapper[4755]: I1210 15:45:00.194500 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 10 15:45:00 crc kubenswrapper[4755]: I1210 15:45:00.194741 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 10 15:45:00 crc kubenswrapper[4755]: I1210 15:45:00.207486 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgc95\" (UniqueName: \"kubernetes.io/projected/8002c1bd-43bb-4d3d-b06a-e391505af5b5-kube-api-access-lgc95\") pod \"nova-cell1-a9ee-account-create-update-zctzv\" (UID: \"8002c1bd-43bb-4d3d-b06a-e391505af5b5\") " pod="openstack/nova-cell1-a9ee-account-create-update-zctzv" Dec 10 15:45:00 crc kubenswrapper[4755]: I1210 15:45:00.207548 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8002c1bd-43bb-4d3d-b06a-e391505af5b5-operator-scripts\") pod \"nova-cell1-a9ee-account-create-update-zctzv\" (UID: \"8002c1bd-43bb-4d3d-b06a-e391505af5b5\") " pod="openstack/nova-cell1-a9ee-account-create-update-zctzv" Dec 10 15:45:00 crc kubenswrapper[4755]: I1210 15:45:00.207612 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ce71806-31d9-482e-860b-3fceb024e17f-operator-scripts\") pod \"nova-cell1-db-create-jrbpt\" (UID: \"0ce71806-31d9-482e-860b-3fceb024e17f\") " pod="openstack/nova-cell1-db-create-jrbpt" Dec 10 15:45:00 crc kubenswrapper[4755]: I1210 15:45:00.207742 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvj86\" (UniqueName: \"kubernetes.io/projected/0ce71806-31d9-482e-860b-3fceb024e17f-kube-api-access-xvj86\") pod \"nova-cell1-db-create-jrbpt\" (UID: \"0ce71806-31d9-482e-860b-3fceb024e17f\") " pod="openstack/nova-cell1-db-create-jrbpt" Dec 10 15:45:00 crc kubenswrapper[4755]: I1210 15:45:00.216771 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/833228f4-cb63-4a39-aada-9481b9cdb3e5-operator-scripts\") pod \"nova-cell0-dca9-account-create-update-lw8s5\" (UID: \"833228f4-cb63-4a39-aada-9481b9cdb3e5\") " pod="openstack/nova-cell0-dca9-account-create-update-lw8s5" Dec 10 15:45:00 crc kubenswrapper[4755]: I1210 15:45:00.216844 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdprk\" (UniqueName: \"kubernetes.io/projected/833228f4-cb63-4a39-aada-9481b9cdb3e5-kube-api-access-kdprk\") pod \"nova-cell0-dca9-account-create-update-lw8s5\" (UID: \"833228f4-cb63-4a39-aada-9481b9cdb3e5\") " pod="openstack/nova-cell0-dca9-account-create-update-lw8s5" Dec 10 15:45:00 crc kubenswrapper[4755]: I1210 15:45:00.218145 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ce71806-31d9-482e-860b-3fceb024e17f-operator-scripts\") pod \"nova-cell1-db-create-jrbpt\" (UID: \"0ce71806-31d9-482e-860b-3fceb024e17f\") " pod="openstack/nova-cell1-db-create-jrbpt" Dec 10 15:45:00 crc kubenswrapper[4755]: I1210 15:45:00.221576 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/833228f4-cb63-4a39-aada-9481b9cdb3e5-operator-scripts\") pod \"nova-cell0-dca9-account-create-update-lw8s5\" (UID: \"833228f4-cb63-4a39-aada-9481b9cdb3e5\") " pod="openstack/nova-cell0-dca9-account-create-update-lw8s5" Dec 10 15:45:00 crc kubenswrapper[4755]: I1210 15:45:00.265904 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29423025-mmtdq"] Dec 10 15:45:00 crc kubenswrapper[4755]: I1210 15:45:00.278124 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdprk\" (UniqueName: \"kubernetes.io/projected/833228f4-cb63-4a39-aada-9481b9cdb3e5-kube-api-access-kdprk\") pod \"nova-cell0-dca9-account-create-update-lw8s5\" (UID: \"833228f4-cb63-4a39-aada-9481b9cdb3e5\") " pod="openstack/nova-cell0-dca9-account-create-update-lw8s5" Dec 10 15:45:00 crc kubenswrapper[4755]: I1210 15:45:00.314659 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvj86\" (UniqueName: \"kubernetes.io/projected/0ce71806-31d9-482e-860b-3fceb024e17f-kube-api-access-xvj86\") pod \"nova-cell1-db-create-jrbpt\" (UID: \"0ce71806-31d9-482e-860b-3fceb024e17f\") " pod="openstack/nova-cell1-db-create-jrbpt" Dec 10 15:45:00 crc kubenswrapper[4755]: I1210 15:45:00.321324 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05ad143b-bb62-4f04-94da-b4473be95da2-run-httpd\") pod \"05ad143b-bb62-4f04-94da-b4473be95da2\" (UID: \"05ad143b-bb62-4f04-94da-b4473be95da2\") " Dec 10 15:45:00 crc kubenswrapper[4755]: I1210 15:45:00.321537 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/05ad143b-bb62-4f04-94da-b4473be95da2-sg-core-conf-yaml\") pod \"05ad143b-bb62-4f04-94da-b4473be95da2\" (UID: \"05ad143b-bb62-4f04-94da-b4473be95da2\") " Dec 10 15:45:00 crc kubenswrapper[4755]: I1210 15:45:00.321603 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05ad143b-bb62-4f04-94da-b4473be95da2-log-httpd\") pod \"05ad143b-bb62-4f04-94da-b4473be95da2\" (UID: \"05ad143b-bb62-4f04-94da-b4473be95da2\") " Dec 10 15:45:00 crc kubenswrapper[4755]: I1210 15:45:00.321707 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05ad143b-bb62-4f04-94da-b4473be95da2-scripts\") pod \"05ad143b-bb62-4f04-94da-b4473be95da2\" (UID: \"05ad143b-bb62-4f04-94da-b4473be95da2\") " Dec 10 15:45:00 crc kubenswrapper[4755]: I1210 15:45:00.321744 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbhgg\" (UniqueName: \"kubernetes.io/projected/05ad143b-bb62-4f04-94da-b4473be95da2-kube-api-access-xbhgg\") pod \"05ad143b-bb62-4f04-94da-b4473be95da2\" (UID: \"05ad143b-bb62-4f04-94da-b4473be95da2\") " Dec 10 15:45:00 crc kubenswrapper[4755]: I1210 15:45:00.321783 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05ad143b-bb62-4f04-94da-b4473be95da2-config-data\") pod \"05ad143b-bb62-4f04-94da-b4473be95da2\" (UID: \"05ad143b-bb62-4f04-94da-b4473be95da2\") " Dec 10 15:45:00 crc kubenswrapper[4755]: I1210 15:45:00.321808 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05ad143b-bb62-4f04-94da-b4473be95da2-combined-ca-bundle\") pod \"05ad143b-bb62-4f04-94da-b4473be95da2\" (UID: \"05ad143b-bb62-4f04-94da-b4473be95da2\") " Dec 10 15:45:00 crc kubenswrapper[4755]: I1210 15:45:00.322097 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a116975b-8d46-40b6-99e4-134b1558c5d9-secret-volume\") pod \"collect-profiles-29423025-mmtdq\" (UID: \"a116975b-8d46-40b6-99e4-134b1558c5d9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423025-mmtdq" Dec 10 15:45:00 crc kubenswrapper[4755]: I1210 15:45:00.322139 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbbl9\" (UniqueName: \"kubernetes.io/projected/a116975b-8d46-40b6-99e4-134b1558c5d9-kube-api-access-wbbl9\") pod \"collect-profiles-29423025-mmtdq\" (UID: \"a116975b-8d46-40b6-99e4-134b1558c5d9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423025-mmtdq" Dec 10 15:45:00 crc kubenswrapper[4755]: I1210 15:45:00.322182 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a116975b-8d46-40b6-99e4-134b1558c5d9-config-volume\") pod \"collect-profiles-29423025-mmtdq\" (UID: \"a116975b-8d46-40b6-99e4-134b1558c5d9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423025-mmtdq" Dec 10 15:45:00 crc kubenswrapper[4755]: I1210 15:45:00.322237 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgc95\" (UniqueName: \"kubernetes.io/projected/8002c1bd-43bb-4d3d-b06a-e391505af5b5-kube-api-access-lgc95\") pod \"nova-cell1-a9ee-account-create-update-zctzv\" (UID: \"8002c1bd-43bb-4d3d-b06a-e391505af5b5\") " pod="openstack/nova-cell1-a9ee-account-create-update-zctzv" Dec 10 15:45:00 crc kubenswrapper[4755]: I1210 15:45:00.322266 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8002c1bd-43bb-4d3d-b06a-e391505af5b5-operator-scripts\") pod \"nova-cell1-a9ee-account-create-update-zctzv\" (UID: \"8002c1bd-43bb-4d3d-b06a-e391505af5b5\") " pod="openstack/nova-cell1-a9ee-account-create-update-zctzv" Dec 10 15:45:00 crc kubenswrapper[4755]: I1210 15:45:00.326549 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05ad143b-bb62-4f04-94da-b4473be95da2-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "05ad143b-bb62-4f04-94da-b4473be95da2" (UID: "05ad143b-bb62-4f04-94da-b4473be95da2"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:45:00 crc kubenswrapper[4755]: I1210 15:45:00.327449 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8002c1bd-43bb-4d3d-b06a-e391505af5b5-operator-scripts\") pod \"nova-cell1-a9ee-account-create-update-zctzv\" (UID: \"8002c1bd-43bb-4d3d-b06a-e391505af5b5\") " pod="openstack/nova-cell1-a9ee-account-create-update-zctzv" Dec 10 15:45:00 crc kubenswrapper[4755]: I1210 15:45:00.328087 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05ad143b-bb62-4f04-94da-b4473be95da2-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "05ad143b-bb62-4f04-94da-b4473be95da2" (UID: "05ad143b-bb62-4f04-94da-b4473be95da2"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:45:00 crc kubenswrapper[4755]: I1210 15:45:00.329954 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05ad143b-bb62-4f04-94da-b4473be95da2-scripts" (OuterVolumeSpecName: "scripts") pod "05ad143b-bb62-4f04-94da-b4473be95da2" (UID: "05ad143b-bb62-4f04-94da-b4473be95da2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:45:00 crc kubenswrapper[4755]: I1210 15:45:00.347829 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05ad143b-bb62-4f04-94da-b4473be95da2-kube-api-access-xbhgg" (OuterVolumeSpecName: "kube-api-access-xbhgg") pod "05ad143b-bb62-4f04-94da-b4473be95da2" (UID: "05ad143b-bb62-4f04-94da-b4473be95da2"). InnerVolumeSpecName "kube-api-access-xbhgg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:45:00 crc kubenswrapper[4755]: I1210 15:45:00.375454 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgc95\" (UniqueName: \"kubernetes.io/projected/8002c1bd-43bb-4d3d-b06a-e391505af5b5-kube-api-access-lgc95\") pod \"nova-cell1-a9ee-account-create-update-zctzv\" (UID: \"8002c1bd-43bb-4d3d-b06a-e391505af5b5\") " pod="openstack/nova-cell1-a9ee-account-create-update-zctzv" Dec 10 15:45:00 crc kubenswrapper[4755]: I1210 15:45:00.426875 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a116975b-8d46-40b6-99e4-134b1558c5d9-secret-volume\") pod \"collect-profiles-29423025-mmtdq\" (UID: \"a116975b-8d46-40b6-99e4-134b1558c5d9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423025-mmtdq" Dec 10 15:45:00 crc kubenswrapper[4755]: I1210 15:45:00.427396 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbbl9\" (UniqueName: \"kubernetes.io/projected/a116975b-8d46-40b6-99e4-134b1558c5d9-kube-api-access-wbbl9\") pod \"collect-profiles-29423025-mmtdq\" (UID: \"a116975b-8d46-40b6-99e4-134b1558c5d9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423025-mmtdq" Dec 10 15:45:00 crc kubenswrapper[4755]: I1210 15:45:00.427438 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a116975b-8d46-40b6-99e4-134b1558c5d9-config-volume\") pod \"collect-profiles-29423025-mmtdq\" (UID: \"a116975b-8d46-40b6-99e4-134b1558c5d9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423025-mmtdq" Dec 10 15:45:00 crc kubenswrapper[4755]: I1210 15:45:00.427677 4755 
reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05ad143b-bb62-4f04-94da-b4473be95da2-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 10 15:45:00 crc kubenswrapper[4755]: I1210 15:45:00.427696 4755 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05ad143b-bb62-4f04-94da-b4473be95da2-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 15:45:00 crc kubenswrapper[4755]: I1210 15:45:00.427709 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbhgg\" (UniqueName: \"kubernetes.io/projected/05ad143b-bb62-4f04-94da-b4473be95da2-kube-api-access-xbhgg\") on node \"crc\" DevicePath \"\"" Dec 10 15:45:00 crc kubenswrapper[4755]: I1210 15:45:00.427721 4755 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05ad143b-bb62-4f04-94da-b4473be95da2-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 10 15:45:00 crc kubenswrapper[4755]: I1210 15:45:00.437452 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a116975b-8d46-40b6-99e4-134b1558c5d9-secret-volume\") pod \"collect-profiles-29423025-mmtdq\" (UID: \"a116975b-8d46-40b6-99e4-134b1558c5d9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423025-mmtdq" Dec 10 15:45:00 crc kubenswrapper[4755]: I1210 15:45:00.443340 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a116975b-8d46-40b6-99e4-134b1558c5d9-config-volume\") pod \"collect-profiles-29423025-mmtdq\" (UID: \"a116975b-8d46-40b6-99e4-134b1558c5d9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423025-mmtdq" Dec 10 15:45:00 crc kubenswrapper[4755]: I1210 15:45:00.459397 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-jrbpt" Dec 10 15:45:00 crc kubenswrapper[4755]: I1210 15:45:00.487147 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbbl9\" (UniqueName: \"kubernetes.io/projected/a116975b-8d46-40b6-99e4-134b1558c5d9-kube-api-access-wbbl9\") pod \"collect-profiles-29423025-mmtdq\" (UID: \"a116975b-8d46-40b6-99e4-134b1558c5d9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423025-mmtdq" Dec 10 15:45:00 crc kubenswrapper[4755]: I1210 15:45:00.514234 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-dca9-account-create-update-lw8s5" Dec 10 15:45:00 crc kubenswrapper[4755]: I1210 15:45:00.539560 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-a9ee-account-create-update-zctzv" Dec 10 15:45:00 crc kubenswrapper[4755]: I1210 15:45:00.572851 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05ad143b-bb62-4f04-94da-b4473be95da2-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "05ad143b-bb62-4f04-94da-b4473be95da2" (UID: "05ad143b-bb62-4f04-94da-b4473be95da2"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:45:00 crc kubenswrapper[4755]: I1210 15:45:00.634194 4755 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/05ad143b-bb62-4f04-94da-b4473be95da2-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 10 15:45:00 crc kubenswrapper[4755]: I1210 15:45:00.662780 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29423025-mmtdq" Dec 10 15:45:00 crc kubenswrapper[4755]: I1210 15:45:00.795664 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05ad143b-bb62-4f04-94da-b4473be95da2-config-data" (OuterVolumeSpecName: "config-data") pod "05ad143b-bb62-4f04-94da-b4473be95da2" (UID: "05ad143b-bb62-4f04-94da-b4473be95da2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:45:00 crc kubenswrapper[4755]: I1210 15:45:00.809150 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05ad143b-bb62-4f04-94da-b4473be95da2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "05ad143b-bb62-4f04-94da-b4473be95da2" (UID: "05ad143b-bb62-4f04-94da-b4473be95da2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:45:00 crc kubenswrapper[4755]: I1210 15:45:00.845350 4755 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05ad143b-bb62-4f04-94da-b4473be95da2-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:45:00 crc kubenswrapper[4755]: I1210 15:45:00.845635 4755 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05ad143b-bb62-4f04-94da-b4473be95da2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:45:00 crc kubenswrapper[4755]: I1210 15:45:00.865126 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-hdt77"] Dec 10 15:45:01 crc kubenswrapper[4755]: I1210 15:45:01.035777 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5b8c78b5dc-vl479" event={"ID":"77a6e4fa-6291-41fc-a165-9fe6d6039810","Type":"ContainerStarted","Data":"6892e0767046a975bdd5df9990797a277cf87606ced534636d6abb3f4fdd8b85"} Dec 10 15:45:01 crc kubenswrapper[4755]: I1210 15:45:01.035845 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5b8c78b5dc-vl479" Dec 10 15:45:01 crc kubenswrapper[4755]: I1210 15:45:01.035886 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5b8c78b5dc-vl479" Dec 10 15:45:01 crc kubenswrapper[4755]: I1210 15:45:01.073916 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-5b8c78b5dc-vl479" podStartSLOduration=4.073892913 podStartE2EDuration="4.073892913s" podCreationTimestamp="2025-12-10 15:44:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:45:01.061913379 +0000 UTC m=+1297.662797021" watchObservedRunningTime="2025-12-10 15:45:01.073892913 +0000 UTC m=+1297.674776545" Dec 10 15:45:01 crc kubenswrapper[4755]: I1210 15:45:01.104770 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"05ad143b-bb62-4f04-94da-b4473be95da2","Type":"ContainerDied","Data":"08eefd5afb1069d855e6673233228563625c2e85281cfd67731c650b82d5e4ac"} Dec 10 15:45:01 crc kubenswrapper[4755]: I1210 15:45:01.104832 4755 scope.go:117] "RemoveContainer" containerID="f6915ec4e6022634899246154151fac515dd017f2f725a58c7c31fd0f5c66d3d" Dec 10 15:45:01 crc kubenswrapper[4755]: I1210 15:45:01.104949 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 10 15:45:01 crc kubenswrapper[4755]: I1210 15:45:01.189366 4755 scope.go:117] "RemoveContainer" containerID="6755b231d887b112762de4edbe10406e7553667e317f41eb6232b913bb47796b" Dec 10 15:45:01 crc kubenswrapper[4755]: I1210 15:45:01.190169 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 10 15:45:01 crc kubenswrapper[4755]: I1210 15:45:01.241743 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 10 15:45:01 crc kubenswrapper[4755]: I1210 15:45:01.277612 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 10 15:45:01 crc kubenswrapper[4755]: I1210 15:45:01.285712 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 10 15:45:01 crc kubenswrapper[4755]: I1210 15:45:01.298346 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 10 15:45:01 crc kubenswrapper[4755]: I1210 15:45:01.298778 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 10 15:45:01 crc kubenswrapper[4755]: I1210 15:45:01.333754 4755 scope.go:117] "RemoveContainer" containerID="8ec378a30c21ccb7364ac5b12909974c507cb0542be73687d497ffd56696ec80" Dec 10 15:45:01 crc kubenswrapper[4755]: I1210 15:45:01.377578 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 10 15:45:01 crc kubenswrapper[4755]: I1210 15:45:01.390626 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30e81eb3-8296-4b76-8a90-a76aa64a4656-config-data\") pod \"ceilometer-0\" (UID: \"30e81eb3-8296-4b76-8a90-a76aa64a4656\") " pod="openstack/ceilometer-0" Dec 10 15:45:01 crc kubenswrapper[4755]: I1210 15:45:01.390681 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30e81eb3-8296-4b76-8a90-a76aa64a4656-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"30e81eb3-8296-4b76-8a90-a76aa64a4656\") " pod="openstack/ceilometer-0" Dec 10 15:45:01 crc kubenswrapper[4755]: I1210 15:45:01.390708 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30e81eb3-8296-4b76-8a90-a76aa64a4656-log-httpd\") pod \"ceilometer-0\" (UID: \"30e81eb3-8296-4b76-8a90-a76aa64a4656\") " pod="openstack/ceilometer-0" Dec 10 15:45:01 crc kubenswrapper[4755]: I1210 15:45:01.390753 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30e81eb3-8296-4b76-8a90-a76aa64a4656-scripts\") pod \"ceilometer-0\" (UID: \"30e81eb3-8296-4b76-8a90-a76aa64a4656\") " pod="openstack/ceilometer-0" Dec 10 15:45:01 crc kubenswrapper[4755]: I1210 15:45:01.390829 4755 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/30e81eb3-8296-4b76-8a90-a76aa64a4656-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"30e81eb3-8296-4b76-8a90-a76aa64a4656\") " pod="openstack/ceilometer-0" Dec 10 15:45:01 crc kubenswrapper[4755]: I1210 15:45:01.390849 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqlpr\" (UniqueName: \"kubernetes.io/projected/30e81eb3-8296-4b76-8a90-a76aa64a4656-kube-api-access-nqlpr\") pod \"ceilometer-0\" (UID: \"30e81eb3-8296-4b76-8a90-a76aa64a4656\") " pod="openstack/ceilometer-0" Dec 10 15:45:01 crc kubenswrapper[4755]: I1210 15:45:01.390887 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30e81eb3-8296-4b76-8a90-a76aa64a4656-run-httpd\") pod \"ceilometer-0\" (UID: \"30e81eb3-8296-4b76-8a90-a76aa64a4656\") " pod="openstack/ceilometer-0" Dec 10 15:45:01 crc kubenswrapper[4755]: I1210 15:45:01.394630 4755 scope.go:117] "RemoveContainer" containerID="1317119dfc28922ac22eb17eeb3e7b438ff12a3a3dcf1a08f48d30a64c9de0b5" Dec 10 15:45:01 crc kubenswrapper[4755]: I1210 15:45:01.494815 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30e81eb3-8296-4b76-8a90-a76aa64a4656-run-httpd\") pod \"ceilometer-0\" (UID: \"30e81eb3-8296-4b76-8a90-a76aa64a4656\") " pod="openstack/ceilometer-0" Dec 10 15:45:01 crc kubenswrapper[4755]: I1210 15:45:01.494926 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30e81eb3-8296-4b76-8a90-a76aa64a4656-config-data\") pod \"ceilometer-0\" (UID: \"30e81eb3-8296-4b76-8a90-a76aa64a4656\") " pod="openstack/ceilometer-0" Dec 10 15:45:01 crc kubenswrapper[4755]: I1210 15:45:01.494970 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30e81eb3-8296-4b76-8a90-a76aa64a4656-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"30e81eb3-8296-4b76-8a90-a76aa64a4656\") " pod="openstack/ceilometer-0" Dec 10 15:45:01 crc kubenswrapper[4755]: I1210 15:45:01.495002 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30e81eb3-8296-4b76-8a90-a76aa64a4656-log-httpd\") pod \"ceilometer-0\" (UID: \"30e81eb3-8296-4b76-8a90-a76aa64a4656\") " pod="openstack/ceilometer-0" Dec 10 15:45:01 crc kubenswrapper[4755]: I1210 15:45:01.495063 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30e81eb3-8296-4b76-8a90-a76aa64a4656-scripts\") pod \"ceilometer-0\" (UID: \"30e81eb3-8296-4b76-8a90-a76aa64a4656\") " pod="openstack/ceilometer-0" Dec 10 15:45:01 crc kubenswrapper[4755]: I1210 15:45:01.495161 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/30e81eb3-8296-4b76-8a90-a76aa64a4656-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"30e81eb3-8296-4b76-8a90-a76aa64a4656\") " pod="openstack/ceilometer-0" Dec 10 15:45:01 crc kubenswrapper[4755]: I1210 15:45:01.495191 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqlpr\" (UniqueName: 
\"kubernetes.io/projected/30e81eb3-8296-4b76-8a90-a76aa64a4656-kube-api-access-nqlpr\") pod \"ceilometer-0\" (UID: \"30e81eb3-8296-4b76-8a90-a76aa64a4656\") " pod="openstack/ceilometer-0" Dec 10 15:45:01 crc kubenswrapper[4755]: I1210 15:45:01.496385 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30e81eb3-8296-4b76-8a90-a76aa64a4656-log-httpd\") pod \"ceilometer-0\" (UID: \"30e81eb3-8296-4b76-8a90-a76aa64a4656\") " pod="openstack/ceilometer-0" Dec 10 15:45:01 crc kubenswrapper[4755]: I1210 15:45:01.496729 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30e81eb3-8296-4b76-8a90-a76aa64a4656-run-httpd\") pod \"ceilometer-0\" (UID: \"30e81eb3-8296-4b76-8a90-a76aa64a4656\") " pod="openstack/ceilometer-0" Dec 10 15:45:01 crc kubenswrapper[4755]: I1210 15:45:01.504766 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30e81eb3-8296-4b76-8a90-a76aa64a4656-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"30e81eb3-8296-4b76-8a90-a76aa64a4656\") " pod="openstack/ceilometer-0" Dec 10 15:45:01 crc kubenswrapper[4755]: I1210 15:45:01.513615 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30e81eb3-8296-4b76-8a90-a76aa64a4656-config-data\") pod \"ceilometer-0\" (UID: \"30e81eb3-8296-4b76-8a90-a76aa64a4656\") " pod="openstack/ceilometer-0" Dec 10 15:45:01 crc kubenswrapper[4755]: I1210 15:45:01.530014 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/30e81eb3-8296-4b76-8a90-a76aa64a4656-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"30e81eb3-8296-4b76-8a90-a76aa64a4656\") " pod="openstack/ceilometer-0" Dec 10 15:45:01 crc kubenswrapper[4755]: I1210 15:45:01.530568 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30e81eb3-8296-4b76-8a90-a76aa64a4656-scripts\") pod \"ceilometer-0\" (UID: \"30e81eb3-8296-4b76-8a90-a76aa64a4656\") " pod="openstack/ceilometer-0" Dec 10 15:45:01 crc kubenswrapper[4755]: I1210 15:45:01.541564 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqlpr\" (UniqueName: \"kubernetes.io/projected/30e81eb3-8296-4b76-8a90-a76aa64a4656-kube-api-access-nqlpr\") pod \"ceilometer-0\" (UID: \"30e81eb3-8296-4b76-8a90-a76aa64a4656\") " pod="openstack/ceilometer-0" Dec 10 15:45:01 crc kubenswrapper[4755]: I1210 15:45:01.636872 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 10 15:45:01 crc kubenswrapper[4755]: I1210 15:45:01.741939 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-64rbl"] Dec 10 15:45:01 crc kubenswrapper[4755]: I1210 15:45:01.801166 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05ad143b-bb62-4f04-94da-b4473be95da2" path="/var/lib/kubelet/pods/05ad143b-bb62-4f04-94da-b4473be95da2/volumes" Dec 10 15:45:01 crc kubenswrapper[4755]: I1210 15:45:01.801903 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-jrbpt"] Dec 10 15:45:01 crc kubenswrapper[4755]: I1210 15:45:01.801928 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-625d-account-create-update-mn5ph"] Dec 10 15:45:01 crc kubenswrapper[4755]: I1210 15:45:01.945200 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-dca9-account-create-update-lw8s5"] Dec 10 15:45:02 crc kubenswrapper[4755]: W1210 15:45:02.058511 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod833228f4_cb63_4a39_aada_9481b9cdb3e5.slice/crio-358f81e2e5e45893917351a1cc5b35c5a5494794e763e18f44d142eb91dfe98c WatchSource:0}: Error finding container 358f81e2e5e45893917351a1cc5b35c5a5494794e763e18f44d142eb91dfe98c: Status 404 returned error can't find the container with id 358f81e2e5e45893917351a1cc5b35c5a5494794e763e18f44d142eb91dfe98c Dec 10 15:45:02 crc kubenswrapper[4755]: I1210 15:45:02.090026 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-a9ee-account-create-update-zctzv"] Dec 10 15:45:02 crc kubenswrapper[4755]: I1210 15:45:02.151571 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-hdt77" event={"ID":"c7364f60-3c77-4234-9bed-d0e8f92d0bca","Type":"ContainerStarted","Data":"de585e3e9a268a166a4f4f255254cac44d96a00b4553c3cfefd6bb7821f1edea"} Dec 10 15:45:02 crc kubenswrapper[4755]: I1210 15:45:02.155868 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-64rbl" event={"ID":"3347b9d4-ec43-4f20-a896-4c3f26ecb892","Type":"ContainerStarted","Data":"4b20b76b69225c15e021ae69e307aa672ea66a0c0e25d95ed96759763a724c76"} Dec 10 15:45:02 crc kubenswrapper[4755]: I1210 15:45:02.157138 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-dca9-account-create-update-lw8s5" event={"ID":"833228f4-cb63-4a39-aada-9481b9cdb3e5","Type":"ContainerStarted","Data":"358f81e2e5e45893917351a1cc5b35c5a5494794e763e18f44d142eb91dfe98c"} Dec 10 15:45:02 crc kubenswrapper[4755]: I1210 15:45:02.165850 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-jrbpt" event={"ID":"0ce71806-31d9-482e-860b-3fceb024e17f","Type":"ContainerStarted","Data":"5a2ff47766031977821345a43968b7343d250de80907b5e679e4153d78a9080d"} Dec 10 15:45:02 crc kubenswrapper[4755]: I1210 15:45:02.174563 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-625d-account-create-update-mn5ph" event={"ID":"5dbb4b56-2d08-40ba-8cce-70f548573384","Type":"ContainerStarted","Data":"85a8c2542d9d7fbca4ed015719aaaa6699dbdfddea0e82413e78d663fd3c25db"} Dec 10 15:45:02 crc kubenswrapper[4755]: I1210 15:45:02.254072 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29423025-mmtdq"] Dec 10 15:45:02 crc kubenswrapper[4755]: W1210 
15:45:02.273809 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda116975b_8d46_40b6_99e4_134b1558c5d9.slice/crio-20bd7817f91e962399e1c2c667a0a107944f99c3d524e56070388a5f555a65ae WatchSource:0}: Error finding container 20bd7817f91e962399e1c2c667a0a107944f99c3d524e56070388a5f555a65ae: Status 404 returned error can't find the container with id 20bd7817f91e962399e1c2c667a0a107944f99c3d524e56070388a5f555a65ae Dec 10 15:45:02 crc kubenswrapper[4755]: I1210 15:45:02.412239 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 10 15:45:02 crc kubenswrapper[4755]: I1210 15:45:02.570616 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Dec 10 15:45:03 crc kubenswrapper[4755]: I1210 15:45:03.198195 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-a9ee-account-create-update-zctzv" event={"ID":"8002c1bd-43bb-4d3d-b06a-e391505af5b5","Type":"ContainerStarted","Data":"bcac80fbbb26dc41702ab40a8db61bcd1a435948f89ff8f8432d49b80678d6f0"} Dec 10 15:45:03 crc kubenswrapper[4755]: I1210 15:45:03.200393 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29423025-mmtdq" event={"ID":"a116975b-8d46-40b6-99e4-134b1558c5d9","Type":"ContainerStarted","Data":"20bd7817f91e962399e1c2c667a0a107944f99c3d524e56070388a5f555a65ae"} Dec 10 15:45:03 crc kubenswrapper[4755]: I1210 15:45:03.202296 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30e81eb3-8296-4b76-8a90-a76aa64a4656","Type":"ContainerStarted","Data":"c9953c1fd4ef8444efe8dc470b213685cbdb33ca2efaa28955214f5f2bbcbdcd"} Dec 10 15:45:04 crc kubenswrapper[4755]: I1210 15:45:04.212665 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-hdt77" event={"ID":"c7364f60-3c77-4234-9bed-d0e8f92d0bca","Type":"ContainerStarted","Data":"77a4bc7fae8602bacecc29cb7bf5070eaffc1797961e84e560aa1afcef358b4f"} Dec 10 15:45:06 crc kubenswrapper[4755]: I1210 15:45:06.236459 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29423025-mmtdq" event={"ID":"a116975b-8d46-40b6-99e4-134b1558c5d9","Type":"ContainerStarted","Data":"12f329ae012dedd19f8b9e8a92666195fe04129d66f2e6d1935f981a121c909b"} Dec 10 15:45:06 crc kubenswrapper[4755]: I1210 15:45:06.238826 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-64rbl" event={"ID":"3347b9d4-ec43-4f20-a896-4c3f26ecb892","Type":"ContainerStarted","Data":"40ddfbb29e08e538d25bb254cd786fefa0268837e8f54f4706716b65feb6b363"} Dec 10 15:45:06 crc kubenswrapper[4755]: I1210 15:45:06.241139 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-a9ee-account-create-update-zctzv" event={"ID":"8002c1bd-43bb-4d3d-b06a-e391505af5b5","Type":"ContainerStarted","Data":"962f4ba07936ca5ba9c2964c6d1610a2ee6074837e0d8367490d1be7505b1a99"} Dec 10 15:45:06 crc kubenswrapper[4755]: I1210 15:45:06.243006 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-dca9-account-create-update-lw8s5" event={"ID":"833228f4-cb63-4a39-aada-9481b9cdb3e5","Type":"ContainerStarted","Data":"104730d17cfed610864e030290e6b1c29573152fc2d8089806dab1b17ee67e86"} Dec 10 15:45:06 crc kubenswrapper[4755]: I1210 15:45:06.244411 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-db-create-jrbpt" event={"ID":"0ce71806-31d9-482e-860b-3fceb024e17f","Type":"ContainerStarted","Data":"c1f6de90b51c9c2a5bbc79b58a3477229d33b82c58f4df6c9ce192e2135ada2e"} Dec 10 15:45:06 crc kubenswrapper[4755]: I1210 15:45:06.245959 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-625d-account-create-update-mn5ph" event={"ID":"5dbb4b56-2d08-40ba-8cce-70f548573384","Type":"ContainerStarted","Data":"45a0d33acde2d0dbe6f25ad4b92bff5ad31e1c3da9321d4ce3e27a3fa9322692"} Dec 10 15:45:07 crc kubenswrapper[4755]: I1210 15:45:07.282624 4755 generic.go:334] "Generic (PLEG): container finished" podID="a116975b-8d46-40b6-99e4-134b1558c5d9" containerID="12f329ae012dedd19f8b9e8a92666195fe04129d66f2e6d1935f981a121c909b" exitCode=0 Dec 10 15:45:07 crc kubenswrapper[4755]: I1210 15:45:07.282847 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29423025-mmtdq" event={"ID":"a116975b-8d46-40b6-99e4-134b1558c5d9","Type":"ContainerDied","Data":"12f329ae012dedd19f8b9e8a92666195fe04129d66f2e6d1935f981a121c909b"} Dec 10 15:45:07 crc kubenswrapper[4755]: I1210 15:45:07.325589 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-jrbpt" podStartSLOduration=8.32555885 podStartE2EDuration="8.32555885s" podCreationTimestamp="2025-12-10 15:44:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:45:07.305373023 +0000 UTC m=+1303.906256655" watchObservedRunningTime="2025-12-10 15:45:07.32555885 +0000 UTC m=+1303.926442482" Dec 10 15:45:07 crc kubenswrapper[4755]: I1210 15:45:07.352564 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-a9ee-account-create-update-zctzv" podStartSLOduration=7.352540752 podStartE2EDuration="7.352540752s" podCreationTimestamp="2025-12-10 15:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:45:07.328025047 +0000 UTC m=+1303.928908679" watchObservedRunningTime="2025-12-10 15:45:07.352540752 +0000 UTC m=+1303.953424384" Dec 10 15:45:07 crc kubenswrapper[4755]: I1210 15:45:07.422422 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-625d-account-create-update-mn5ph" podStartSLOduration=8.422401108 podStartE2EDuration="8.422401108s" podCreationTimestamp="2025-12-10 15:44:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:45:07.397878754 +0000 UTC m=+1303.998762386" watchObservedRunningTime="2025-12-10 15:45:07.422401108 +0000 UTC m=+1304.023284740" Dec 10 15:45:07 crc kubenswrapper[4755]: I1210 15:45:07.460175 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-dca9-account-create-update-lw8s5" podStartSLOduration=8.460156054 podStartE2EDuration="8.460156054s" podCreationTimestamp="2025-12-10 15:44:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:45:07.420623001 +0000 UTC m=+1304.021506633" watchObservedRunningTime="2025-12-10 15:45:07.460156054 +0000 UTC m=+1304.061039686" Dec 10 15:45:07 crc kubenswrapper[4755]: I1210 15:45:07.494672 4755 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openstack/nova-api-db-create-hdt77" podStartSLOduration=8.49465317 podStartE2EDuration="8.49465317s" podCreationTimestamp="2025-12-10 15:44:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:45:07.453079751 +0000 UTC m=+1304.053963383" watchObservedRunningTime="2025-12-10 15:45:07.49465317 +0000 UTC m=+1304.095536802" Dec 10 15:45:07 crc kubenswrapper[4755]: I1210 15:45:07.520295 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-64rbl" podStartSLOduration=8.520276225 podStartE2EDuration="8.520276225s" podCreationTimestamp="2025-12-10 15:44:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:45:07.488220466 +0000 UTC m=+1304.089104098" watchObservedRunningTime="2025-12-10 15:45:07.520276225 +0000 UTC m=+1304.121159857" Dec 10 15:45:07 crc kubenswrapper[4755]: I1210 15:45:07.940131 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5b8c78b5dc-vl479" Dec 10 15:45:07 crc kubenswrapper[4755]: I1210 15:45:07.940788 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5b8c78b5dc-vl479" Dec 10 15:45:08 crc kubenswrapper[4755]: I1210 15:45:08.311512 4755 generic.go:334] "Generic (PLEG): container finished" podID="3347b9d4-ec43-4f20-a896-4c3f26ecb892" containerID="40ddfbb29e08e538d25bb254cd786fefa0268837e8f54f4706716b65feb6b363" exitCode=0 Dec 10 15:45:08 crc kubenswrapper[4755]: I1210 15:45:08.311677 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-64rbl" event={"ID":"3347b9d4-ec43-4f20-a896-4c3f26ecb892","Type":"ContainerDied","Data":"40ddfbb29e08e538d25bb254cd786fefa0268837e8f54f4706716b65feb6b363"} Dec 10 15:45:08 crc kubenswrapper[4755]: I1210 15:45:08.313860 4755 generic.go:334] "Generic (PLEG): container finished" podID="0ce71806-31d9-482e-860b-3fceb024e17f" containerID="c1f6de90b51c9c2a5bbc79b58a3477229d33b82c58f4df6c9ce192e2135ada2e" exitCode=0 Dec 10 15:45:08 crc kubenswrapper[4755]: I1210 15:45:08.313916 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-jrbpt" event={"ID":"0ce71806-31d9-482e-860b-3fceb024e17f","Type":"ContainerDied","Data":"c1f6de90b51c9c2a5bbc79b58a3477229d33b82c58f4df6c9ce192e2135ada2e"} Dec 10 15:45:08 crc kubenswrapper[4755]: I1210 15:45:08.319353 4755 generic.go:334] "Generic (PLEG): container finished" podID="c7364f60-3c77-4234-9bed-d0e8f92d0bca" containerID="77a4bc7fae8602bacecc29cb7bf5070eaffc1797961e84e560aa1afcef358b4f" exitCode=0 Dec 10 15:45:08 crc kubenswrapper[4755]: I1210 15:45:08.320456 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-hdt77" event={"ID":"c7364f60-3c77-4234-9bed-d0e8f92d0bca","Type":"ContainerDied","Data":"77a4bc7fae8602bacecc29cb7bf5070eaffc1797961e84e560aa1afcef358b4f"} Dec 10 15:45:09 crc kubenswrapper[4755]: I1210 15:45:09.332916 4755 generic.go:334] "Generic (PLEG): container finished" podID="8002c1bd-43bb-4d3d-b06a-e391505af5b5" containerID="962f4ba07936ca5ba9c2964c6d1610a2ee6074837e0d8367490d1be7505b1a99" exitCode=0 Dec 10 15:45:09 crc kubenswrapper[4755]: I1210 15:45:09.333303 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-a9ee-account-create-update-zctzv" 
event={"ID":"8002c1bd-43bb-4d3d-b06a-e391505af5b5","Type":"ContainerDied","Data":"962f4ba07936ca5ba9c2964c6d1610a2ee6074837e0d8367490d1be7505b1a99"} Dec 10 15:45:09 crc kubenswrapper[4755]: I1210 15:45:09.337753 4755 generic.go:334] "Generic (PLEG): container finished" podID="833228f4-cb63-4a39-aada-9481b9cdb3e5" containerID="104730d17cfed610864e030290e6b1c29573152fc2d8089806dab1b17ee67e86" exitCode=0 Dec 10 15:45:09 crc kubenswrapper[4755]: I1210 15:45:09.337809 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-dca9-account-create-update-lw8s5" event={"ID":"833228f4-cb63-4a39-aada-9481b9cdb3e5","Type":"ContainerDied","Data":"104730d17cfed610864e030290e6b1c29573152fc2d8089806dab1b17ee67e86"} Dec 10 15:45:09 crc kubenswrapper[4755]: I1210 15:45:09.339413 4755 generic.go:334] "Generic (PLEG): container finished" podID="5dbb4b56-2d08-40ba-8cce-70f548573384" containerID="45a0d33acde2d0dbe6f25ad4b92bff5ad31e1c3da9321d4ce3e27a3fa9322692" exitCode=0 Dec 10 15:45:09 crc kubenswrapper[4755]: I1210 15:45:09.339565 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-625d-account-create-update-mn5ph" event={"ID":"5dbb4b56-2d08-40ba-8cce-70f548573384","Type":"ContainerDied","Data":"45a0d33acde2d0dbe6f25ad4b92bff5ad31e1c3da9321d4ce3e27a3fa9322692"} Dec 10 15:45:14 crc kubenswrapper[4755]: E1210 15:45:14.597750 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified" Dec 10 15:45:14 crc kubenswrapper[4755]: E1210 15:45:14.598587 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openstackclient,Image:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,Command:[/bin/sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nd6h58ch556h588h66bhb7hddh58ch5d9hbbh65dh7fh8bh5c6h699h59ch595h556h55hdbh5bch65dh5c6h5c5h5c4h68fh5h677h569h88h5f5h5fq,ValueFrom:nil,},EnvVar{Name:OS_CLOUD,Value:default,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_CA_CERT,Value:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_HOST,Value:metric-storage-prometheus.openstack.svc,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_PORT,Value:9090,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openstack-config,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/cloudrc,SubPath:cloudrc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ss5sh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,S
ecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42401,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42401,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstackclient_openstack(a9d4b4ae-84b2-4971-aafc-6f5bdad0b69d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 15:45:14 crc kubenswrapper[4755]: E1210 15:45:14.600154 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstackclient" podUID="a9d4b4ae-84b2-4971-aafc-6f5bdad0b69d" Dec 10 15:45:14 crc kubenswrapper[4755]: I1210 15:45:14.779245 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-hdt77" Dec 10 15:45:14 crc kubenswrapper[4755]: I1210 15:45:14.787906 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-64rbl" Dec 10 15:45:14 crc kubenswrapper[4755]: I1210 15:45:14.844965 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29423025-mmtdq" Dec 10 15:45:14 crc kubenswrapper[4755]: I1210 15:45:14.852611 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-625d-account-create-update-mn5ph" Dec 10 15:45:14 crc kubenswrapper[4755]: I1210 15:45:14.854293 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-dca9-account-create-update-lw8s5" Dec 10 15:45:14 crc kubenswrapper[4755]: I1210 15:45:14.879883 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-jrbpt" Dec 10 15:45:14 crc kubenswrapper[4755]: I1210 15:45:14.883144 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-a9ee-account-create-update-zctzv" Dec 10 15:45:14 crc kubenswrapper[4755]: I1210 15:45:14.927357 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbbl9\" (UniqueName: \"kubernetes.io/projected/a116975b-8d46-40b6-99e4-134b1558c5d9-kube-api-access-wbbl9\") pod \"a116975b-8d46-40b6-99e4-134b1558c5d9\" (UID: \"a116975b-8d46-40b6-99e4-134b1558c5d9\") " Dec 10 15:45:14 crc kubenswrapper[4755]: I1210 15:45:14.927459 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhm8p\" (UniqueName: \"kubernetes.io/projected/c7364f60-3c77-4234-9bed-d0e8f92d0bca-kube-api-access-hhm8p\") pod \"c7364f60-3c77-4234-9bed-d0e8f92d0bca\" (UID: \"c7364f60-3c77-4234-9bed-d0e8f92d0bca\") " Dec 10 15:45:14 crc kubenswrapper[4755]: I1210 15:45:14.927509 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7364f60-3c77-4234-9bed-d0e8f92d0bca-operator-scripts\") pod \"c7364f60-3c77-4234-9bed-d0e8f92d0bca\" (UID: \"c7364f60-3c77-4234-9bed-d0e8f92d0bca\") " Dec 10 15:45:14 crc kubenswrapper[4755]: I1210 15:45:14.927533 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hht45\" (UniqueName: \"kubernetes.io/projected/3347b9d4-ec43-4f20-a896-4c3f26ecb892-kube-api-access-hht45\") pod \"3347b9d4-ec43-4f20-a896-4c3f26ecb892\" (UID: \"3347b9d4-ec43-4f20-a896-4c3f26ecb892\") " Dec 10 15:45:14 crc kubenswrapper[4755]: I1210 15:45:14.927563 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8twv\" (UniqueName: \"kubernetes.io/projected/5dbb4b56-2d08-40ba-8cce-70f548573384-kube-api-access-p8twv\") pod \"5dbb4b56-2d08-40ba-8cce-70f548573384\" (UID: \"5dbb4b56-2d08-40ba-8cce-70f548573384\") " Dec 10 15:45:14 crc kubenswrapper[4755]: I1210 15:45:14.927582 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3347b9d4-ec43-4f20-a896-4c3f26ecb892-operator-scripts\") pod \"3347b9d4-ec43-4f20-a896-4c3f26ecb892\" (UID: \"3347b9d4-ec43-4f20-a896-4c3f26ecb892\") " Dec 10 15:45:14 crc kubenswrapper[4755]: I1210 15:45:14.927620 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdprk\" (UniqueName: \"kubernetes.io/projected/833228f4-cb63-4a39-aada-9481b9cdb3e5-kube-api-access-kdprk\") pod \"833228f4-cb63-4a39-aada-9481b9cdb3e5\" (UID: \"833228f4-cb63-4a39-aada-9481b9cdb3e5\") " Dec 10 15:45:14 crc kubenswrapper[4755]: I1210 15:45:14.927786 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a116975b-8d46-40b6-99e4-134b1558c5d9-secret-volume\") pod \"a116975b-8d46-40b6-99e4-134b1558c5d9\" (UID: \"a116975b-8d46-40b6-99e4-134b1558c5d9\") " Dec 10 15:45:14 crc kubenswrapper[4755]: I1210 15:45:14.927804 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/833228f4-cb63-4a39-aada-9481b9cdb3e5-operator-scripts\") pod \"833228f4-cb63-4a39-aada-9481b9cdb3e5\" (UID: \"833228f4-cb63-4a39-aada-9481b9cdb3e5\") " Dec 10 15:45:14 crc kubenswrapper[4755]: I1210 15:45:14.927834 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/5dbb4b56-2d08-40ba-8cce-70f548573384-operator-scripts\") pod \"5dbb4b56-2d08-40ba-8cce-70f548573384\" (UID: \"5dbb4b56-2d08-40ba-8cce-70f548573384\") " Dec 10 15:45:14 crc kubenswrapper[4755]: I1210 15:45:14.927869 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a116975b-8d46-40b6-99e4-134b1558c5d9-config-volume\") pod \"a116975b-8d46-40b6-99e4-134b1558c5d9\" (UID: \"a116975b-8d46-40b6-99e4-134b1558c5d9\") " Dec 10 15:45:14 crc kubenswrapper[4755]: I1210 15:45:14.933271 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a116975b-8d46-40b6-99e4-134b1558c5d9-config-volume" (OuterVolumeSpecName: "config-volume") pod "a116975b-8d46-40b6-99e4-134b1558c5d9" (UID: "a116975b-8d46-40b6-99e4-134b1558c5d9"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:45:14 crc kubenswrapper[4755]: I1210 15:45:14.933710 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7364f60-3c77-4234-9bed-d0e8f92d0bca-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c7364f60-3c77-4234-9bed-d0e8f92d0bca" (UID: "c7364f60-3c77-4234-9bed-d0e8f92d0bca"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:45:14 crc kubenswrapper[4755]: I1210 15:45:14.935335 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/833228f4-cb63-4a39-aada-9481b9cdb3e5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "833228f4-cb63-4a39-aada-9481b9cdb3e5" (UID: "833228f4-cb63-4a39-aada-9481b9cdb3e5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:45:14 crc kubenswrapper[4755]: I1210 15:45:14.935725 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5dbb4b56-2d08-40ba-8cce-70f548573384-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5dbb4b56-2d08-40ba-8cce-70f548573384" (UID: "5dbb4b56-2d08-40ba-8cce-70f548573384"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:45:14 crc kubenswrapper[4755]: I1210 15:45:14.936053 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3347b9d4-ec43-4f20-a896-4c3f26ecb892-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3347b9d4-ec43-4f20-a896-4c3f26ecb892" (UID: "3347b9d4-ec43-4f20-a896-4c3f26ecb892"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:45:14 crc kubenswrapper[4755]: I1210 15:45:14.960697 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7364f60-3c77-4234-9bed-d0e8f92d0bca-kube-api-access-hhm8p" (OuterVolumeSpecName: "kube-api-access-hhm8p") pod "c7364f60-3c77-4234-9bed-d0e8f92d0bca" (UID: "c7364f60-3c77-4234-9bed-d0e8f92d0bca"). InnerVolumeSpecName "kube-api-access-hhm8p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:45:14 crc kubenswrapper[4755]: I1210 15:45:14.968915 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a116975b-8d46-40b6-99e4-134b1558c5d9-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a116975b-8d46-40b6-99e4-134b1558c5d9" (UID: "a116975b-8d46-40b6-99e4-134b1558c5d9"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:45:14 crc kubenswrapper[4755]: I1210 15:45:14.970818 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5dbb4b56-2d08-40ba-8cce-70f548573384-kube-api-access-p8twv" (OuterVolumeSpecName: "kube-api-access-p8twv") pod "5dbb4b56-2d08-40ba-8cce-70f548573384" (UID: "5dbb4b56-2d08-40ba-8cce-70f548573384"). InnerVolumeSpecName "kube-api-access-p8twv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:45:14 crc kubenswrapper[4755]: I1210 15:45:14.971693 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a116975b-8d46-40b6-99e4-134b1558c5d9-kube-api-access-wbbl9" (OuterVolumeSpecName: "kube-api-access-wbbl9") pod "a116975b-8d46-40b6-99e4-134b1558c5d9" (UID: "a116975b-8d46-40b6-99e4-134b1558c5d9"). InnerVolumeSpecName "kube-api-access-wbbl9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:45:14 crc kubenswrapper[4755]: I1210 15:45:14.980587 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3347b9d4-ec43-4f20-a896-4c3f26ecb892-kube-api-access-hht45" (OuterVolumeSpecName: "kube-api-access-hht45") pod "3347b9d4-ec43-4f20-a896-4c3f26ecb892" (UID: "3347b9d4-ec43-4f20-a896-4c3f26ecb892"). InnerVolumeSpecName "kube-api-access-hht45". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:45:15 crc kubenswrapper[4755]: I1210 15:45:15.006676 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/833228f4-cb63-4a39-aada-9481b9cdb3e5-kube-api-access-kdprk" (OuterVolumeSpecName: "kube-api-access-kdprk") pod "833228f4-cb63-4a39-aada-9481b9cdb3e5" (UID: "833228f4-cb63-4a39-aada-9481b9cdb3e5"). InnerVolumeSpecName "kube-api-access-kdprk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:45:15 crc kubenswrapper[4755]: I1210 15:45:15.032503 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ce71806-31d9-482e-860b-3fceb024e17f-operator-scripts\") pod \"0ce71806-31d9-482e-860b-3fceb024e17f\" (UID: \"0ce71806-31d9-482e-860b-3fceb024e17f\") " Dec 10 15:45:15 crc kubenswrapper[4755]: I1210 15:45:15.032793 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvj86\" (UniqueName: \"kubernetes.io/projected/0ce71806-31d9-482e-860b-3fceb024e17f-kube-api-access-xvj86\") pod \"0ce71806-31d9-482e-860b-3fceb024e17f\" (UID: \"0ce71806-31d9-482e-860b-3fceb024e17f\") " Dec 10 15:45:15 crc kubenswrapper[4755]: I1210 15:45:15.032870 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgc95\" (UniqueName: \"kubernetes.io/projected/8002c1bd-43bb-4d3d-b06a-e391505af5b5-kube-api-access-lgc95\") pod \"8002c1bd-43bb-4d3d-b06a-e391505af5b5\" (UID: \"8002c1bd-43bb-4d3d-b06a-e391505af5b5\") " Dec 10 15:45:15 crc kubenswrapper[4755]: I1210 15:45:15.032962 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8002c1bd-43bb-4d3d-b06a-e391505af5b5-operator-scripts\") pod \"8002c1bd-43bb-4d3d-b06a-e391505af5b5\" (UID: \"8002c1bd-43bb-4d3d-b06a-e391505af5b5\") " Dec 10 15:45:15 crc kubenswrapper[4755]: I1210 15:45:15.033406 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ce71806-31d9-482e-860b-3fceb024e17f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0ce71806-31d9-482e-860b-3fceb024e17f" (UID: "0ce71806-31d9-482e-860b-3fceb024e17f"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:45:15 crc kubenswrapper[4755]: I1210 15:45:15.033570 4755 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ce71806-31d9-482e-860b-3fceb024e17f-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 15:45:15 crc kubenswrapper[4755]: I1210 15:45:15.033614 4755 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a116975b-8d46-40b6-99e4-134b1558c5d9-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 10 15:45:15 crc kubenswrapper[4755]: I1210 15:45:15.033623 4755 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/833228f4-cb63-4a39-aada-9481b9cdb3e5-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 15:45:15 crc kubenswrapper[4755]: I1210 15:45:15.033634 4755 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5dbb4b56-2d08-40ba-8cce-70f548573384-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 15:45:15 crc kubenswrapper[4755]: I1210 15:45:15.033642 4755 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a116975b-8d46-40b6-99e4-134b1558c5d9-config-volume\") on node \"crc\" DevicePath \"\"" Dec 10 15:45:15 crc kubenswrapper[4755]: I1210 15:45:15.033651 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wbbl9\" (UniqueName: \"kubernetes.io/projected/a116975b-8d46-40b6-99e4-134b1558c5d9-kube-api-access-wbbl9\") on node \"crc\" DevicePath \"\"" Dec 10 15:45:15 crc kubenswrapper[4755]: I1210 15:45:15.033659 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hhm8p\" (UniqueName: \"kubernetes.io/projected/c7364f60-3c77-4234-9bed-d0e8f92d0bca-kube-api-access-hhm8p\") on node \"crc\" DevicePath \"\"" Dec 10 15:45:15 crc kubenswrapper[4755]: I1210 15:45:15.033669 4755 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7364f60-3c77-4234-9bed-d0e8f92d0bca-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 15:45:15 crc kubenswrapper[4755]: I1210 15:45:15.033677 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hht45\" (UniqueName: \"kubernetes.io/projected/3347b9d4-ec43-4f20-a896-4c3f26ecb892-kube-api-access-hht45\") on node \"crc\" DevicePath \"\"" Dec 10 15:45:15 crc kubenswrapper[4755]: I1210 15:45:15.033686 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8twv\" (UniqueName: \"kubernetes.io/projected/5dbb4b56-2d08-40ba-8cce-70f548573384-kube-api-access-p8twv\") on node \"crc\" DevicePath \"\"" Dec 10 15:45:15 crc kubenswrapper[4755]: I1210 15:45:15.033694 4755 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3347b9d4-ec43-4f20-a896-4c3f26ecb892-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 15:45:15 crc kubenswrapper[4755]: I1210 15:45:15.033702 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kdprk\" (UniqueName: \"kubernetes.io/projected/833228f4-cb63-4a39-aada-9481b9cdb3e5-kube-api-access-kdprk\") on node \"crc\" DevicePath \"\"" Dec 10 15:45:15 crc kubenswrapper[4755]: I1210 15:45:15.033811 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/8002c1bd-43bb-4d3d-b06a-e391505af5b5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8002c1bd-43bb-4d3d-b06a-e391505af5b5" (UID: "8002c1bd-43bb-4d3d-b06a-e391505af5b5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:45:15 crc kubenswrapper[4755]: I1210 15:45:15.040804 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ce71806-31d9-482e-860b-3fceb024e17f-kube-api-access-xvj86" (OuterVolumeSpecName: "kube-api-access-xvj86") pod "0ce71806-31d9-482e-860b-3fceb024e17f" (UID: "0ce71806-31d9-482e-860b-3fceb024e17f"). InnerVolumeSpecName "kube-api-access-xvj86". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:45:15 crc kubenswrapper[4755]: I1210 15:45:15.043726 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8002c1bd-43bb-4d3d-b06a-e391505af5b5-kube-api-access-lgc95" (OuterVolumeSpecName: "kube-api-access-lgc95") pod "8002c1bd-43bb-4d3d-b06a-e391505af5b5" (UID: "8002c1bd-43bb-4d3d-b06a-e391505af5b5"). InnerVolumeSpecName "kube-api-access-lgc95". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:45:15 crc kubenswrapper[4755]: I1210 15:45:15.136193 4755 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8002c1bd-43bb-4d3d-b06a-e391505af5b5-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 15:45:15 crc kubenswrapper[4755]: I1210 15:45:15.136241 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvj86\" (UniqueName: \"kubernetes.io/projected/0ce71806-31d9-482e-860b-3fceb024e17f-kube-api-access-xvj86\") on node \"crc\" DevicePath \"\"" Dec 10 15:45:15 crc kubenswrapper[4755]: I1210 15:45:15.136254 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lgc95\" (UniqueName: \"kubernetes.io/projected/8002c1bd-43bb-4d3d-b06a-e391505af5b5-kube-api-access-lgc95\") on node \"crc\" DevicePath \"\"" Dec 10 15:45:15 crc kubenswrapper[4755]: I1210 15:45:15.416890 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-625d-account-create-update-mn5ph" event={"ID":"5dbb4b56-2d08-40ba-8cce-70f548573384","Type":"ContainerDied","Data":"85a8c2542d9d7fbca4ed015719aaaa6699dbdfddea0e82413e78d663fd3c25db"} Dec 10 15:45:15 crc kubenswrapper[4755]: I1210 15:45:15.416935 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85a8c2542d9d7fbca4ed015719aaaa6699dbdfddea0e82413e78d663fd3c25db" Dec 10 15:45:15 crc kubenswrapper[4755]: I1210 15:45:15.417002 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-625d-account-create-update-mn5ph" Dec 10 15:45:15 crc kubenswrapper[4755]: I1210 15:45:15.427846 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30e81eb3-8296-4b76-8a90-a76aa64a4656","Type":"ContainerStarted","Data":"73e24f9baadda3917ae541f0b88f2e4e47c402f183df236a7a4db4904d1eb46e"} Dec 10 15:45:15 crc kubenswrapper[4755]: I1210 15:45:15.429059 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-hdt77" event={"ID":"c7364f60-3c77-4234-9bed-d0e8f92d0bca","Type":"ContainerDied","Data":"de585e3e9a268a166a4f4f255254cac44d96a00b4553c3cfefd6bb7821f1edea"} Dec 10 15:45:15 crc kubenswrapper[4755]: I1210 15:45:15.429082 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de585e3e9a268a166a4f4f255254cac44d96a00b4553c3cfefd6bb7821f1edea" Dec 10 15:45:15 crc kubenswrapper[4755]: I1210 15:45:15.429129 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-hdt77" Dec 10 15:45:15 crc kubenswrapper[4755]: I1210 15:45:15.431967 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29423025-mmtdq" event={"ID":"a116975b-8d46-40b6-99e4-134b1558c5d9","Type":"ContainerDied","Data":"20bd7817f91e962399e1c2c667a0a107944f99c3d524e56070388a5f555a65ae"} Dec 10 15:45:15 crc kubenswrapper[4755]: I1210 15:45:15.432010 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20bd7817f91e962399e1c2c667a0a107944f99c3d524e56070388a5f555a65ae" Dec 10 15:45:15 crc kubenswrapper[4755]: I1210 15:45:15.432071 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29423025-mmtdq" Dec 10 15:45:15 crc kubenswrapper[4755]: I1210 15:45:15.434156 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-64rbl" event={"ID":"3347b9d4-ec43-4f20-a896-4c3f26ecb892","Type":"ContainerDied","Data":"4b20b76b69225c15e021ae69e307aa672ea66a0c0e25d95ed96759763a724c76"} Dec 10 15:45:15 crc kubenswrapper[4755]: I1210 15:45:15.434187 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b20b76b69225c15e021ae69e307aa672ea66a0c0e25d95ed96759763a724c76" Dec 10 15:45:15 crc kubenswrapper[4755]: I1210 15:45:15.434241 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-64rbl" Dec 10 15:45:15 crc kubenswrapper[4755]: I1210 15:45:15.436868 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-a9ee-account-create-update-zctzv" event={"ID":"8002c1bd-43bb-4d3d-b06a-e391505af5b5","Type":"ContainerDied","Data":"bcac80fbbb26dc41702ab40a8db61bcd1a435948f89ff8f8432d49b80678d6f0"} Dec 10 15:45:15 crc kubenswrapper[4755]: I1210 15:45:15.436931 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bcac80fbbb26dc41702ab40a8db61bcd1a435948f89ff8f8432d49b80678d6f0" Dec 10 15:45:15 crc kubenswrapper[4755]: I1210 15:45:15.436886 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-a9ee-account-create-update-zctzv" Dec 10 15:45:15 crc kubenswrapper[4755]: I1210 15:45:15.439661 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-dca9-account-create-update-lw8s5" event={"ID":"833228f4-cb63-4a39-aada-9481b9cdb3e5","Type":"ContainerDied","Data":"358f81e2e5e45893917351a1cc5b35c5a5494794e763e18f44d142eb91dfe98c"} Dec 10 15:45:15 crc kubenswrapper[4755]: I1210 15:45:15.439700 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="358f81e2e5e45893917351a1cc5b35c5a5494794e763e18f44d142eb91dfe98c" Dec 10 15:45:15 crc kubenswrapper[4755]: I1210 15:45:15.439724 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-dca9-account-create-update-lw8s5" Dec 10 15:45:15 crc kubenswrapper[4755]: I1210 15:45:15.442721 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-jrbpt" Dec 10 15:45:15 crc kubenswrapper[4755]: I1210 15:45:15.443268 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-jrbpt" event={"ID":"0ce71806-31d9-482e-860b-3fceb024e17f","Type":"ContainerDied","Data":"5a2ff47766031977821345a43968b7343d250de80907b5e679e4153d78a9080d"} Dec 10 15:45:15 crc kubenswrapper[4755]: I1210 15:45:15.443316 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a2ff47766031977821345a43968b7343d250de80907b5e679e4153d78a9080d" Dec 10 15:45:15 crc kubenswrapper[4755]: E1210 15:45:15.446453 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified\\\"\"" pod="openstack/openstackclient" podUID="a9d4b4ae-84b2-4971-aafc-6f5bdad0b69d" Dec 10 15:45:16 crc kubenswrapper[4755]: I1210 15:45:16.452303 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30e81eb3-8296-4b76-8a90-a76aa64a4656","Type":"ContainerStarted","Data":"079c62377d13ed19579a575b7fe47dc7c1edbbe270c1d3bf0bf28003c888725b"} Dec 10 15:45:17 crc kubenswrapper[4755]: I1210 15:45:17.464115 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30e81eb3-8296-4b76-8a90-a76aa64a4656","Type":"ContainerStarted","Data":"9f6e1693a0a2b9c922ce1f3d342e28115f0604e388f6fc83bbf8afa112cccd87"} Dec 10 15:45:19 crc kubenswrapper[4755]: I1210 15:45:19.489957 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30e81eb3-8296-4b76-8a90-a76aa64a4656","Type":"ContainerStarted","Data":"7631807abf1a3119ba32bbabd4cc51a3c1a254f3a80cb42d33c734b144557480"} Dec 10 15:45:19 crc kubenswrapper[4755]: I1210 15:45:19.490309 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 10 15:45:19 crc kubenswrapper[4755]: I1210 15:45:19.515852 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.766093041 podStartE2EDuration="18.515837035s" podCreationTimestamp="2025-12-10 15:45:01 +0000 UTC" firstStartedPulling="2025-12-10 15:45:02.478296376 +0000 UTC m=+1299.079180008" lastFinishedPulling="2025-12-10 15:45:18.22804037 +0000 UTC m=+1314.828924002" observedRunningTime="2025-12-10 15:45:19.511592711 +0000 UTC m=+1316.112476363" 
watchObservedRunningTime="2025-12-10 15:45:19.515837035 +0000 UTC m=+1316.116720667" Dec 10 15:45:20 crc kubenswrapper[4755]: I1210 15:45:20.219844 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-f9wl6"] Dec 10 15:45:20 crc kubenswrapper[4755]: E1210 15:45:20.220830 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a116975b-8d46-40b6-99e4-134b1558c5d9" containerName="collect-profiles" Dec 10 15:45:20 crc kubenswrapper[4755]: I1210 15:45:20.220865 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="a116975b-8d46-40b6-99e4-134b1558c5d9" containerName="collect-profiles" Dec 10 15:45:20 crc kubenswrapper[4755]: E1210 15:45:20.220881 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8002c1bd-43bb-4d3d-b06a-e391505af5b5" containerName="mariadb-account-create-update" Dec 10 15:45:20 crc kubenswrapper[4755]: I1210 15:45:20.220888 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="8002c1bd-43bb-4d3d-b06a-e391505af5b5" containerName="mariadb-account-create-update" Dec 10 15:45:20 crc kubenswrapper[4755]: E1210 15:45:20.220898 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dbb4b56-2d08-40ba-8cce-70f548573384" containerName="mariadb-account-create-update" Dec 10 15:45:20 crc kubenswrapper[4755]: I1210 15:45:20.220906 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dbb4b56-2d08-40ba-8cce-70f548573384" containerName="mariadb-account-create-update" Dec 10 15:45:20 crc kubenswrapper[4755]: E1210 15:45:20.220925 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="833228f4-cb63-4a39-aada-9481b9cdb3e5" containerName="mariadb-account-create-update" Dec 10 15:45:20 crc kubenswrapper[4755]: I1210 15:45:20.220933 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="833228f4-cb63-4a39-aada-9481b9cdb3e5" containerName="mariadb-account-create-update" Dec 10 15:45:20 crc kubenswrapper[4755]: E1210 15:45:20.220949 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3347b9d4-ec43-4f20-a896-4c3f26ecb892" containerName="mariadb-database-create" Dec 10 15:45:20 crc kubenswrapper[4755]: I1210 15:45:20.220957 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="3347b9d4-ec43-4f20-a896-4c3f26ecb892" containerName="mariadb-database-create" Dec 10 15:45:20 crc kubenswrapper[4755]: E1210 15:45:20.220977 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ce71806-31d9-482e-860b-3fceb024e17f" containerName="mariadb-database-create" Dec 10 15:45:20 crc kubenswrapper[4755]: I1210 15:45:20.220984 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ce71806-31d9-482e-860b-3fceb024e17f" containerName="mariadb-database-create" Dec 10 15:45:20 crc kubenswrapper[4755]: E1210 15:45:20.221005 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7364f60-3c77-4234-9bed-d0e8f92d0bca" containerName="mariadb-database-create" Dec 10 15:45:20 crc kubenswrapper[4755]: I1210 15:45:20.221015 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7364f60-3c77-4234-9bed-d0e8f92d0bca" containerName="mariadb-database-create" Dec 10 15:45:20 crc kubenswrapper[4755]: I1210 15:45:20.221240 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7364f60-3c77-4234-9bed-d0e8f92d0bca" containerName="mariadb-database-create" Dec 10 15:45:20 crc kubenswrapper[4755]: I1210 15:45:20.221261 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="833228f4-cb63-4a39-aada-9481b9cdb3e5" 
containerName="mariadb-account-create-update" Dec 10 15:45:20 crc kubenswrapper[4755]: I1210 15:45:20.221277 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="3347b9d4-ec43-4f20-a896-4c3f26ecb892" containerName="mariadb-database-create" Dec 10 15:45:20 crc kubenswrapper[4755]: I1210 15:45:20.221289 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="a116975b-8d46-40b6-99e4-134b1558c5d9" containerName="collect-profiles" Dec 10 15:45:20 crc kubenswrapper[4755]: I1210 15:45:20.221305 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ce71806-31d9-482e-860b-3fceb024e17f" containerName="mariadb-database-create" Dec 10 15:45:20 crc kubenswrapper[4755]: I1210 15:45:20.221315 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="8002c1bd-43bb-4d3d-b06a-e391505af5b5" containerName="mariadb-account-create-update" Dec 10 15:45:20 crc kubenswrapper[4755]: I1210 15:45:20.221327 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="5dbb4b56-2d08-40ba-8cce-70f548573384" containerName="mariadb-account-create-update" Dec 10 15:45:20 crc kubenswrapper[4755]: I1210 15:45:20.222486 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-f9wl6" Dec 10 15:45:20 crc kubenswrapper[4755]: I1210 15:45:20.225604 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Dec 10 15:45:20 crc kubenswrapper[4755]: I1210 15:45:20.225868 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 10 15:45:20 crc kubenswrapper[4755]: I1210 15:45:20.227626 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-xpgvv" Dec 10 15:45:20 crc kubenswrapper[4755]: I1210 15:45:20.234376 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-f9wl6"] Dec 10 15:45:20 crc kubenswrapper[4755]: I1210 15:45:20.361965 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/609b4b0b-1c46-4b66-bfd5-d42a91e325c4-scripts\") pod \"nova-cell0-conductor-db-sync-f9wl6\" (UID: \"609b4b0b-1c46-4b66-bfd5-d42a91e325c4\") " pod="openstack/nova-cell0-conductor-db-sync-f9wl6" Dec 10 15:45:20 crc kubenswrapper[4755]: I1210 15:45:20.362079 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lj5mj\" (UniqueName: \"kubernetes.io/projected/609b4b0b-1c46-4b66-bfd5-d42a91e325c4-kube-api-access-lj5mj\") pod \"nova-cell0-conductor-db-sync-f9wl6\" (UID: \"609b4b0b-1c46-4b66-bfd5-d42a91e325c4\") " pod="openstack/nova-cell0-conductor-db-sync-f9wl6" Dec 10 15:45:20 crc kubenswrapper[4755]: I1210 15:45:20.362115 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/609b4b0b-1c46-4b66-bfd5-d42a91e325c4-config-data\") pod \"nova-cell0-conductor-db-sync-f9wl6\" (UID: \"609b4b0b-1c46-4b66-bfd5-d42a91e325c4\") " pod="openstack/nova-cell0-conductor-db-sync-f9wl6" Dec 10 15:45:20 crc kubenswrapper[4755]: I1210 15:45:20.362160 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/609b4b0b-1c46-4b66-bfd5-d42a91e325c4-combined-ca-bundle\") pod 
\"nova-cell0-conductor-db-sync-f9wl6\" (UID: \"609b4b0b-1c46-4b66-bfd5-d42a91e325c4\") " pod="openstack/nova-cell0-conductor-db-sync-f9wl6" Dec 10 15:45:20 crc kubenswrapper[4755]: I1210 15:45:20.464639 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/609b4b0b-1c46-4b66-bfd5-d42a91e325c4-scripts\") pod \"nova-cell0-conductor-db-sync-f9wl6\" (UID: \"609b4b0b-1c46-4b66-bfd5-d42a91e325c4\") " pod="openstack/nova-cell0-conductor-db-sync-f9wl6" Dec 10 15:45:20 crc kubenswrapper[4755]: I1210 15:45:20.464731 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lj5mj\" (UniqueName: \"kubernetes.io/projected/609b4b0b-1c46-4b66-bfd5-d42a91e325c4-kube-api-access-lj5mj\") pod \"nova-cell0-conductor-db-sync-f9wl6\" (UID: \"609b4b0b-1c46-4b66-bfd5-d42a91e325c4\") " pod="openstack/nova-cell0-conductor-db-sync-f9wl6" Dec 10 15:45:20 crc kubenswrapper[4755]: I1210 15:45:20.464768 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/609b4b0b-1c46-4b66-bfd5-d42a91e325c4-config-data\") pod \"nova-cell0-conductor-db-sync-f9wl6\" (UID: \"609b4b0b-1c46-4b66-bfd5-d42a91e325c4\") " pod="openstack/nova-cell0-conductor-db-sync-f9wl6" Dec 10 15:45:20 crc kubenswrapper[4755]: I1210 15:45:20.464813 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/609b4b0b-1c46-4b66-bfd5-d42a91e325c4-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-f9wl6\" (UID: \"609b4b0b-1c46-4b66-bfd5-d42a91e325c4\") " pod="openstack/nova-cell0-conductor-db-sync-f9wl6" Dec 10 15:45:20 crc kubenswrapper[4755]: I1210 15:45:20.470197 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/609b4b0b-1c46-4b66-bfd5-d42a91e325c4-config-data\") pod \"nova-cell0-conductor-db-sync-f9wl6\" (UID: \"609b4b0b-1c46-4b66-bfd5-d42a91e325c4\") " pod="openstack/nova-cell0-conductor-db-sync-f9wl6" Dec 10 15:45:20 crc kubenswrapper[4755]: I1210 15:45:20.470645 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/609b4b0b-1c46-4b66-bfd5-d42a91e325c4-scripts\") pod \"nova-cell0-conductor-db-sync-f9wl6\" (UID: \"609b4b0b-1c46-4b66-bfd5-d42a91e325c4\") " pod="openstack/nova-cell0-conductor-db-sync-f9wl6" Dec 10 15:45:20 crc kubenswrapper[4755]: I1210 15:45:20.472061 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/609b4b0b-1c46-4b66-bfd5-d42a91e325c4-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-f9wl6\" (UID: \"609b4b0b-1c46-4b66-bfd5-d42a91e325c4\") " pod="openstack/nova-cell0-conductor-db-sync-f9wl6" Dec 10 15:45:20 crc kubenswrapper[4755]: I1210 15:45:20.489145 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lj5mj\" (UniqueName: \"kubernetes.io/projected/609b4b0b-1c46-4b66-bfd5-d42a91e325c4-kube-api-access-lj5mj\") pod \"nova-cell0-conductor-db-sync-f9wl6\" (UID: \"609b4b0b-1c46-4b66-bfd5-d42a91e325c4\") " pod="openstack/nova-cell0-conductor-db-sync-f9wl6" Dec 10 15:45:20 crc kubenswrapper[4755]: I1210 15:45:20.550662 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-f9wl6" Dec 10 15:45:21 crc kubenswrapper[4755]: I1210 15:45:21.083839 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-f9wl6"] Dec 10 15:45:21 crc kubenswrapper[4755]: I1210 15:45:21.523824 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-f9wl6" event={"ID":"609b4b0b-1c46-4b66-bfd5-d42a91e325c4","Type":"ContainerStarted","Data":"71a246bbb4a6f4adbce9c5fda108267b45b27a7adab880165da8d56c59258bdc"} Dec 10 15:45:25 crc kubenswrapper[4755]: I1210 15:45:25.943090 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 10 15:45:25 crc kubenswrapper[4755]: I1210 15:45:25.944586 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="30e81eb3-8296-4b76-8a90-a76aa64a4656" containerName="ceilometer-central-agent" containerID="cri-o://73e24f9baadda3917ae541f0b88f2e4e47c402f183df236a7a4db4904d1eb46e" gracePeriod=30 Dec 10 15:45:25 crc kubenswrapper[4755]: I1210 15:45:25.944940 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="30e81eb3-8296-4b76-8a90-a76aa64a4656" containerName="ceilometer-notification-agent" containerID="cri-o://079c62377d13ed19579a575b7fe47dc7c1edbbe270c1d3bf0bf28003c888725b" gracePeriod=30 Dec 10 15:45:25 crc kubenswrapper[4755]: I1210 15:45:25.944935 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="30e81eb3-8296-4b76-8a90-a76aa64a4656" containerName="proxy-httpd" containerID="cri-o://7631807abf1a3119ba32bbabd4cc51a3c1a254f3a80cb42d33c734b144557480" gracePeriod=30 Dec 10 15:45:25 crc kubenswrapper[4755]: I1210 15:45:25.944958 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="30e81eb3-8296-4b76-8a90-a76aa64a4656" containerName="sg-core" containerID="cri-o://9f6e1693a0a2b9c922ce1f3d342e28115f0604e388f6fc83bbf8afa112cccd87" gracePeriod=30 Dec 10 15:45:26 crc kubenswrapper[4755]: E1210 15:45:26.346608 4755 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod30e81eb3_8296_4b76_8a90_a76aa64a4656.slice/crio-conmon-73e24f9baadda3917ae541f0b88f2e4e47c402f183df236a7a4db4904d1eb46e.scope\": RecentStats: unable to find data in memory cache]" Dec 10 15:45:26 crc kubenswrapper[4755]: I1210 15:45:26.593507 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30e81eb3-8296-4b76-8a90-a76aa64a4656","Type":"ContainerDied","Data":"7631807abf1a3119ba32bbabd4cc51a3c1a254f3a80cb42d33c734b144557480"} Dec 10 15:45:26 crc kubenswrapper[4755]: I1210 15:45:26.593523 4755 generic.go:334] "Generic (PLEG): container finished" podID="30e81eb3-8296-4b76-8a90-a76aa64a4656" containerID="7631807abf1a3119ba32bbabd4cc51a3c1a254f3a80cb42d33c734b144557480" exitCode=0 Dec 10 15:45:26 crc kubenswrapper[4755]: I1210 15:45:26.593591 4755 generic.go:334] "Generic (PLEG): container finished" podID="30e81eb3-8296-4b76-8a90-a76aa64a4656" containerID="9f6e1693a0a2b9c922ce1f3d342e28115f0604e388f6fc83bbf8afa112cccd87" exitCode=2 Dec 10 15:45:26 crc kubenswrapper[4755]: I1210 15:45:26.593603 4755 generic.go:334] "Generic (PLEG): container finished" podID="30e81eb3-8296-4b76-8a90-a76aa64a4656" 
containerID="079c62377d13ed19579a575b7fe47dc7c1edbbe270c1d3bf0bf28003c888725b" exitCode=0 Dec 10 15:45:26 crc kubenswrapper[4755]: I1210 15:45:26.593613 4755 generic.go:334] "Generic (PLEG): container finished" podID="30e81eb3-8296-4b76-8a90-a76aa64a4656" containerID="73e24f9baadda3917ae541f0b88f2e4e47c402f183df236a7a4db4904d1eb46e" exitCode=0 Dec 10 15:45:26 crc kubenswrapper[4755]: I1210 15:45:26.593625 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30e81eb3-8296-4b76-8a90-a76aa64a4656","Type":"ContainerDied","Data":"9f6e1693a0a2b9c922ce1f3d342e28115f0604e388f6fc83bbf8afa112cccd87"} Dec 10 15:45:26 crc kubenswrapper[4755]: I1210 15:45:26.593653 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30e81eb3-8296-4b76-8a90-a76aa64a4656","Type":"ContainerDied","Data":"079c62377d13ed19579a575b7fe47dc7c1edbbe270c1d3bf0bf28003c888725b"} Dec 10 15:45:26 crc kubenswrapper[4755]: I1210 15:45:26.593665 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30e81eb3-8296-4b76-8a90-a76aa64a4656","Type":"ContainerDied","Data":"73e24f9baadda3917ae541f0b88f2e4e47c402f183df236a7a4db4904d1eb46e"} Dec 10 15:45:31 crc kubenswrapper[4755]: I1210 15:45:31.649843 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="30e81eb3-8296-4b76-8a90-a76aa64a4656" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.200:3000/\": dial tcp 10.217.0.200:3000: connect: connection refused" Dec 10 15:45:32 crc kubenswrapper[4755]: I1210 15:45:32.309705 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cloudkitty-api-0" podUID="0051924b-bff8-4934-92b8-f787e29c758e" containerName="cloudkitty-api" probeResult="failure" output="Get \"https://10.217.0.190:8889/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 10 15:45:32 crc kubenswrapper[4755]: I1210 15:45:32.309709 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-api-0" podUID="0051924b-bff8-4934-92b8-f787e29c758e" containerName="cloudkitty-api" probeResult="failure" output="Get \"https://10.217.0.190:8889/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 10 15:45:35 crc kubenswrapper[4755]: I1210 15:45:35.166095 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 10 15:45:35 crc kubenswrapper[4755]: I1210 15:45:35.319207 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30e81eb3-8296-4b76-8a90-a76aa64a4656-run-httpd\") pod \"30e81eb3-8296-4b76-8a90-a76aa64a4656\" (UID: \"30e81eb3-8296-4b76-8a90-a76aa64a4656\") " Dec 10 15:45:35 crc kubenswrapper[4755]: I1210 15:45:35.319619 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30e81eb3-8296-4b76-8a90-a76aa64a4656-config-data\") pod \"30e81eb3-8296-4b76-8a90-a76aa64a4656\" (UID: \"30e81eb3-8296-4b76-8a90-a76aa64a4656\") " Dec 10 15:45:35 crc kubenswrapper[4755]: I1210 15:45:35.319687 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30e81eb3-8296-4b76-8a90-a76aa64a4656-combined-ca-bundle\") pod \"30e81eb3-8296-4b76-8a90-a76aa64a4656\" (UID: \"30e81eb3-8296-4b76-8a90-a76aa64a4656\") " Dec 10 15:45:35 crc kubenswrapper[4755]: I1210 15:45:35.319712 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30e81eb3-8296-4b76-8a90-a76aa64a4656-log-httpd\") pod \"30e81eb3-8296-4b76-8a90-a76aa64a4656\" (UID: \"30e81eb3-8296-4b76-8a90-a76aa64a4656\") " Dec 10 15:45:35 crc kubenswrapper[4755]: I1210 15:45:35.319771 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30e81eb3-8296-4b76-8a90-a76aa64a4656-scripts\") pod \"30e81eb3-8296-4b76-8a90-a76aa64a4656\" (UID: \"30e81eb3-8296-4b76-8a90-a76aa64a4656\") " Dec 10 15:45:35 crc kubenswrapper[4755]: I1210 15:45:35.319860 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30e81eb3-8296-4b76-8a90-a76aa64a4656-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "30e81eb3-8296-4b76-8a90-a76aa64a4656" (UID: "30e81eb3-8296-4b76-8a90-a76aa64a4656"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:45:35 crc kubenswrapper[4755]: I1210 15:45:35.319940 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqlpr\" (UniqueName: \"kubernetes.io/projected/30e81eb3-8296-4b76-8a90-a76aa64a4656-kube-api-access-nqlpr\") pod \"30e81eb3-8296-4b76-8a90-a76aa64a4656\" (UID: \"30e81eb3-8296-4b76-8a90-a76aa64a4656\") " Dec 10 15:45:35 crc kubenswrapper[4755]: I1210 15:45:35.319998 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/30e81eb3-8296-4b76-8a90-a76aa64a4656-sg-core-conf-yaml\") pod \"30e81eb3-8296-4b76-8a90-a76aa64a4656\" (UID: \"30e81eb3-8296-4b76-8a90-a76aa64a4656\") " Dec 10 15:45:35 crc kubenswrapper[4755]: I1210 15:45:35.320416 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30e81eb3-8296-4b76-8a90-a76aa64a4656-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "30e81eb3-8296-4b76-8a90-a76aa64a4656" (UID: "30e81eb3-8296-4b76-8a90-a76aa64a4656"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:45:35 crc kubenswrapper[4755]: I1210 15:45:35.320633 4755 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30e81eb3-8296-4b76-8a90-a76aa64a4656-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 10 15:45:35 crc kubenswrapper[4755]: I1210 15:45:35.320660 4755 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30e81eb3-8296-4b76-8a90-a76aa64a4656-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 10 15:45:35 crc kubenswrapper[4755]: I1210 15:45:35.328077 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30e81eb3-8296-4b76-8a90-a76aa64a4656-kube-api-access-nqlpr" (OuterVolumeSpecName: "kube-api-access-nqlpr") pod "30e81eb3-8296-4b76-8a90-a76aa64a4656" (UID: "30e81eb3-8296-4b76-8a90-a76aa64a4656"). InnerVolumeSpecName "kube-api-access-nqlpr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:45:35 crc kubenswrapper[4755]: I1210 15:45:35.333596 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30e81eb3-8296-4b76-8a90-a76aa64a4656-scripts" (OuterVolumeSpecName: "scripts") pod "30e81eb3-8296-4b76-8a90-a76aa64a4656" (UID: "30e81eb3-8296-4b76-8a90-a76aa64a4656"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:45:35 crc kubenswrapper[4755]: I1210 15:45:35.360078 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30e81eb3-8296-4b76-8a90-a76aa64a4656-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "30e81eb3-8296-4b76-8a90-a76aa64a4656" (UID: "30e81eb3-8296-4b76-8a90-a76aa64a4656"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:45:35 crc kubenswrapper[4755]: I1210 15:45:35.419593 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-api-0" Dec 10 15:45:35 crc kubenswrapper[4755]: I1210 15:45:35.422432 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqlpr\" (UniqueName: \"kubernetes.io/projected/30e81eb3-8296-4b76-8a90-a76aa64a4656-kube-api-access-nqlpr\") on node \"crc\" DevicePath \"\"" Dec 10 15:45:35 crc kubenswrapper[4755]: I1210 15:45:35.422459 4755 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/30e81eb3-8296-4b76-8a90-a76aa64a4656-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 10 15:45:35 crc kubenswrapper[4755]: I1210 15:45:35.422474 4755 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30e81eb3-8296-4b76-8a90-a76aa64a4656-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 15:45:35 crc kubenswrapper[4755]: I1210 15:45:35.716278 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30e81eb3-8296-4b76-8a90-a76aa64a4656","Type":"ContainerDied","Data":"c9953c1fd4ef8444efe8dc470b213685cbdb33ca2efaa28955214f5f2bbcbdcd"} Dec 10 15:45:35 crc kubenswrapper[4755]: I1210 15:45:35.716427 4755 scope.go:117] "RemoveContainer" containerID="7631807abf1a3119ba32bbabd4cc51a3c1a254f3a80cb42d33c734b144557480" Dec 10 15:45:35 crc kubenswrapper[4755]: I1210 15:45:35.716590 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 10 15:45:36 crc kubenswrapper[4755]: I1210 15:45:36.446610 4755 scope.go:117] "RemoveContainer" containerID="9f6e1693a0a2b9c922ce1f3d342e28115f0604e388f6fc83bbf8afa112cccd87" Dec 10 15:45:36 crc kubenswrapper[4755]: I1210 15:45:36.527782 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30e81eb3-8296-4b76-8a90-a76aa64a4656-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "30e81eb3-8296-4b76-8a90-a76aa64a4656" (UID: "30e81eb3-8296-4b76-8a90-a76aa64a4656"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:45:36 crc kubenswrapper[4755]: I1210 15:45:36.552873 4755 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30e81eb3-8296-4b76-8a90-a76aa64a4656-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:45:36 crc kubenswrapper[4755]: I1210 15:45:36.603078 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30e81eb3-8296-4b76-8a90-a76aa64a4656-config-data" (OuterVolumeSpecName: "config-data") pod "30e81eb3-8296-4b76-8a90-a76aa64a4656" (UID: "30e81eb3-8296-4b76-8a90-a76aa64a4656"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:45:36 crc kubenswrapper[4755]: I1210 15:45:36.654463 4755 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30e81eb3-8296-4b76-8a90-a76aa64a4656-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:45:36 crc kubenswrapper[4755]: I1210 15:45:36.699460 4755 scope.go:117] "RemoveContainer" containerID="079c62377d13ed19579a575b7fe47dc7c1edbbe270c1d3bf0bf28003c888725b" Dec 10 15:45:36 crc kubenswrapper[4755]: I1210 15:45:36.726573 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 10 15:45:36 crc kubenswrapper[4755]: I1210 15:45:36.749766 4755 scope.go:117] "RemoveContainer" containerID="73e24f9baadda3917ae541f0b88f2e4e47c402f183df236a7a4db4904d1eb46e" Dec 10 15:45:36 crc kubenswrapper[4755]: I1210 15:45:36.753285 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 10 15:45:36 crc kubenswrapper[4755]: I1210 15:45:36.770554 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 10 15:45:36 crc kubenswrapper[4755]: E1210 15:45:36.771081 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30e81eb3-8296-4b76-8a90-a76aa64a4656" containerName="ceilometer-central-agent" Dec 10 15:45:36 crc kubenswrapper[4755]: I1210 15:45:36.771103 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="30e81eb3-8296-4b76-8a90-a76aa64a4656" containerName="ceilometer-central-agent" Dec 10 15:45:36 crc kubenswrapper[4755]: E1210 15:45:36.771117 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30e81eb3-8296-4b76-8a90-a76aa64a4656" containerName="ceilometer-notification-agent" Dec 10 15:45:36 crc kubenswrapper[4755]: I1210 15:45:36.771126 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="30e81eb3-8296-4b76-8a90-a76aa64a4656" containerName="ceilometer-notification-agent" Dec 10 15:45:36 crc kubenswrapper[4755]: E1210 15:45:36.771137 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30e81eb3-8296-4b76-8a90-a76aa64a4656" containerName="proxy-httpd" Dec 10 15:45:36 crc kubenswrapper[4755]: I1210 15:45:36.771144 4755 
state_mem.go:107] "Deleted CPUSet assignment" podUID="30e81eb3-8296-4b76-8a90-a76aa64a4656" containerName="proxy-httpd" Dec 10 15:45:36 crc kubenswrapper[4755]: E1210 15:45:36.771174 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30e81eb3-8296-4b76-8a90-a76aa64a4656" containerName="sg-core" Dec 10 15:45:36 crc kubenswrapper[4755]: I1210 15:45:36.771179 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="30e81eb3-8296-4b76-8a90-a76aa64a4656" containerName="sg-core" Dec 10 15:45:36 crc kubenswrapper[4755]: I1210 15:45:36.771437 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="30e81eb3-8296-4b76-8a90-a76aa64a4656" containerName="sg-core" Dec 10 15:45:36 crc kubenswrapper[4755]: I1210 15:45:36.771473 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="30e81eb3-8296-4b76-8a90-a76aa64a4656" containerName="ceilometer-central-agent" Dec 10 15:45:36 crc kubenswrapper[4755]: I1210 15:45:36.771572 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="30e81eb3-8296-4b76-8a90-a76aa64a4656" containerName="proxy-httpd" Dec 10 15:45:36 crc kubenswrapper[4755]: I1210 15:45:36.771587 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="30e81eb3-8296-4b76-8a90-a76aa64a4656" containerName="ceilometer-notification-agent" Dec 10 15:45:36 crc kubenswrapper[4755]: I1210 15:45:36.774278 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 10 15:45:36 crc kubenswrapper[4755]: I1210 15:45:36.780843 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 10 15:45:36 crc kubenswrapper[4755]: I1210 15:45:36.781021 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 10 15:45:36 crc kubenswrapper[4755]: I1210 15:45:36.784271 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 10 15:45:36 crc kubenswrapper[4755]: I1210 15:45:36.960298 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0989d538-4ba6-45bb-847f-d9c55fe8ba5f-log-httpd\") pod \"ceilometer-0\" (UID: \"0989d538-4ba6-45bb-847f-d9c55fe8ba5f\") " pod="openstack/ceilometer-0" Dec 10 15:45:36 crc kubenswrapper[4755]: I1210 15:45:36.960574 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0989d538-4ba6-45bb-847f-d9c55fe8ba5f-config-data\") pod \"ceilometer-0\" (UID: \"0989d538-4ba6-45bb-847f-d9c55fe8ba5f\") " pod="openstack/ceilometer-0" Dec 10 15:45:36 crc kubenswrapper[4755]: I1210 15:45:36.960618 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0989d538-4ba6-45bb-847f-d9c55fe8ba5f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0989d538-4ba6-45bb-847f-d9c55fe8ba5f\") " pod="openstack/ceilometer-0" Dec 10 15:45:36 crc kubenswrapper[4755]: I1210 15:45:36.960645 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0989d538-4ba6-45bb-847f-d9c55fe8ba5f-scripts\") pod \"ceilometer-0\" (UID: \"0989d538-4ba6-45bb-847f-d9c55fe8ba5f\") " pod="openstack/ceilometer-0" Dec 10 15:45:36 crc kubenswrapper[4755]: I1210 15:45:36.960689 4755 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0989d538-4ba6-45bb-847f-d9c55fe8ba5f-run-httpd\") pod \"ceilometer-0\" (UID: \"0989d538-4ba6-45bb-847f-d9c55fe8ba5f\") " pod="openstack/ceilometer-0" Dec 10 15:45:36 crc kubenswrapper[4755]: I1210 15:45:36.960715 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0989d538-4ba6-45bb-847f-d9c55fe8ba5f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0989d538-4ba6-45bb-847f-d9c55fe8ba5f\") " pod="openstack/ceilometer-0" Dec 10 15:45:36 crc kubenswrapper[4755]: I1210 15:45:36.960751 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bz7sh\" (UniqueName: \"kubernetes.io/projected/0989d538-4ba6-45bb-847f-d9c55fe8ba5f-kube-api-access-bz7sh\") pod \"ceilometer-0\" (UID: \"0989d538-4ba6-45bb-847f-d9c55fe8ba5f\") " pod="openstack/ceilometer-0" Dec 10 15:45:37 crc kubenswrapper[4755]: I1210 15:45:37.062390 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0989d538-4ba6-45bb-847f-d9c55fe8ba5f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0989d538-4ba6-45bb-847f-d9c55fe8ba5f\") " pod="openstack/ceilometer-0" Dec 10 15:45:37 crc kubenswrapper[4755]: I1210 15:45:37.062499 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bz7sh\" (UniqueName: \"kubernetes.io/projected/0989d538-4ba6-45bb-847f-d9c55fe8ba5f-kube-api-access-bz7sh\") pod \"ceilometer-0\" (UID: \"0989d538-4ba6-45bb-847f-d9c55fe8ba5f\") " pod="openstack/ceilometer-0" Dec 10 15:45:37 crc kubenswrapper[4755]: I1210 15:45:37.062604 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0989d538-4ba6-45bb-847f-d9c55fe8ba5f-log-httpd\") pod \"ceilometer-0\" (UID: \"0989d538-4ba6-45bb-847f-d9c55fe8ba5f\") " pod="openstack/ceilometer-0" Dec 10 15:45:37 crc kubenswrapper[4755]: I1210 15:45:37.062637 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0989d538-4ba6-45bb-847f-d9c55fe8ba5f-config-data\") pod \"ceilometer-0\" (UID: \"0989d538-4ba6-45bb-847f-d9c55fe8ba5f\") " pod="openstack/ceilometer-0" Dec 10 15:45:37 crc kubenswrapper[4755]: I1210 15:45:37.062694 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0989d538-4ba6-45bb-847f-d9c55fe8ba5f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0989d538-4ba6-45bb-847f-d9c55fe8ba5f\") " pod="openstack/ceilometer-0" Dec 10 15:45:37 crc kubenswrapper[4755]: I1210 15:45:37.062731 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0989d538-4ba6-45bb-847f-d9c55fe8ba5f-scripts\") pod \"ceilometer-0\" (UID: \"0989d538-4ba6-45bb-847f-d9c55fe8ba5f\") " pod="openstack/ceilometer-0" Dec 10 15:45:37 crc kubenswrapper[4755]: I1210 15:45:37.062799 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0989d538-4ba6-45bb-847f-d9c55fe8ba5f-run-httpd\") pod \"ceilometer-0\" (UID: \"0989d538-4ba6-45bb-847f-d9c55fe8ba5f\") " pod="openstack/ceilometer-0" Dec 10 15:45:37 crc 
kubenswrapper[4755]: I1210 15:45:37.063542 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0989d538-4ba6-45bb-847f-d9c55fe8ba5f-run-httpd\") pod \"ceilometer-0\" (UID: \"0989d538-4ba6-45bb-847f-d9c55fe8ba5f\") " pod="openstack/ceilometer-0" Dec 10 15:45:37 crc kubenswrapper[4755]: I1210 15:45:37.063791 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0989d538-4ba6-45bb-847f-d9c55fe8ba5f-log-httpd\") pod \"ceilometer-0\" (UID: \"0989d538-4ba6-45bb-847f-d9c55fe8ba5f\") " pod="openstack/ceilometer-0" Dec 10 15:45:37 crc kubenswrapper[4755]: I1210 15:45:37.069549 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0989d538-4ba6-45bb-847f-d9c55fe8ba5f-config-data\") pod \"ceilometer-0\" (UID: \"0989d538-4ba6-45bb-847f-d9c55fe8ba5f\") " pod="openstack/ceilometer-0" Dec 10 15:45:37 crc kubenswrapper[4755]: I1210 15:45:37.121534 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0989d538-4ba6-45bb-847f-d9c55fe8ba5f-scripts\") pod \"ceilometer-0\" (UID: \"0989d538-4ba6-45bb-847f-d9c55fe8ba5f\") " pod="openstack/ceilometer-0" Dec 10 15:45:37 crc kubenswrapper[4755]: I1210 15:45:37.121631 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0989d538-4ba6-45bb-847f-d9c55fe8ba5f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0989d538-4ba6-45bb-847f-d9c55fe8ba5f\") " pod="openstack/ceilometer-0" Dec 10 15:45:37 crc kubenswrapper[4755]: I1210 15:45:37.122772 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0989d538-4ba6-45bb-847f-d9c55fe8ba5f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0989d538-4ba6-45bb-847f-d9c55fe8ba5f\") " pod="openstack/ceilometer-0" Dec 10 15:45:37 crc kubenswrapper[4755]: I1210 15:45:37.128162 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bz7sh\" (UniqueName: \"kubernetes.io/projected/0989d538-4ba6-45bb-847f-d9c55fe8ba5f-kube-api-access-bz7sh\") pod \"ceilometer-0\" (UID: \"0989d538-4ba6-45bb-847f-d9c55fe8ba5f\") " pod="openstack/ceilometer-0" Dec 10 15:45:37 crc kubenswrapper[4755]: I1210 15:45:37.429655 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 10 15:45:37 crc kubenswrapper[4755]: I1210 15:45:37.773162 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30e81eb3-8296-4b76-8a90-a76aa64a4656" path="/var/lib/kubelet/pods/30e81eb3-8296-4b76-8a90-a76aa64a4656/volumes" Dec 10 15:45:37 crc kubenswrapper[4755]: I1210 15:45:37.956281 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 10 15:45:40 crc kubenswrapper[4755]: I1210 15:45:40.054669 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 10 15:45:40 crc kubenswrapper[4755]: I1210 15:45:40.358799 4755 patch_prober.go:28] interesting pod/machine-config-daemon-ggt8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:45:40 crc kubenswrapper[4755]: I1210 15:45:40.359114 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:45:40 crc kubenswrapper[4755]: I1210 15:45:40.788624 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"a9d4b4ae-84b2-4971-aafc-6f5bdad0b69d","Type":"ContainerStarted","Data":"56d6e71bde3dac00c6540a045ee4406c4d7d83909e6915226be1e4d8ac69ce8d"} Dec 10 15:45:40 crc kubenswrapper[4755]: I1210 15:45:40.791067 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0989d538-4ba6-45bb-847f-d9c55fe8ba5f","Type":"ContainerStarted","Data":"ac105205efcd4bb29065a8757941ac679248a848ce5eab8e1fe2d12c2a4284d2"} Dec 10 15:45:40 crc kubenswrapper[4755]: I1210 15:45:40.792608 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-f9wl6" event={"ID":"609b4b0b-1c46-4b66-bfd5-d42a91e325c4","Type":"ContainerStarted","Data":"fb300d4332a4ed727e73ad046dc7f955e46e7275f8df1c4ab6d91bf93c5a1561"} Dec 10 15:45:40 crc kubenswrapper[4755]: I1210 15:45:40.808040 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.7412158680000003 podStartE2EDuration="53.808024624s" podCreationTimestamp="2025-12-10 15:44:47 +0000 UTC" firstStartedPulling="2025-12-10 15:44:48.490049416 +0000 UTC m=+1285.090933048" lastFinishedPulling="2025-12-10 15:45:39.556858172 +0000 UTC m=+1336.157741804" observedRunningTime="2025-12-10 15:45:40.803578903 +0000 UTC m=+1337.404462555" watchObservedRunningTime="2025-12-10 15:45:40.808024624 +0000 UTC m=+1337.408908246" Dec 10 15:45:40 crc kubenswrapper[4755]: I1210 15:45:40.826085 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-f9wl6" podStartSLOduration=2.313471174 podStartE2EDuration="20.826065264s" podCreationTimestamp="2025-12-10 15:45:20 +0000 UTC" firstStartedPulling="2025-12-10 15:45:21.090099077 +0000 UTC m=+1317.690982709" lastFinishedPulling="2025-12-10 15:45:39.602693167 +0000 UTC m=+1336.203576799" observedRunningTime="2025-12-10 15:45:40.82368259 +0000 UTC m=+1337.424566232" watchObservedRunningTime="2025-12-10 15:45:40.826065264 +0000 UTC m=+1337.426948896" Dec 10 15:45:41 crc 
kubenswrapper[4755]: I1210 15:45:41.617379 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 10 15:45:41 crc kubenswrapper[4755]: I1210 15:45:41.619188 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="f8dcc743-2980-4fbb-94bc-b4a8afb79bad" containerName="glance-log" containerID="cri-o://334eb45f075fb7aee2f28fb137282499b0eecefc1171619fb75ae904f14a67d3" gracePeriod=30 Dec 10 15:45:41 crc kubenswrapper[4755]: I1210 15:45:41.619713 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="f8dcc743-2980-4fbb-94bc-b4a8afb79bad" containerName="glance-httpd" containerID="cri-o://588f396a3a268f85f05e46a46e1d5d5d38096cb5d896181631d658d4943bc1d1" gracePeriod=30 Dec 10 15:45:41 crc kubenswrapper[4755]: I1210 15:45:41.805648 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0989d538-4ba6-45bb-847f-d9c55fe8ba5f","Type":"ContainerStarted","Data":"e5806e17688f3be187570afafa92d8b96488e3c12a3190a2c8af430d64c827a7"} Dec 10 15:45:41 crc kubenswrapper[4755]: I1210 15:45:41.807819 4755 generic.go:334] "Generic (PLEG): container finished" podID="f8dcc743-2980-4fbb-94bc-b4a8afb79bad" containerID="334eb45f075fb7aee2f28fb137282499b0eecefc1171619fb75ae904f14a67d3" exitCode=143 Dec 10 15:45:41 crc kubenswrapper[4755]: I1210 15:45:41.807931 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f8dcc743-2980-4fbb-94bc-b4a8afb79bad","Type":"ContainerDied","Data":"334eb45f075fb7aee2f28fb137282499b0eecefc1171619fb75ae904f14a67d3"} Dec 10 15:45:42 crc kubenswrapper[4755]: I1210 15:45:42.822327 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0989d538-4ba6-45bb-847f-d9c55fe8ba5f","Type":"ContainerStarted","Data":"dc5bcbdb29e717dcb93bd963ac9136ac27ba26b65143d174c1e63eb611fa256e"} Dec 10 15:45:43 crc kubenswrapper[4755]: I1210 15:45:43.844503 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0989d538-4ba6-45bb-847f-d9c55fe8ba5f","Type":"ContainerStarted","Data":"d3662bbe8fdcebce34085b1bb62308162b14e549f93a7f6c62941ab4598c056b"} Dec 10 15:45:45 crc kubenswrapper[4755]: I1210 15:45:45.596586 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 10 15:45:45 crc kubenswrapper[4755]: I1210 15:45:45.688211 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8dcc743-2980-4fbb-94bc-b4a8afb79bad-scripts\") pod \"f8dcc743-2980-4fbb-94bc-b4a8afb79bad\" (UID: \"f8dcc743-2980-4fbb-94bc-b4a8afb79bad\") " Dec 10 15:45:45 crc kubenswrapper[4755]: I1210 15:45:45.690514 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8dcc743-2980-4fbb-94bc-b4a8afb79bad-internal-tls-certs\") pod \"f8dcc743-2980-4fbb-94bc-b4a8afb79bad\" (UID: \"f8dcc743-2980-4fbb-94bc-b4a8afb79bad\") " Dec 10 15:45:45 crc kubenswrapper[4755]: I1210 15:45:45.691014 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjqn4\" (UniqueName: \"kubernetes.io/projected/f8dcc743-2980-4fbb-94bc-b4a8afb79bad-kube-api-access-jjqn4\") pod \"f8dcc743-2980-4fbb-94bc-b4a8afb79bad\" (UID: \"f8dcc743-2980-4fbb-94bc-b4a8afb79bad\") " Dec 10 15:45:45 crc kubenswrapper[4755]: I1210 15:45:45.691178 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8dcc743-2980-4fbb-94bc-b4a8afb79bad-config-data\") pod \"f8dcc743-2980-4fbb-94bc-b4a8afb79bad\" (UID: \"f8dcc743-2980-4fbb-94bc-b4a8afb79bad\") " Dec 10 15:45:45 crc kubenswrapper[4755]: I1210 15:45:45.691224 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8dcc743-2980-4fbb-94bc-b4a8afb79bad-combined-ca-bundle\") pod \"f8dcc743-2980-4fbb-94bc-b4a8afb79bad\" (UID: \"f8dcc743-2980-4fbb-94bc-b4a8afb79bad\") " Dec 10 15:45:45 crc kubenswrapper[4755]: I1210 15:45:45.691655 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f8dcc743-2980-4fbb-94bc-b4a8afb79bad-httpd-run\") pod \"f8dcc743-2980-4fbb-94bc-b4a8afb79bad\" (UID: \"f8dcc743-2980-4fbb-94bc-b4a8afb79bad\") " Dec 10 15:45:45 crc kubenswrapper[4755]: I1210 15:45:45.691713 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8dcc743-2980-4fbb-94bc-b4a8afb79bad-logs\") pod \"f8dcc743-2980-4fbb-94bc-b4a8afb79bad\" (UID: \"f8dcc743-2980-4fbb-94bc-b4a8afb79bad\") " Dec 10 15:45:45 crc kubenswrapper[4755]: I1210 15:45:45.692002 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3c1ecacc-2c1b-4e07-9def-36c303767d2a\") pod \"f8dcc743-2980-4fbb-94bc-b4a8afb79bad\" (UID: \"f8dcc743-2980-4fbb-94bc-b4a8afb79bad\") " Dec 10 15:45:45 crc kubenswrapper[4755]: I1210 15:45:45.694185 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8dcc743-2980-4fbb-94bc-b4a8afb79bad-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "f8dcc743-2980-4fbb-94bc-b4a8afb79bad" (UID: "f8dcc743-2980-4fbb-94bc-b4a8afb79bad"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:45:45 crc kubenswrapper[4755]: I1210 15:45:45.695344 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8dcc743-2980-4fbb-94bc-b4a8afb79bad-logs" (OuterVolumeSpecName: "logs") pod "f8dcc743-2980-4fbb-94bc-b4a8afb79bad" (UID: "f8dcc743-2980-4fbb-94bc-b4a8afb79bad"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:45:45 crc kubenswrapper[4755]: I1210 15:45:45.699030 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8dcc743-2980-4fbb-94bc-b4a8afb79bad-scripts" (OuterVolumeSpecName: "scripts") pod "f8dcc743-2980-4fbb-94bc-b4a8afb79bad" (UID: "f8dcc743-2980-4fbb-94bc-b4a8afb79bad"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:45:45 crc kubenswrapper[4755]: I1210 15:45:45.709757 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8dcc743-2980-4fbb-94bc-b4a8afb79bad-kube-api-access-jjqn4" (OuterVolumeSpecName: "kube-api-access-jjqn4") pod "f8dcc743-2980-4fbb-94bc-b4a8afb79bad" (UID: "f8dcc743-2980-4fbb-94bc-b4a8afb79bad"). InnerVolumeSpecName "kube-api-access-jjqn4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:45:45 crc kubenswrapper[4755]: I1210 15:45:45.720332 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3c1ecacc-2c1b-4e07-9def-36c303767d2a" (OuterVolumeSpecName: "glance") pod "f8dcc743-2980-4fbb-94bc-b4a8afb79bad" (UID: "f8dcc743-2980-4fbb-94bc-b4a8afb79bad"). InnerVolumeSpecName "pvc-3c1ecacc-2c1b-4e07-9def-36c303767d2a". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 10 15:45:45 crc kubenswrapper[4755]: I1210 15:45:45.732820 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8dcc743-2980-4fbb-94bc-b4a8afb79bad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f8dcc743-2980-4fbb-94bc-b4a8afb79bad" (UID: "f8dcc743-2980-4fbb-94bc-b4a8afb79bad"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:45:45 crc kubenswrapper[4755]: I1210 15:45:45.756182 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8dcc743-2980-4fbb-94bc-b4a8afb79bad-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f8dcc743-2980-4fbb-94bc-b4a8afb79bad" (UID: "f8dcc743-2980-4fbb-94bc-b4a8afb79bad"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:45:45 crc kubenswrapper[4755]: I1210 15:45:45.798728 4755 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8dcc743-2980-4fbb-94bc-b4a8afb79bad-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 15:45:45 crc kubenswrapper[4755]: I1210 15:45:45.798768 4755 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8dcc743-2980-4fbb-94bc-b4a8afb79bad-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 10 15:45:45 crc kubenswrapper[4755]: I1210 15:45:45.798788 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jjqn4\" (UniqueName: \"kubernetes.io/projected/f8dcc743-2980-4fbb-94bc-b4a8afb79bad-kube-api-access-jjqn4\") on node \"crc\" DevicePath \"\"" Dec 10 15:45:45 crc kubenswrapper[4755]: I1210 15:45:45.798803 4755 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f8dcc743-2980-4fbb-94bc-b4a8afb79bad-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 10 15:45:45 crc kubenswrapper[4755]: I1210 15:45:45.798813 4755 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8dcc743-2980-4fbb-94bc-b4a8afb79bad-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:45:45 crc kubenswrapper[4755]: I1210 15:45:45.798823 4755 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8dcc743-2980-4fbb-94bc-b4a8afb79bad-logs\") on node \"crc\" DevicePath \"\"" Dec 10 15:45:45 crc kubenswrapper[4755]: I1210 15:45:45.798849 4755 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-3c1ecacc-2c1b-4e07-9def-36c303767d2a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3c1ecacc-2c1b-4e07-9def-36c303767d2a\") on node \"crc\" " Dec 10 15:45:45 crc kubenswrapper[4755]: I1210 15:45:45.835521 4755 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Dec 10 15:45:45 crc kubenswrapper[4755]: I1210 15:45:45.835687 4755 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-3c1ecacc-2c1b-4e07-9def-36c303767d2a" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3c1ecacc-2c1b-4e07-9def-36c303767d2a") on node "crc" Dec 10 15:45:45 crc kubenswrapper[4755]: I1210 15:45:45.839800 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8dcc743-2980-4fbb-94bc-b4a8afb79bad-config-data" (OuterVolumeSpecName: "config-data") pod "f8dcc743-2980-4fbb-94bc-b4a8afb79bad" (UID: "f8dcc743-2980-4fbb-94bc-b4a8afb79bad"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:45:45 crc kubenswrapper[4755]: I1210 15:45:45.877045 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0989d538-4ba6-45bb-847f-d9c55fe8ba5f" containerName="ceilometer-central-agent" containerID="cri-o://e5806e17688f3be187570afafa92d8b96488e3c12a3190a2c8af430d64c827a7" gracePeriod=30 Dec 10 15:45:45 crc kubenswrapper[4755]: I1210 15:45:45.877672 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0989d538-4ba6-45bb-847f-d9c55fe8ba5f" containerName="proxy-httpd" containerID="cri-o://cfb83b8e0ea194018b2cfd6745d3704b4bf375330224a3172f05a9c13657c8d1" gracePeriod=30 Dec 10 15:45:45 crc kubenswrapper[4755]: I1210 15:45:45.877735 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0989d538-4ba6-45bb-847f-d9c55fe8ba5f" containerName="sg-core" containerID="cri-o://d3662bbe8fdcebce34085b1bb62308162b14e549f93a7f6c62941ab4598c056b" gracePeriod=30 Dec 10 15:45:45 crc kubenswrapper[4755]: I1210 15:45:45.877770 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0989d538-4ba6-45bb-847f-d9c55fe8ba5f" containerName="ceilometer-notification-agent" containerID="cri-o://dc5bcbdb29e717dcb93bd963ac9136ac27ba26b65143d174c1e63eb611fa256e" gracePeriod=30 Dec 10 15:45:45 crc kubenswrapper[4755]: I1210 15:45:45.877807 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0989d538-4ba6-45bb-847f-d9c55fe8ba5f","Type":"ContainerStarted","Data":"cfb83b8e0ea194018b2cfd6745d3704b4bf375330224a3172f05a9c13657c8d1"} Dec 10 15:45:45 crc kubenswrapper[4755]: I1210 15:45:45.877832 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 10 15:45:45 crc kubenswrapper[4755]: I1210 15:45:45.892422 4755 generic.go:334] "Generic (PLEG): container finished" podID="f8dcc743-2980-4fbb-94bc-b4a8afb79bad" containerID="588f396a3a268f85f05e46a46e1d5d5d38096cb5d896181631d658d4943bc1d1" exitCode=0 Dec 10 15:45:45 crc kubenswrapper[4755]: I1210 15:45:45.892506 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f8dcc743-2980-4fbb-94bc-b4a8afb79bad","Type":"ContainerDied","Data":"588f396a3a268f85f05e46a46e1d5d5d38096cb5d896181631d658d4943bc1d1"} Dec 10 15:45:45 crc kubenswrapper[4755]: I1210 15:45:45.892535 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f8dcc743-2980-4fbb-94bc-b4a8afb79bad","Type":"ContainerDied","Data":"ffdfd9c4fbaeb9121ac40f2906d6f9c3dd2fdc8242778cad0c711608d9beed17"} Dec 10 15:45:45 crc kubenswrapper[4755]: I1210 15:45:45.892551 4755 scope.go:117] "RemoveContainer" containerID="588f396a3a268f85f05e46a46e1d5d5d38096cb5d896181631d658d4943bc1d1" Dec 10 15:45:45 crc kubenswrapper[4755]: I1210 15:45:45.892686 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 10 15:45:45 crc kubenswrapper[4755]: I1210 15:45:45.900206 4755 reconciler_common.go:293] "Volume detached for volume \"pvc-3c1ecacc-2c1b-4e07-9def-36c303767d2a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3c1ecacc-2c1b-4e07-9def-36c303767d2a\") on node \"crc\" DevicePath \"\"" Dec 10 15:45:45 crc kubenswrapper[4755]: I1210 15:45:45.900235 4755 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8dcc743-2980-4fbb-94bc-b4a8afb79bad-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:45:45 crc kubenswrapper[4755]: I1210 15:45:45.959048 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=5.1976786520000005 podStartE2EDuration="9.959028724s" podCreationTimestamp="2025-12-10 15:45:36 +0000 UTC" firstStartedPulling="2025-12-10 15:45:40.050822711 +0000 UTC m=+1336.651706333" lastFinishedPulling="2025-12-10 15:45:44.812172773 +0000 UTC m=+1341.413056405" observedRunningTime="2025-12-10 15:45:45.908534304 +0000 UTC m=+1342.509417936" watchObservedRunningTime="2025-12-10 15:45:45.959028724 +0000 UTC m=+1342.559912366" Dec 10 15:45:45 crc kubenswrapper[4755]: I1210 15:45:45.966689 4755 scope.go:117] "RemoveContainer" containerID="334eb45f075fb7aee2f28fb137282499b0eecefc1171619fb75ae904f14a67d3" Dec 10 15:45:45 crc kubenswrapper[4755]: I1210 15:45:45.977884 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 10 15:45:45 crc kubenswrapper[4755]: I1210 15:45:45.996223 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 10 15:45:46 crc kubenswrapper[4755]: I1210 15:45:46.004376 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 10 15:45:46 crc kubenswrapper[4755]: E1210 15:45:46.004810 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8dcc743-2980-4fbb-94bc-b4a8afb79bad" containerName="glance-log" Dec 10 15:45:46 crc kubenswrapper[4755]: I1210 15:45:46.004823 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8dcc743-2980-4fbb-94bc-b4a8afb79bad" containerName="glance-log" Dec 10 15:45:46 crc kubenswrapper[4755]: E1210 15:45:46.004841 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8dcc743-2980-4fbb-94bc-b4a8afb79bad" containerName="glance-httpd" Dec 10 15:45:46 crc kubenswrapper[4755]: I1210 15:45:46.004847 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8dcc743-2980-4fbb-94bc-b4a8afb79bad" containerName="glance-httpd" Dec 10 15:45:46 crc kubenswrapper[4755]: I1210 15:45:46.005042 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8dcc743-2980-4fbb-94bc-b4a8afb79bad" containerName="glance-log" Dec 10 15:45:46 crc kubenswrapper[4755]: I1210 15:45:46.005063 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8dcc743-2980-4fbb-94bc-b4a8afb79bad" containerName="glance-httpd" Dec 10 15:45:46 crc kubenswrapper[4755]: I1210 15:45:46.006220 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 10 15:45:46 crc kubenswrapper[4755]: I1210 15:45:46.011626 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 10 15:45:46 crc kubenswrapper[4755]: I1210 15:45:46.012155 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 10 15:45:46 crc kubenswrapper[4755]: I1210 15:45:46.015756 4755 scope.go:117] "RemoveContainer" containerID="588f396a3a268f85f05e46a46e1d5d5d38096cb5d896181631d658d4943bc1d1" Dec 10 15:45:46 crc kubenswrapper[4755]: E1210 15:45:46.016186 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"588f396a3a268f85f05e46a46e1d5d5d38096cb5d896181631d658d4943bc1d1\": container with ID starting with 588f396a3a268f85f05e46a46e1d5d5d38096cb5d896181631d658d4943bc1d1 not found: ID does not exist" containerID="588f396a3a268f85f05e46a46e1d5d5d38096cb5d896181631d658d4943bc1d1" Dec 10 15:45:46 crc kubenswrapper[4755]: I1210 15:45:46.016216 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"588f396a3a268f85f05e46a46e1d5d5d38096cb5d896181631d658d4943bc1d1"} err="failed to get container status \"588f396a3a268f85f05e46a46e1d5d5d38096cb5d896181631d658d4943bc1d1\": rpc error: code = NotFound desc = could not find container \"588f396a3a268f85f05e46a46e1d5d5d38096cb5d896181631d658d4943bc1d1\": container with ID starting with 588f396a3a268f85f05e46a46e1d5d5d38096cb5d896181631d658d4943bc1d1 not found: ID does not exist" Dec 10 15:45:46 crc kubenswrapper[4755]: I1210 15:45:46.016236 4755 scope.go:117] "RemoveContainer" containerID="334eb45f075fb7aee2f28fb137282499b0eecefc1171619fb75ae904f14a67d3" Dec 10 15:45:46 crc kubenswrapper[4755]: E1210 15:45:46.016709 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"334eb45f075fb7aee2f28fb137282499b0eecefc1171619fb75ae904f14a67d3\": container with ID starting with 334eb45f075fb7aee2f28fb137282499b0eecefc1171619fb75ae904f14a67d3 not found: ID does not exist" containerID="334eb45f075fb7aee2f28fb137282499b0eecefc1171619fb75ae904f14a67d3" Dec 10 15:45:46 crc kubenswrapper[4755]: I1210 15:45:46.016733 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"334eb45f075fb7aee2f28fb137282499b0eecefc1171619fb75ae904f14a67d3"} err="failed to get container status \"334eb45f075fb7aee2f28fb137282499b0eecefc1171619fb75ae904f14a67d3\": rpc error: code = NotFound desc = could not find container \"334eb45f075fb7aee2f28fb137282499b0eecefc1171619fb75ae904f14a67d3\": container with ID starting with 334eb45f075fb7aee2f28fb137282499b0eecefc1171619fb75ae904f14a67d3 not found: ID does not exist" Dec 10 15:45:46 crc kubenswrapper[4755]: I1210 15:45:46.027035 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 10 15:45:46 crc kubenswrapper[4755]: I1210 15:45:46.206799 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/650da1cb-bb89-41e8-bd6c-3cad85726723-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"650da1cb-bb89-41e8-bd6c-3cad85726723\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:45:46 crc kubenswrapper[4755]: I1210 15:45:46.206868 4755 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/650da1cb-bb89-41e8-bd6c-3cad85726723-scripts\") pod \"glance-default-internal-api-0\" (UID: \"650da1cb-bb89-41e8-bd6c-3cad85726723\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:45:46 crc kubenswrapper[4755]: I1210 15:45:46.206906 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/650da1cb-bb89-41e8-bd6c-3cad85726723-config-data\") pod \"glance-default-internal-api-0\" (UID: \"650da1cb-bb89-41e8-bd6c-3cad85726723\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:45:46 crc kubenswrapper[4755]: I1210 15:45:46.206929 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/650da1cb-bb89-41e8-bd6c-3cad85726723-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"650da1cb-bb89-41e8-bd6c-3cad85726723\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:45:46 crc kubenswrapper[4755]: I1210 15:45:46.206959 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3c1ecacc-2c1b-4e07-9def-36c303767d2a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3c1ecacc-2c1b-4e07-9def-36c303767d2a\") pod \"glance-default-internal-api-0\" (UID: \"650da1cb-bb89-41e8-bd6c-3cad85726723\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:45:46 crc kubenswrapper[4755]: I1210 15:45:46.206983 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/650da1cb-bb89-41e8-bd6c-3cad85726723-logs\") pod \"glance-default-internal-api-0\" (UID: \"650da1cb-bb89-41e8-bd6c-3cad85726723\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:45:46 crc kubenswrapper[4755]: I1210 15:45:46.207021 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c47tw\" (UniqueName: \"kubernetes.io/projected/650da1cb-bb89-41e8-bd6c-3cad85726723-kube-api-access-c47tw\") pod \"glance-default-internal-api-0\" (UID: \"650da1cb-bb89-41e8-bd6c-3cad85726723\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:45:46 crc kubenswrapper[4755]: I1210 15:45:46.207085 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/650da1cb-bb89-41e8-bd6c-3cad85726723-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"650da1cb-bb89-41e8-bd6c-3cad85726723\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:45:46 crc kubenswrapper[4755]: I1210 15:45:46.308756 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/650da1cb-bb89-41e8-bd6c-3cad85726723-scripts\") pod \"glance-default-internal-api-0\" (UID: \"650da1cb-bb89-41e8-bd6c-3cad85726723\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:45:46 crc kubenswrapper[4755]: I1210 15:45:46.308856 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/650da1cb-bb89-41e8-bd6c-3cad85726723-config-data\") pod \"glance-default-internal-api-0\" (UID: \"650da1cb-bb89-41e8-bd6c-3cad85726723\") " 
pod="openstack/glance-default-internal-api-0" Dec 10 15:45:46 crc kubenswrapper[4755]: I1210 15:45:46.308921 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/650da1cb-bb89-41e8-bd6c-3cad85726723-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"650da1cb-bb89-41e8-bd6c-3cad85726723\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:45:46 crc kubenswrapper[4755]: I1210 15:45:46.308984 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3c1ecacc-2c1b-4e07-9def-36c303767d2a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3c1ecacc-2c1b-4e07-9def-36c303767d2a\") pod \"glance-default-internal-api-0\" (UID: \"650da1cb-bb89-41e8-bd6c-3cad85726723\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:45:46 crc kubenswrapper[4755]: I1210 15:45:46.309017 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/650da1cb-bb89-41e8-bd6c-3cad85726723-logs\") pod \"glance-default-internal-api-0\" (UID: \"650da1cb-bb89-41e8-bd6c-3cad85726723\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:45:46 crc kubenswrapper[4755]: I1210 15:45:46.309100 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c47tw\" (UniqueName: \"kubernetes.io/projected/650da1cb-bb89-41e8-bd6c-3cad85726723-kube-api-access-c47tw\") pod \"glance-default-internal-api-0\" (UID: \"650da1cb-bb89-41e8-bd6c-3cad85726723\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:45:46 crc kubenswrapper[4755]: I1210 15:45:46.309158 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/650da1cb-bb89-41e8-bd6c-3cad85726723-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"650da1cb-bb89-41e8-bd6c-3cad85726723\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:45:46 crc kubenswrapper[4755]: I1210 15:45:46.309279 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/650da1cb-bb89-41e8-bd6c-3cad85726723-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"650da1cb-bb89-41e8-bd6c-3cad85726723\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:45:46 crc kubenswrapper[4755]: I1210 15:45:46.311153 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/650da1cb-bb89-41e8-bd6c-3cad85726723-logs\") pod \"glance-default-internal-api-0\" (UID: \"650da1cb-bb89-41e8-bd6c-3cad85726723\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:45:46 crc kubenswrapper[4755]: I1210 15:45:46.311303 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/650da1cb-bb89-41e8-bd6c-3cad85726723-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"650da1cb-bb89-41e8-bd6c-3cad85726723\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:45:46 crc kubenswrapper[4755]: I1210 15:45:46.314061 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/650da1cb-bb89-41e8-bd6c-3cad85726723-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"650da1cb-bb89-41e8-bd6c-3cad85726723\") " pod="openstack/glance-default-internal-api-0" 
Dec 10 15:45:46 crc kubenswrapper[4755]: I1210 15:45:46.314737 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/650da1cb-bb89-41e8-bd6c-3cad85726723-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"650da1cb-bb89-41e8-bd6c-3cad85726723\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:45:46 crc kubenswrapper[4755]: I1210 15:45:46.315906 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/650da1cb-bb89-41e8-bd6c-3cad85726723-config-data\") pod \"glance-default-internal-api-0\" (UID: \"650da1cb-bb89-41e8-bd6c-3cad85726723\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:45:46 crc kubenswrapper[4755]: I1210 15:45:46.316570 4755 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 10 15:45:46 crc kubenswrapper[4755]: I1210 15:45:46.316605 4755 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3c1ecacc-2c1b-4e07-9def-36c303767d2a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3c1ecacc-2c1b-4e07-9def-36c303767d2a\") pod \"glance-default-internal-api-0\" (UID: \"650da1cb-bb89-41e8-bd6c-3cad85726723\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e7209b79faa29a25e04bc03f8c7f38aa826c0ab7d3d63e0c6698575f30077871/globalmount\"" pod="openstack/glance-default-internal-api-0" Dec 10 15:45:46 crc kubenswrapper[4755]: I1210 15:45:46.319231 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/650da1cb-bb89-41e8-bd6c-3cad85726723-scripts\") pod \"glance-default-internal-api-0\" (UID: \"650da1cb-bb89-41e8-bd6c-3cad85726723\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:45:46 crc kubenswrapper[4755]: I1210 15:45:46.331334 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c47tw\" (UniqueName: \"kubernetes.io/projected/650da1cb-bb89-41e8-bd6c-3cad85726723-kube-api-access-c47tw\") pod \"glance-default-internal-api-0\" (UID: \"650da1cb-bb89-41e8-bd6c-3cad85726723\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:45:46 crc kubenswrapper[4755]: I1210 15:45:46.373601 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3c1ecacc-2c1b-4e07-9def-36c303767d2a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3c1ecacc-2c1b-4e07-9def-36c303767d2a\") pod \"glance-default-internal-api-0\" (UID: \"650da1cb-bb89-41e8-bd6c-3cad85726723\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:45:46 crc kubenswrapper[4755]: I1210 15:45:46.386973 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 10 15:45:46 crc kubenswrapper[4755]: I1210 15:45:46.605115 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 10 15:45:46 crc kubenswrapper[4755]: I1210 15:45:46.616092 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="81d20e6f-155e-444c-b54f-1161b3dff224" containerName="glance-httpd" containerID="cri-o://e5368def68397d95bf652e17404d4b470cf1f532242f4593d2add2b49a02405a" gracePeriod=30 Dec 10 15:45:46 crc kubenswrapper[4755]: I1210 15:45:46.617215 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="81d20e6f-155e-444c-b54f-1161b3dff224" containerName="glance-log" containerID="cri-o://ed9dd5c80054f8faca85036b1dc2e174ad52044a09ee2641742bbc78e5aac6c0" gracePeriod=30 Dec 10 15:45:46 crc kubenswrapper[4755]: I1210 15:45:46.907978 4755 generic.go:334] "Generic (PLEG): container finished" podID="81d20e6f-155e-444c-b54f-1161b3dff224" containerID="ed9dd5c80054f8faca85036b1dc2e174ad52044a09ee2641742bbc78e5aac6c0" exitCode=143 Dec 10 15:45:46 crc kubenswrapper[4755]: I1210 15:45:46.908375 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"81d20e6f-155e-444c-b54f-1161b3dff224","Type":"ContainerDied","Data":"ed9dd5c80054f8faca85036b1dc2e174ad52044a09ee2641742bbc78e5aac6c0"} Dec 10 15:45:46 crc kubenswrapper[4755]: I1210 15:45:46.914053 4755 generic.go:334] "Generic (PLEG): container finished" podID="0989d538-4ba6-45bb-847f-d9c55fe8ba5f" containerID="cfb83b8e0ea194018b2cfd6745d3704b4bf375330224a3172f05a9c13657c8d1" exitCode=0 Dec 10 15:45:46 crc kubenswrapper[4755]: I1210 15:45:46.914095 4755 generic.go:334] "Generic (PLEG): container finished" podID="0989d538-4ba6-45bb-847f-d9c55fe8ba5f" containerID="d3662bbe8fdcebce34085b1bb62308162b14e549f93a7f6c62941ab4598c056b" exitCode=2 Dec 10 15:45:46 crc kubenswrapper[4755]: I1210 15:45:46.914104 4755 generic.go:334] "Generic (PLEG): container finished" podID="0989d538-4ba6-45bb-847f-d9c55fe8ba5f" containerID="dc5bcbdb29e717dcb93bd963ac9136ac27ba26b65143d174c1e63eb611fa256e" exitCode=0 Dec 10 15:45:46 crc kubenswrapper[4755]: I1210 15:45:46.914125 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0989d538-4ba6-45bb-847f-d9c55fe8ba5f","Type":"ContainerDied","Data":"cfb83b8e0ea194018b2cfd6745d3704b4bf375330224a3172f05a9c13657c8d1"} Dec 10 15:45:46 crc kubenswrapper[4755]: I1210 15:45:46.914153 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0989d538-4ba6-45bb-847f-d9c55fe8ba5f","Type":"ContainerDied","Data":"d3662bbe8fdcebce34085b1bb62308162b14e549f93a7f6c62941ab4598c056b"} Dec 10 15:45:46 crc kubenswrapper[4755]: I1210 15:45:46.914164 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0989d538-4ba6-45bb-847f-d9c55fe8ba5f","Type":"ContainerDied","Data":"dc5bcbdb29e717dcb93bd963ac9136ac27ba26b65143d174c1e63eb611fa256e"} Dec 10 15:45:47 crc kubenswrapper[4755]: I1210 15:45:47.018088 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 10 15:45:47 crc kubenswrapper[4755]: W1210 15:45:47.020677 4755 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod650da1cb_bb89_41e8_bd6c_3cad85726723.slice/crio-466cf7b0ad48c4aca5fc913ef2f77e8abcb14763d4ff3a953d06849aa55da239 WatchSource:0}: Error finding container 466cf7b0ad48c4aca5fc913ef2f77e8abcb14763d4ff3a953d06849aa55da239: Status 404 returned error can't find the container with id 466cf7b0ad48c4aca5fc913ef2f77e8abcb14763d4ff3a953d06849aa55da239 Dec 10 15:45:47 crc kubenswrapper[4755]: I1210 15:45:47.771273 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8dcc743-2980-4fbb-94bc-b4a8afb79bad" path="/var/lib/kubelet/pods/f8dcc743-2980-4fbb-94bc-b4a8afb79bad/volumes" Dec 10 15:45:47 crc kubenswrapper[4755]: I1210 15:45:47.977706 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"650da1cb-bb89-41e8-bd6c-3cad85726723","Type":"ContainerStarted","Data":"a83c1dcefd3d8344c75b5a95edd1c538d660fdbedcdcf8275f7e99f0d1858ee7"} Dec 10 15:45:47 crc kubenswrapper[4755]: I1210 15:45:47.977753 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"650da1cb-bb89-41e8-bd6c-3cad85726723","Type":"ContainerStarted","Data":"466cf7b0ad48c4aca5fc913ef2f77e8abcb14763d4ff3a953d06849aa55da239"} Dec 10 15:45:48 crc kubenswrapper[4755]: I1210 15:45:48.989364 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"650da1cb-bb89-41e8-bd6c-3cad85726723","Type":"ContainerStarted","Data":"7f2f239d6d1e2cc63c0e4e0b71bacb7d585738462c84970ed66f029a32c4c0d2"} Dec 10 15:45:49 crc kubenswrapper[4755]: I1210 15:45:49.020049 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.020031612 podStartE2EDuration="4.020031612s" podCreationTimestamp="2025-12-10 15:45:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:45:49.012515389 +0000 UTC m=+1345.613399031" watchObservedRunningTime="2025-12-10 15:45:49.020031612 +0000 UTC m=+1345.620915244" Dec 10 15:45:49 crc kubenswrapper[4755]: I1210 15:45:49.803602 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="81d20e6f-155e-444c-b54f-1161b3dff224" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.176:9292/healthcheck\": read tcp 10.217.0.2:38932->10.217.0.176:9292: read: connection reset by peer" Dec 10 15:45:49 crc kubenswrapper[4755]: I1210 15:45:49.803968 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="81d20e6f-155e-444c-b54f-1161b3dff224" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.176:9292/healthcheck\": read tcp 10.217.0.2:38934->10.217.0.176:9292: read: connection reset by peer" Dec 10 15:45:50 crc kubenswrapper[4755]: I1210 15:45:50.006853 4755 generic.go:334] "Generic (PLEG): container finished" podID="81d20e6f-155e-444c-b54f-1161b3dff224" containerID="e5368def68397d95bf652e17404d4b470cf1f532242f4593d2add2b49a02405a" exitCode=0 Dec 10 15:45:50 crc kubenswrapper[4755]: I1210 15:45:50.006977 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"81d20e6f-155e-444c-b54f-1161b3dff224","Type":"ContainerDied","Data":"e5368def68397d95bf652e17404d4b470cf1f532242f4593d2add2b49a02405a"} Dec 10 
15:45:50 crc kubenswrapper[4755]: I1210 15:45:50.383812 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 10 15:45:50 crc kubenswrapper[4755]: I1210 15:45:50.513335 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81d20e6f-155e-444c-b54f-1161b3dff224-combined-ca-bundle\") pod \"81d20e6f-155e-444c-b54f-1161b3dff224\" (UID: \"81d20e6f-155e-444c-b54f-1161b3dff224\") " Dec 10 15:45:50 crc kubenswrapper[4755]: I1210 15:45:50.513397 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81d20e6f-155e-444c-b54f-1161b3dff224-scripts\") pod \"81d20e6f-155e-444c-b54f-1161b3dff224\" (UID: \"81d20e6f-155e-444c-b54f-1161b3dff224\") " Dec 10 15:45:50 crc kubenswrapper[4755]: I1210 15:45:50.513431 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/81d20e6f-155e-444c-b54f-1161b3dff224-httpd-run\") pod \"81d20e6f-155e-444c-b54f-1161b3dff224\" (UID: \"81d20e6f-155e-444c-b54f-1161b3dff224\") " Dec 10 15:45:50 crc kubenswrapper[4755]: I1210 15:45:50.513620 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81d20e6f-155e-444c-b54f-1161b3dff224-config-data\") pod \"81d20e6f-155e-444c-b54f-1161b3dff224\" (UID: \"81d20e6f-155e-444c-b54f-1161b3dff224\") " Dec 10 15:45:50 crc kubenswrapper[4755]: I1210 15:45:50.513672 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6qrt\" (UniqueName: \"kubernetes.io/projected/81d20e6f-155e-444c-b54f-1161b3dff224-kube-api-access-k6qrt\") pod \"81d20e6f-155e-444c-b54f-1161b3dff224\" (UID: \"81d20e6f-155e-444c-b54f-1161b3dff224\") " Dec 10 15:45:50 crc kubenswrapper[4755]: I1210 15:45:50.513718 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/81d20e6f-155e-444c-b54f-1161b3dff224-public-tls-certs\") pod \"81d20e6f-155e-444c-b54f-1161b3dff224\" (UID: \"81d20e6f-155e-444c-b54f-1161b3dff224\") " Dec 10 15:45:50 crc kubenswrapper[4755]: I1210 15:45:50.513860 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fad4194d-6d90-49f1-a017-ae4167f764c9\") pod \"81d20e6f-155e-444c-b54f-1161b3dff224\" (UID: \"81d20e6f-155e-444c-b54f-1161b3dff224\") " Dec 10 15:45:50 crc kubenswrapper[4755]: I1210 15:45:50.513898 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81d20e6f-155e-444c-b54f-1161b3dff224-logs\") pod \"81d20e6f-155e-444c-b54f-1161b3dff224\" (UID: \"81d20e6f-155e-444c-b54f-1161b3dff224\") " Dec 10 15:45:50 crc kubenswrapper[4755]: I1210 15:45:50.513978 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81d20e6f-155e-444c-b54f-1161b3dff224-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "81d20e6f-155e-444c-b54f-1161b3dff224" (UID: "81d20e6f-155e-444c-b54f-1161b3dff224"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:45:50 crc kubenswrapper[4755]: I1210 15:45:50.514346 4755 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/81d20e6f-155e-444c-b54f-1161b3dff224-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 10 15:45:50 crc kubenswrapper[4755]: I1210 15:45:50.514607 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81d20e6f-155e-444c-b54f-1161b3dff224-logs" (OuterVolumeSpecName: "logs") pod "81d20e6f-155e-444c-b54f-1161b3dff224" (UID: "81d20e6f-155e-444c-b54f-1161b3dff224"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:45:50 crc kubenswrapper[4755]: I1210 15:45:50.520164 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81d20e6f-155e-444c-b54f-1161b3dff224-scripts" (OuterVolumeSpecName: "scripts") pod "81d20e6f-155e-444c-b54f-1161b3dff224" (UID: "81d20e6f-155e-444c-b54f-1161b3dff224"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:45:50 crc kubenswrapper[4755]: I1210 15:45:50.529865 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81d20e6f-155e-444c-b54f-1161b3dff224-kube-api-access-k6qrt" (OuterVolumeSpecName: "kube-api-access-k6qrt") pod "81d20e6f-155e-444c-b54f-1161b3dff224" (UID: "81d20e6f-155e-444c-b54f-1161b3dff224"). InnerVolumeSpecName "kube-api-access-k6qrt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:45:50 crc kubenswrapper[4755]: I1210 15:45:50.539370 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fad4194d-6d90-49f1-a017-ae4167f764c9" (OuterVolumeSpecName: "glance") pod "81d20e6f-155e-444c-b54f-1161b3dff224" (UID: "81d20e6f-155e-444c-b54f-1161b3dff224"). InnerVolumeSpecName "pvc-fad4194d-6d90-49f1-a017-ae4167f764c9". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 10 15:45:50 crc kubenswrapper[4755]: I1210 15:45:50.548121 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81d20e6f-155e-444c-b54f-1161b3dff224-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "81d20e6f-155e-444c-b54f-1161b3dff224" (UID: "81d20e6f-155e-444c-b54f-1161b3dff224"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:45:50 crc kubenswrapper[4755]: I1210 15:45:50.587660 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81d20e6f-155e-444c-b54f-1161b3dff224-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "81d20e6f-155e-444c-b54f-1161b3dff224" (UID: "81d20e6f-155e-444c-b54f-1161b3dff224"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:45:50 crc kubenswrapper[4755]: I1210 15:45:50.591505 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81d20e6f-155e-444c-b54f-1161b3dff224-config-data" (OuterVolumeSpecName: "config-data") pod "81d20e6f-155e-444c-b54f-1161b3dff224" (UID: "81d20e6f-155e-444c-b54f-1161b3dff224"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:45:50 crc kubenswrapper[4755]: I1210 15:45:50.615814 4755 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81d20e6f-155e-444c-b54f-1161b3dff224-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:45:50 crc kubenswrapper[4755]: I1210 15:45:50.615847 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6qrt\" (UniqueName: \"kubernetes.io/projected/81d20e6f-155e-444c-b54f-1161b3dff224-kube-api-access-k6qrt\") on node \"crc\" DevicePath \"\"" Dec 10 15:45:50 crc kubenswrapper[4755]: I1210 15:45:50.615857 4755 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/81d20e6f-155e-444c-b54f-1161b3dff224-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 10 15:45:50 crc kubenswrapper[4755]: I1210 15:45:50.615890 4755 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-fad4194d-6d90-49f1-a017-ae4167f764c9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fad4194d-6d90-49f1-a017-ae4167f764c9\") on node \"crc\" " Dec 10 15:45:50 crc kubenswrapper[4755]: I1210 15:45:50.615901 4755 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81d20e6f-155e-444c-b54f-1161b3dff224-logs\") on node \"crc\" DevicePath \"\"" Dec 10 15:45:50 crc kubenswrapper[4755]: I1210 15:45:50.615912 4755 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81d20e6f-155e-444c-b54f-1161b3dff224-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:45:50 crc kubenswrapper[4755]: I1210 15:45:50.615920 4755 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81d20e6f-155e-444c-b54f-1161b3dff224-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 15:45:50 crc kubenswrapper[4755]: I1210 15:45:50.653777 4755 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Dec 10 15:45:50 crc kubenswrapper[4755]: I1210 15:45:50.653979 4755 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-fad4194d-6d90-49f1-a017-ae4167f764c9" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fad4194d-6d90-49f1-a017-ae4167f764c9") on node "crc" Dec 10 15:45:50 crc kubenswrapper[4755]: I1210 15:45:50.717372 4755 reconciler_common.go:293] "Volume detached for volume \"pvc-fad4194d-6d90-49f1-a017-ae4167f764c9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fad4194d-6d90-49f1-a017-ae4167f764c9\") on node \"crc\" DevicePath \"\"" Dec 10 15:45:51 crc kubenswrapper[4755]: I1210 15:45:51.019156 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"81d20e6f-155e-444c-b54f-1161b3dff224","Type":"ContainerDied","Data":"0867f02bde645bdf6f0dcc0ee9f857046f49780a95a9c5af19de9e9ded37c273"} Dec 10 15:45:51 crc kubenswrapper[4755]: I1210 15:45:51.019224 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 10 15:45:51 crc kubenswrapper[4755]: I1210 15:45:51.019237 4755 scope.go:117] "RemoveContainer" containerID="e5368def68397d95bf652e17404d4b470cf1f532242f4593d2add2b49a02405a" Dec 10 15:45:51 crc kubenswrapper[4755]: I1210 15:45:51.058717 4755 scope.go:117] "RemoveContainer" containerID="ed9dd5c80054f8faca85036b1dc2e174ad52044a09ee2641742bbc78e5aac6c0" Dec 10 15:45:51 crc kubenswrapper[4755]: I1210 15:45:51.062767 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 10 15:45:51 crc kubenswrapper[4755]: I1210 15:45:51.072563 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 10 15:45:51 crc kubenswrapper[4755]: I1210 15:45:51.100858 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 10 15:45:51 crc kubenswrapper[4755]: E1210 15:45:51.101331 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81d20e6f-155e-444c-b54f-1161b3dff224" containerName="glance-log" Dec 10 15:45:51 crc kubenswrapper[4755]: I1210 15:45:51.101345 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="81d20e6f-155e-444c-b54f-1161b3dff224" containerName="glance-log" Dec 10 15:45:51 crc kubenswrapper[4755]: E1210 15:45:51.101364 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81d20e6f-155e-444c-b54f-1161b3dff224" containerName="glance-httpd" Dec 10 15:45:51 crc kubenswrapper[4755]: I1210 15:45:51.101371 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="81d20e6f-155e-444c-b54f-1161b3dff224" containerName="glance-httpd" Dec 10 15:45:51 crc kubenswrapper[4755]: I1210 15:45:51.101597 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="81d20e6f-155e-444c-b54f-1161b3dff224" containerName="glance-log" Dec 10 15:45:51 crc kubenswrapper[4755]: I1210 15:45:51.101624 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="81d20e6f-155e-444c-b54f-1161b3dff224" containerName="glance-httpd" Dec 10 15:45:51 crc kubenswrapper[4755]: I1210 15:45:51.102820 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 10 15:45:51 crc kubenswrapper[4755]: I1210 15:45:51.104790 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 10 15:45:51 crc kubenswrapper[4755]: I1210 15:45:51.107704 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 10 15:45:51 crc kubenswrapper[4755]: I1210 15:45:51.121609 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 10 15:45:51 crc kubenswrapper[4755]: I1210 15:45:51.229979 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dd0a8c00-fa43-4605-9c34-4f3e86a6e92a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"dd0a8c00-fa43-4605-9c34-4f3e86a6e92a\") " pod="openstack/glance-default-external-api-0" Dec 10 15:45:51 crc kubenswrapper[4755]: I1210 15:45:51.230375 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd0a8c00-fa43-4605-9c34-4f3e86a6e92a-logs\") pod \"glance-default-external-api-0\" (UID: \"dd0a8c00-fa43-4605-9c34-4f3e86a6e92a\") " pod="openstack/glance-default-external-api-0" Dec 10 15:45:51 crc kubenswrapper[4755]: I1210 15:45:51.230508 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd0a8c00-fa43-4605-9c34-4f3e86a6e92a-scripts\") pod \"glance-default-external-api-0\" (UID: \"dd0a8c00-fa43-4605-9c34-4f3e86a6e92a\") " pod="openstack/glance-default-external-api-0" Dec 10 15:45:51 crc kubenswrapper[4755]: I1210 15:45:51.230650 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd0a8c00-fa43-4605-9c34-4f3e86a6e92a-config-data\") pod \"glance-default-external-api-0\" (UID: \"dd0a8c00-fa43-4605-9c34-4f3e86a6e92a\") " pod="openstack/glance-default-external-api-0" Dec 10 15:45:51 crc kubenswrapper[4755]: I1210 15:45:51.231029 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd0a8c00-fa43-4605-9c34-4f3e86a6e92a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"dd0a8c00-fa43-4605-9c34-4f3e86a6e92a\") " pod="openstack/glance-default-external-api-0" Dec 10 15:45:51 crc kubenswrapper[4755]: I1210 15:45:51.231236 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-fad4194d-6d90-49f1-a017-ae4167f764c9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fad4194d-6d90-49f1-a017-ae4167f764c9\") pod \"glance-default-external-api-0\" (UID: \"dd0a8c00-fa43-4605-9c34-4f3e86a6e92a\") " pod="openstack/glance-default-external-api-0" Dec 10 15:45:51 crc kubenswrapper[4755]: I1210 15:45:51.231336 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnlbt\" (UniqueName: \"kubernetes.io/projected/dd0a8c00-fa43-4605-9c34-4f3e86a6e92a-kube-api-access-xnlbt\") pod \"glance-default-external-api-0\" (UID: \"dd0a8c00-fa43-4605-9c34-4f3e86a6e92a\") " pod="openstack/glance-default-external-api-0" Dec 10 15:45:51 crc kubenswrapper[4755]: I1210 15:45:51.231511 4755 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd0a8c00-fa43-4605-9c34-4f3e86a6e92a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"dd0a8c00-fa43-4605-9c34-4f3e86a6e92a\") " pod="openstack/glance-default-external-api-0" Dec 10 15:45:51 crc kubenswrapper[4755]: I1210 15:45:51.333998 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-fad4194d-6d90-49f1-a017-ae4167f764c9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fad4194d-6d90-49f1-a017-ae4167f764c9\") pod \"glance-default-external-api-0\" (UID: \"dd0a8c00-fa43-4605-9c34-4f3e86a6e92a\") " pod="openstack/glance-default-external-api-0" Dec 10 15:45:51 crc kubenswrapper[4755]: I1210 15:45:51.334047 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnlbt\" (UniqueName: \"kubernetes.io/projected/dd0a8c00-fa43-4605-9c34-4f3e86a6e92a-kube-api-access-xnlbt\") pod \"glance-default-external-api-0\" (UID: \"dd0a8c00-fa43-4605-9c34-4f3e86a6e92a\") " pod="openstack/glance-default-external-api-0" Dec 10 15:45:51 crc kubenswrapper[4755]: I1210 15:45:51.334113 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd0a8c00-fa43-4605-9c34-4f3e86a6e92a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"dd0a8c00-fa43-4605-9c34-4f3e86a6e92a\") " pod="openstack/glance-default-external-api-0" Dec 10 15:45:51 crc kubenswrapper[4755]: I1210 15:45:51.334147 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dd0a8c00-fa43-4605-9c34-4f3e86a6e92a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"dd0a8c00-fa43-4605-9c34-4f3e86a6e92a\") " pod="openstack/glance-default-external-api-0" Dec 10 15:45:51 crc kubenswrapper[4755]: I1210 15:45:51.334166 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd0a8c00-fa43-4605-9c34-4f3e86a6e92a-logs\") pod \"glance-default-external-api-0\" (UID: \"dd0a8c00-fa43-4605-9c34-4f3e86a6e92a\") " pod="openstack/glance-default-external-api-0" Dec 10 15:45:51 crc kubenswrapper[4755]: I1210 15:45:51.334187 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd0a8c00-fa43-4605-9c34-4f3e86a6e92a-scripts\") pod \"glance-default-external-api-0\" (UID: \"dd0a8c00-fa43-4605-9c34-4f3e86a6e92a\") " pod="openstack/glance-default-external-api-0" Dec 10 15:45:51 crc kubenswrapper[4755]: I1210 15:45:51.334213 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd0a8c00-fa43-4605-9c34-4f3e86a6e92a-config-data\") pod \"glance-default-external-api-0\" (UID: \"dd0a8c00-fa43-4605-9c34-4f3e86a6e92a\") " pod="openstack/glance-default-external-api-0" Dec 10 15:45:51 crc kubenswrapper[4755]: I1210 15:45:51.334283 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd0a8c00-fa43-4605-9c34-4f3e86a6e92a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"dd0a8c00-fa43-4605-9c34-4f3e86a6e92a\") " pod="openstack/glance-default-external-api-0" Dec 10 15:45:51 crc kubenswrapper[4755]: I1210 15:45:51.335525 4755 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dd0a8c00-fa43-4605-9c34-4f3e86a6e92a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"dd0a8c00-fa43-4605-9c34-4f3e86a6e92a\") " pod="openstack/glance-default-external-api-0" Dec 10 15:45:51 crc kubenswrapper[4755]: I1210 15:45:51.335676 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd0a8c00-fa43-4605-9c34-4f3e86a6e92a-logs\") pod \"glance-default-external-api-0\" (UID: \"dd0a8c00-fa43-4605-9c34-4f3e86a6e92a\") " pod="openstack/glance-default-external-api-0" Dec 10 15:45:51 crc kubenswrapper[4755]: I1210 15:45:51.340888 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd0a8c00-fa43-4605-9c34-4f3e86a6e92a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"dd0a8c00-fa43-4605-9c34-4f3e86a6e92a\") " pod="openstack/glance-default-external-api-0" Dec 10 15:45:51 crc kubenswrapper[4755]: I1210 15:45:51.344071 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd0a8c00-fa43-4605-9c34-4f3e86a6e92a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"dd0a8c00-fa43-4605-9c34-4f3e86a6e92a\") " pod="openstack/glance-default-external-api-0" Dec 10 15:45:51 crc kubenswrapper[4755]: I1210 15:45:51.351272 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd0a8c00-fa43-4605-9c34-4f3e86a6e92a-scripts\") pod \"glance-default-external-api-0\" (UID: \"dd0a8c00-fa43-4605-9c34-4f3e86a6e92a\") " pod="openstack/glance-default-external-api-0" Dec 10 15:45:51 crc kubenswrapper[4755]: I1210 15:45:51.362847 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd0a8c00-fa43-4605-9c34-4f3e86a6e92a-config-data\") pod \"glance-default-external-api-0\" (UID: \"dd0a8c00-fa43-4605-9c34-4f3e86a6e92a\") " pod="openstack/glance-default-external-api-0" Dec 10 15:45:51 crc kubenswrapper[4755]: I1210 15:45:51.367436 4755 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 10 15:45:51 crc kubenswrapper[4755]: I1210 15:45:51.367994 4755 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-fad4194d-6d90-49f1-a017-ae4167f764c9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fad4194d-6d90-49f1-a017-ae4167f764c9\") pod \"glance-default-external-api-0\" (UID: \"dd0a8c00-fa43-4605-9c34-4f3e86a6e92a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2c29aae4dccb9080bd5d2f8d1cce721d31204eaed02b3364c97d3b8bf6504cd5/globalmount\"" pod="openstack/glance-default-external-api-0" Dec 10 15:45:51 crc kubenswrapper[4755]: I1210 15:45:51.374288 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnlbt\" (UniqueName: \"kubernetes.io/projected/dd0a8c00-fa43-4605-9c34-4f3e86a6e92a-kube-api-access-xnlbt\") pod \"glance-default-external-api-0\" (UID: \"dd0a8c00-fa43-4605-9c34-4f3e86a6e92a\") " pod="openstack/glance-default-external-api-0" Dec 10 15:45:51 crc kubenswrapper[4755]: I1210 15:45:51.432481 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-fad4194d-6d90-49f1-a017-ae4167f764c9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fad4194d-6d90-49f1-a017-ae4167f764c9\") pod \"glance-default-external-api-0\" (UID: \"dd0a8c00-fa43-4605-9c34-4f3e86a6e92a\") " pod="openstack/glance-default-external-api-0" Dec 10 15:45:51 crc kubenswrapper[4755]: I1210 15:45:51.719578 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 10 15:45:51 crc kubenswrapper[4755]: I1210 15:45:51.773157 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81d20e6f-155e-444c-b54f-1161b3dff224" path="/var/lib/kubelet/pods/81d20e6f-155e-444c-b54f-1161b3dff224/volumes" Dec 10 15:45:52 crc kubenswrapper[4755]: I1210 15:45:52.033628 4755 generic.go:334] "Generic (PLEG): container finished" podID="0989d538-4ba6-45bb-847f-d9c55fe8ba5f" containerID="e5806e17688f3be187570afafa92d8b96488e3c12a3190a2c8af430d64c827a7" exitCode=0 Dec 10 15:45:52 crc kubenswrapper[4755]: I1210 15:45:52.033695 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0989d538-4ba6-45bb-847f-d9c55fe8ba5f","Type":"ContainerDied","Data":"e5806e17688f3be187570afafa92d8b96488e3c12a3190a2c8af430d64c827a7"} Dec 10 15:45:52 crc kubenswrapper[4755]: I1210 15:45:52.337356 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 10 15:45:53 crc kubenswrapper[4755]: I1210 15:45:53.047344 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dd0a8c00-fa43-4605-9c34-4f3e86a6e92a","Type":"ContainerStarted","Data":"ae646a253091c23a1298df03fb782a21acd39d0d2f45ccf6f5f947e296a30ec8"} Dec 10 15:45:53 crc kubenswrapper[4755]: I1210 15:45:53.651915 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 10 15:45:53 crc kubenswrapper[4755]: I1210 15:45:53.786583 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0989d538-4ba6-45bb-847f-d9c55fe8ba5f-config-data\") pod \"0989d538-4ba6-45bb-847f-d9c55fe8ba5f\" (UID: \"0989d538-4ba6-45bb-847f-d9c55fe8ba5f\") " Dec 10 15:45:53 crc kubenswrapper[4755]: I1210 15:45:53.787314 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0989d538-4ba6-45bb-847f-d9c55fe8ba5f-combined-ca-bundle\") pod \"0989d538-4ba6-45bb-847f-d9c55fe8ba5f\" (UID: \"0989d538-4ba6-45bb-847f-d9c55fe8ba5f\") " Dec 10 15:45:53 crc kubenswrapper[4755]: I1210 15:45:53.787366 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0989d538-4ba6-45bb-847f-d9c55fe8ba5f-scripts\") pod \"0989d538-4ba6-45bb-847f-d9c55fe8ba5f\" (UID: \"0989d538-4ba6-45bb-847f-d9c55fe8ba5f\") " Dec 10 15:45:53 crc kubenswrapper[4755]: I1210 15:45:53.787437 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bz7sh\" (UniqueName: \"kubernetes.io/projected/0989d538-4ba6-45bb-847f-d9c55fe8ba5f-kube-api-access-bz7sh\") pod \"0989d538-4ba6-45bb-847f-d9c55fe8ba5f\" (UID: \"0989d538-4ba6-45bb-847f-d9c55fe8ba5f\") " Dec 10 15:45:53 crc kubenswrapper[4755]: I1210 15:45:53.787488 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0989d538-4ba6-45bb-847f-d9c55fe8ba5f-sg-core-conf-yaml\") pod \"0989d538-4ba6-45bb-847f-d9c55fe8ba5f\" (UID: \"0989d538-4ba6-45bb-847f-d9c55fe8ba5f\") " Dec 10 15:45:53 crc kubenswrapper[4755]: I1210 15:45:53.787543 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0989d538-4ba6-45bb-847f-d9c55fe8ba5f-log-httpd\") pod \"0989d538-4ba6-45bb-847f-d9c55fe8ba5f\" (UID: \"0989d538-4ba6-45bb-847f-d9c55fe8ba5f\") " Dec 10 15:45:53 crc kubenswrapper[4755]: I1210 15:45:53.787634 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0989d538-4ba6-45bb-847f-d9c55fe8ba5f-run-httpd\") pod \"0989d538-4ba6-45bb-847f-d9c55fe8ba5f\" (UID: \"0989d538-4ba6-45bb-847f-d9c55fe8ba5f\") " Dec 10 15:45:53 crc kubenswrapper[4755]: I1210 15:45:53.788879 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0989d538-4ba6-45bb-847f-d9c55fe8ba5f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0989d538-4ba6-45bb-847f-d9c55fe8ba5f" (UID: "0989d538-4ba6-45bb-847f-d9c55fe8ba5f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:45:53 crc kubenswrapper[4755]: I1210 15:45:53.790574 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0989d538-4ba6-45bb-847f-d9c55fe8ba5f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0989d538-4ba6-45bb-847f-d9c55fe8ba5f" (UID: "0989d538-4ba6-45bb-847f-d9c55fe8ba5f"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:45:53 crc kubenswrapper[4755]: I1210 15:45:53.794450 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0989d538-4ba6-45bb-847f-d9c55fe8ba5f-scripts" (OuterVolumeSpecName: "scripts") pod "0989d538-4ba6-45bb-847f-d9c55fe8ba5f" (UID: "0989d538-4ba6-45bb-847f-d9c55fe8ba5f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:45:53 crc kubenswrapper[4755]: I1210 15:45:53.800190 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0989d538-4ba6-45bb-847f-d9c55fe8ba5f-kube-api-access-bz7sh" (OuterVolumeSpecName: "kube-api-access-bz7sh") pod "0989d538-4ba6-45bb-847f-d9c55fe8ba5f" (UID: "0989d538-4ba6-45bb-847f-d9c55fe8ba5f"). InnerVolumeSpecName "kube-api-access-bz7sh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:45:53 crc kubenswrapper[4755]: I1210 15:45:53.832358 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0989d538-4ba6-45bb-847f-d9c55fe8ba5f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0989d538-4ba6-45bb-847f-d9c55fe8ba5f" (UID: "0989d538-4ba6-45bb-847f-d9c55fe8ba5f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:45:53 crc kubenswrapper[4755]: I1210 15:45:53.889937 4755 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0989d538-4ba6-45bb-847f-d9c55fe8ba5f-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 15:45:53 crc kubenswrapper[4755]: I1210 15:45:53.889978 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bz7sh\" (UniqueName: \"kubernetes.io/projected/0989d538-4ba6-45bb-847f-d9c55fe8ba5f-kube-api-access-bz7sh\") on node \"crc\" DevicePath \"\"" Dec 10 15:45:53 crc kubenswrapper[4755]: I1210 15:45:53.889993 4755 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0989d538-4ba6-45bb-847f-d9c55fe8ba5f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 10 15:45:53 crc kubenswrapper[4755]: I1210 15:45:53.890004 4755 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0989d538-4ba6-45bb-847f-d9c55fe8ba5f-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 10 15:45:53 crc kubenswrapper[4755]: I1210 15:45:53.890016 4755 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0989d538-4ba6-45bb-847f-d9c55fe8ba5f-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 10 15:45:53 crc kubenswrapper[4755]: I1210 15:45:53.891657 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0989d538-4ba6-45bb-847f-d9c55fe8ba5f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0989d538-4ba6-45bb-847f-d9c55fe8ba5f" (UID: "0989d538-4ba6-45bb-847f-d9c55fe8ba5f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:45:53 crc kubenswrapper[4755]: I1210 15:45:53.907446 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0989d538-4ba6-45bb-847f-d9c55fe8ba5f-config-data" (OuterVolumeSpecName: "config-data") pod "0989d538-4ba6-45bb-847f-d9c55fe8ba5f" (UID: "0989d538-4ba6-45bb-847f-d9c55fe8ba5f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:45:53 crc kubenswrapper[4755]: I1210 15:45:53.991462 4755 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0989d538-4ba6-45bb-847f-d9c55fe8ba5f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:45:53 crc kubenswrapper[4755]: I1210 15:45:53.991510 4755 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0989d538-4ba6-45bb-847f-d9c55fe8ba5f-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:45:54 crc kubenswrapper[4755]: I1210 15:45:54.062448 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dd0a8c00-fa43-4605-9c34-4f3e86a6e92a","Type":"ContainerStarted","Data":"9e28a6f0706a7cbfd84dddadd7a57790c1ca557c55a64bd4d5603f04f4a37745"} Dec 10 15:45:54 crc kubenswrapper[4755]: I1210 15:45:54.062522 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dd0a8c00-fa43-4605-9c34-4f3e86a6e92a","Type":"ContainerStarted","Data":"0a7630ea4daab2c09769896fd9ef98fde06fd389ee35452fe76e4fef46eabc19"} Dec 10 15:45:54 crc kubenswrapper[4755]: I1210 15:45:54.065988 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0989d538-4ba6-45bb-847f-d9c55fe8ba5f","Type":"ContainerDied","Data":"ac105205efcd4bb29065a8757941ac679248a848ce5eab8e1fe2d12c2a4284d2"} Dec 10 15:45:54 crc kubenswrapper[4755]: I1210 15:45:54.066050 4755 scope.go:117] "RemoveContainer" containerID="cfb83b8e0ea194018b2cfd6745d3704b4bf375330224a3172f05a9c13657c8d1" Dec 10 15:45:54 crc kubenswrapper[4755]: I1210 15:45:54.066065 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 10 15:45:54 crc kubenswrapper[4755]: I1210 15:45:54.097781 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.097759863 podStartE2EDuration="3.097759863s" podCreationTimestamp="2025-12-10 15:45:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:45:54.088874502 +0000 UTC m=+1350.689758144" watchObservedRunningTime="2025-12-10 15:45:54.097759863 +0000 UTC m=+1350.698643505" Dec 10 15:45:54 crc kubenswrapper[4755]: I1210 15:45:54.101315 4755 scope.go:117] "RemoveContainer" containerID="d3662bbe8fdcebce34085b1bb62308162b14e549f93a7f6c62941ab4598c056b" Dec 10 15:45:54 crc kubenswrapper[4755]: I1210 15:45:54.128075 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 10 15:45:54 crc kubenswrapper[4755]: I1210 15:45:54.134262 4755 scope.go:117] "RemoveContainer" containerID="dc5bcbdb29e717dcb93bd963ac9136ac27ba26b65143d174c1e63eb611fa256e" Dec 10 15:45:54 crc kubenswrapper[4755]: I1210 15:45:54.150640 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 10 15:45:54 crc kubenswrapper[4755]: I1210 15:45:54.169940 4755 scope.go:117] "RemoveContainer" containerID="e5806e17688f3be187570afafa92d8b96488e3c12a3190a2c8af430d64c827a7" Dec 10 15:45:54 crc kubenswrapper[4755]: I1210 15:45:54.178192 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 10 15:45:54 crc kubenswrapper[4755]: E1210 15:45:54.178702 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0989d538-4ba6-45bb-847f-d9c55fe8ba5f" containerName="sg-core" Dec 10 15:45:54 crc kubenswrapper[4755]: I1210 15:45:54.178718 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="0989d538-4ba6-45bb-847f-d9c55fe8ba5f" containerName="sg-core" Dec 10 15:45:54 crc kubenswrapper[4755]: E1210 15:45:54.178758 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0989d538-4ba6-45bb-847f-d9c55fe8ba5f" containerName="ceilometer-notification-agent" Dec 10 15:45:54 crc kubenswrapper[4755]: I1210 15:45:54.178767 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="0989d538-4ba6-45bb-847f-d9c55fe8ba5f" containerName="ceilometer-notification-agent" Dec 10 15:45:54 crc kubenswrapper[4755]: E1210 15:45:54.178784 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0989d538-4ba6-45bb-847f-d9c55fe8ba5f" containerName="ceilometer-central-agent" Dec 10 15:45:54 crc kubenswrapper[4755]: I1210 15:45:54.178791 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="0989d538-4ba6-45bb-847f-d9c55fe8ba5f" containerName="ceilometer-central-agent" Dec 10 15:45:54 crc kubenswrapper[4755]: E1210 15:45:54.178805 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0989d538-4ba6-45bb-847f-d9c55fe8ba5f" containerName="proxy-httpd" Dec 10 15:45:54 crc kubenswrapper[4755]: I1210 15:45:54.178812 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="0989d538-4ba6-45bb-847f-d9c55fe8ba5f" containerName="proxy-httpd" Dec 10 15:45:54 crc kubenswrapper[4755]: I1210 15:45:54.179017 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="0989d538-4ba6-45bb-847f-d9c55fe8ba5f" containerName="ceilometer-central-agent" Dec 10 15:45:54 crc kubenswrapper[4755]: I1210 15:45:54.179043 4755 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="0989d538-4ba6-45bb-847f-d9c55fe8ba5f" containerName="proxy-httpd" Dec 10 15:45:54 crc kubenswrapper[4755]: I1210 15:45:54.179062 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="0989d538-4ba6-45bb-847f-d9c55fe8ba5f" containerName="ceilometer-notification-agent" Dec 10 15:45:54 crc kubenswrapper[4755]: I1210 15:45:54.179075 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="0989d538-4ba6-45bb-847f-d9c55fe8ba5f" containerName="sg-core" Dec 10 15:45:54 crc kubenswrapper[4755]: I1210 15:45:54.181358 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 10 15:45:54 crc kubenswrapper[4755]: I1210 15:45:54.187167 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 10 15:45:54 crc kubenswrapper[4755]: I1210 15:45:54.202842 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 10 15:45:54 crc kubenswrapper[4755]: I1210 15:45:54.208346 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 10 15:45:54 crc kubenswrapper[4755]: I1210 15:45:54.306627 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1396ebd-2a72-4d57-a5af-a7e1dec09762-config-data\") pod \"ceilometer-0\" (UID: \"b1396ebd-2a72-4d57-a5af-a7e1dec09762\") " pod="openstack/ceilometer-0" Dec 10 15:45:54 crc kubenswrapper[4755]: I1210 15:45:54.306960 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1396ebd-2a72-4d57-a5af-a7e1dec09762-scripts\") pod \"ceilometer-0\" (UID: \"b1396ebd-2a72-4d57-a5af-a7e1dec09762\") " pod="openstack/ceilometer-0" Dec 10 15:45:54 crc kubenswrapper[4755]: I1210 15:45:54.307065 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6xc7\" (UniqueName: \"kubernetes.io/projected/b1396ebd-2a72-4d57-a5af-a7e1dec09762-kube-api-access-g6xc7\") pod \"ceilometer-0\" (UID: \"b1396ebd-2a72-4d57-a5af-a7e1dec09762\") " pod="openstack/ceilometer-0" Dec 10 15:45:54 crc kubenswrapper[4755]: I1210 15:45:54.307168 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b1396ebd-2a72-4d57-a5af-a7e1dec09762-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b1396ebd-2a72-4d57-a5af-a7e1dec09762\") " pod="openstack/ceilometer-0" Dec 10 15:45:54 crc kubenswrapper[4755]: I1210 15:45:54.307206 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1396ebd-2a72-4d57-a5af-a7e1dec09762-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b1396ebd-2a72-4d57-a5af-a7e1dec09762\") " pod="openstack/ceilometer-0" Dec 10 15:45:54 crc kubenswrapper[4755]: I1210 15:45:54.307274 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b1396ebd-2a72-4d57-a5af-a7e1dec09762-run-httpd\") pod \"ceilometer-0\" (UID: \"b1396ebd-2a72-4d57-a5af-a7e1dec09762\") " pod="openstack/ceilometer-0" Dec 10 15:45:54 crc kubenswrapper[4755]: I1210 15:45:54.307367 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/b1396ebd-2a72-4d57-a5af-a7e1dec09762-log-httpd\") pod \"ceilometer-0\" (UID: \"b1396ebd-2a72-4d57-a5af-a7e1dec09762\") " pod="openstack/ceilometer-0" Dec 10 15:45:54 crc kubenswrapper[4755]: I1210 15:45:54.409669 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1396ebd-2a72-4d57-a5af-a7e1dec09762-scripts\") pod \"ceilometer-0\" (UID: \"b1396ebd-2a72-4d57-a5af-a7e1dec09762\") " pod="openstack/ceilometer-0" Dec 10 15:45:54 crc kubenswrapper[4755]: I1210 15:45:54.409832 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6xc7\" (UniqueName: \"kubernetes.io/projected/b1396ebd-2a72-4d57-a5af-a7e1dec09762-kube-api-access-g6xc7\") pod \"ceilometer-0\" (UID: \"b1396ebd-2a72-4d57-a5af-a7e1dec09762\") " pod="openstack/ceilometer-0" Dec 10 15:45:54 crc kubenswrapper[4755]: I1210 15:45:54.410011 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b1396ebd-2a72-4d57-a5af-a7e1dec09762-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b1396ebd-2a72-4d57-a5af-a7e1dec09762\") " pod="openstack/ceilometer-0" Dec 10 15:45:54 crc kubenswrapper[4755]: I1210 15:45:54.410147 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1396ebd-2a72-4d57-a5af-a7e1dec09762-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b1396ebd-2a72-4d57-a5af-a7e1dec09762\") " pod="openstack/ceilometer-0" Dec 10 15:45:54 crc kubenswrapper[4755]: I1210 15:45:54.410814 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b1396ebd-2a72-4d57-a5af-a7e1dec09762-run-httpd\") pod \"ceilometer-0\" (UID: \"b1396ebd-2a72-4d57-a5af-a7e1dec09762\") " pod="openstack/ceilometer-0" Dec 10 15:45:54 crc kubenswrapper[4755]: I1210 15:45:54.410871 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b1396ebd-2a72-4d57-a5af-a7e1dec09762-run-httpd\") pod \"ceilometer-0\" (UID: \"b1396ebd-2a72-4d57-a5af-a7e1dec09762\") " pod="openstack/ceilometer-0" Dec 10 15:45:54 crc kubenswrapper[4755]: I1210 15:45:54.410917 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b1396ebd-2a72-4d57-a5af-a7e1dec09762-log-httpd\") pod \"ceilometer-0\" (UID: \"b1396ebd-2a72-4d57-a5af-a7e1dec09762\") " pod="openstack/ceilometer-0" Dec 10 15:45:54 crc kubenswrapper[4755]: I1210 15:45:54.411207 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1396ebd-2a72-4d57-a5af-a7e1dec09762-config-data\") pod \"ceilometer-0\" (UID: \"b1396ebd-2a72-4d57-a5af-a7e1dec09762\") " pod="openstack/ceilometer-0" Dec 10 15:45:54 crc kubenswrapper[4755]: I1210 15:45:54.411353 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b1396ebd-2a72-4d57-a5af-a7e1dec09762-log-httpd\") pod \"ceilometer-0\" (UID: \"b1396ebd-2a72-4d57-a5af-a7e1dec09762\") " pod="openstack/ceilometer-0" Dec 10 15:45:54 crc kubenswrapper[4755]: I1210 15:45:54.415032 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/b1396ebd-2a72-4d57-a5af-a7e1dec09762-scripts\") pod \"ceilometer-0\" (UID: \"b1396ebd-2a72-4d57-a5af-a7e1dec09762\") " pod="openstack/ceilometer-0" Dec 10 15:45:54 crc kubenswrapper[4755]: I1210 15:45:54.415787 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b1396ebd-2a72-4d57-a5af-a7e1dec09762-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b1396ebd-2a72-4d57-a5af-a7e1dec09762\") " pod="openstack/ceilometer-0" Dec 10 15:45:54 crc kubenswrapper[4755]: I1210 15:45:54.416172 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1396ebd-2a72-4d57-a5af-a7e1dec09762-config-data\") pod \"ceilometer-0\" (UID: \"b1396ebd-2a72-4d57-a5af-a7e1dec09762\") " pod="openstack/ceilometer-0" Dec 10 15:45:54 crc kubenswrapper[4755]: I1210 15:45:54.417625 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1396ebd-2a72-4d57-a5af-a7e1dec09762-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b1396ebd-2a72-4d57-a5af-a7e1dec09762\") " pod="openstack/ceilometer-0" Dec 10 15:45:54 crc kubenswrapper[4755]: I1210 15:45:54.434340 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6xc7\" (UniqueName: \"kubernetes.io/projected/b1396ebd-2a72-4d57-a5af-a7e1dec09762-kube-api-access-g6xc7\") pod \"ceilometer-0\" (UID: \"b1396ebd-2a72-4d57-a5af-a7e1dec09762\") " pod="openstack/ceilometer-0" Dec 10 15:45:54 crc kubenswrapper[4755]: I1210 15:45:54.532086 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 10 15:45:55 crc kubenswrapper[4755]: I1210 15:45:55.035846 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 10 15:45:55 crc kubenswrapper[4755]: I1210 15:45:55.077256 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b1396ebd-2a72-4d57-a5af-a7e1dec09762","Type":"ContainerStarted","Data":"fd88bb013aaaa0259fcd5ee36aae50dd3bc01e49b793b7ba48c587ee1dd7db71"} Dec 10 15:45:55 crc kubenswrapper[4755]: I1210 15:45:55.768829 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0989d538-4ba6-45bb-847f-d9c55fe8ba5f" path="/var/lib/kubelet/pods/0989d538-4ba6-45bb-847f-d9c55fe8ba5f/volumes" Dec 10 15:45:56 crc kubenswrapper[4755]: I1210 15:45:56.088605 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b1396ebd-2a72-4d57-a5af-a7e1dec09762","Type":"ContainerStarted","Data":"e79c0517fb6b35d3e7a4a4b215d0b2896d4362b97e9a7468bb8e2fb0ddd2e95d"} Dec 10 15:45:56 crc kubenswrapper[4755]: I1210 15:45:56.387396 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 10 15:45:56 crc kubenswrapper[4755]: I1210 15:45:56.387481 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 10 15:45:56 crc kubenswrapper[4755]: I1210 15:45:56.424186 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 10 15:45:56 crc kubenswrapper[4755]: I1210 15:45:56.454611 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 10 15:45:57 crc kubenswrapper[4755]: I1210 15:45:57.102091 4755 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b1396ebd-2a72-4d57-a5af-a7e1dec09762","Type":"ContainerStarted","Data":"ecffe73cf2e6f498cfeb843b545a2ede0c036ea53d76e69ae1d5c65c61ba4ecb"} Dec 10 15:45:57 crc kubenswrapper[4755]: I1210 15:45:57.104197 4755 generic.go:334] "Generic (PLEG): container finished" podID="609b4b0b-1c46-4b66-bfd5-d42a91e325c4" containerID="fb300d4332a4ed727e73ad046dc7f955e46e7275f8df1c4ab6d91bf93c5a1561" exitCode=0 Dec 10 15:45:57 crc kubenswrapper[4755]: I1210 15:45:57.104257 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-f9wl6" event={"ID":"609b4b0b-1c46-4b66-bfd5-d42a91e325c4","Type":"ContainerDied","Data":"fb300d4332a4ed727e73ad046dc7f955e46e7275f8df1c4ab6d91bf93c5a1561"} Dec 10 15:45:57 crc kubenswrapper[4755]: I1210 15:45:57.104885 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 10 15:45:57 crc kubenswrapper[4755]: I1210 15:45:57.105159 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 10 15:45:58 crc kubenswrapper[4755]: I1210 15:45:58.117639 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b1396ebd-2a72-4d57-a5af-a7e1dec09762","Type":"ContainerStarted","Data":"0f4edfeee8205d523aea274300b1c71644096d7e9412cd2c97a5cc72be936cf3"} Dec 10 15:45:58 crc kubenswrapper[4755]: I1210 15:45:58.621748 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-f9wl6" Dec 10 15:45:58 crc kubenswrapper[4755]: I1210 15:45:58.701737 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lj5mj\" (UniqueName: \"kubernetes.io/projected/609b4b0b-1c46-4b66-bfd5-d42a91e325c4-kube-api-access-lj5mj\") pod \"609b4b0b-1c46-4b66-bfd5-d42a91e325c4\" (UID: \"609b4b0b-1c46-4b66-bfd5-d42a91e325c4\") " Dec 10 15:45:58 crc kubenswrapper[4755]: I1210 15:45:58.701875 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/609b4b0b-1c46-4b66-bfd5-d42a91e325c4-scripts\") pod \"609b4b0b-1c46-4b66-bfd5-d42a91e325c4\" (UID: \"609b4b0b-1c46-4b66-bfd5-d42a91e325c4\") " Dec 10 15:45:58 crc kubenswrapper[4755]: I1210 15:45:58.702034 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/609b4b0b-1c46-4b66-bfd5-d42a91e325c4-config-data\") pod \"609b4b0b-1c46-4b66-bfd5-d42a91e325c4\" (UID: \"609b4b0b-1c46-4b66-bfd5-d42a91e325c4\") " Dec 10 15:45:58 crc kubenswrapper[4755]: I1210 15:45:58.702090 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/609b4b0b-1c46-4b66-bfd5-d42a91e325c4-combined-ca-bundle\") pod \"609b4b0b-1c46-4b66-bfd5-d42a91e325c4\" (UID: \"609b4b0b-1c46-4b66-bfd5-d42a91e325c4\") " Dec 10 15:45:58 crc kubenswrapper[4755]: I1210 15:45:58.707302 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/609b4b0b-1c46-4b66-bfd5-d42a91e325c4-kube-api-access-lj5mj" (OuterVolumeSpecName: "kube-api-access-lj5mj") pod "609b4b0b-1c46-4b66-bfd5-d42a91e325c4" (UID: "609b4b0b-1c46-4b66-bfd5-d42a91e325c4"). InnerVolumeSpecName "kube-api-access-lj5mj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:45:58 crc kubenswrapper[4755]: I1210 15:45:58.710069 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/609b4b0b-1c46-4b66-bfd5-d42a91e325c4-scripts" (OuterVolumeSpecName: "scripts") pod "609b4b0b-1c46-4b66-bfd5-d42a91e325c4" (UID: "609b4b0b-1c46-4b66-bfd5-d42a91e325c4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:45:58 crc kubenswrapper[4755]: I1210 15:45:58.751593 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/609b4b0b-1c46-4b66-bfd5-d42a91e325c4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "609b4b0b-1c46-4b66-bfd5-d42a91e325c4" (UID: "609b4b0b-1c46-4b66-bfd5-d42a91e325c4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:45:58 crc kubenswrapper[4755]: I1210 15:45:58.755777 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/609b4b0b-1c46-4b66-bfd5-d42a91e325c4-config-data" (OuterVolumeSpecName: "config-data") pod "609b4b0b-1c46-4b66-bfd5-d42a91e325c4" (UID: "609b4b0b-1c46-4b66-bfd5-d42a91e325c4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:45:58 crc kubenswrapper[4755]: I1210 15:45:58.813774 4755 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/609b4b0b-1c46-4b66-bfd5-d42a91e325c4-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:45:58 crc kubenswrapper[4755]: I1210 15:45:58.814121 4755 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/609b4b0b-1c46-4b66-bfd5-d42a91e325c4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:45:58 crc kubenswrapper[4755]: I1210 15:45:58.814137 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lj5mj\" (UniqueName: \"kubernetes.io/projected/609b4b0b-1c46-4b66-bfd5-d42a91e325c4-kube-api-access-lj5mj\") on node \"crc\" DevicePath \"\"" Dec 10 15:45:58 crc kubenswrapper[4755]: I1210 15:45:58.814149 4755 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/609b4b0b-1c46-4b66-bfd5-d42a91e325c4-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 15:45:59 crc kubenswrapper[4755]: I1210 15:45:59.128249 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b1396ebd-2a72-4d57-a5af-a7e1dec09762","Type":"ContainerStarted","Data":"3083ba2a8aff93d8b36061e24ca72c454ab1c160f38a1e844db96e69d8ec60be"} Dec 10 15:45:59 crc kubenswrapper[4755]: I1210 15:45:59.128435 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 10 15:45:59 crc kubenswrapper[4755]: I1210 15:45:59.130362 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-f9wl6" event={"ID":"609b4b0b-1c46-4b66-bfd5-d42a91e325c4","Type":"ContainerDied","Data":"71a246bbb4a6f4adbce9c5fda108267b45b27a7adab880165da8d56c59258bdc"} Dec 10 15:45:59 crc kubenswrapper[4755]: I1210 15:45:59.130401 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71a246bbb4a6f4adbce9c5fda108267b45b27a7adab880165da8d56c59258bdc" Dec 10 15:45:59 crc kubenswrapper[4755]: I1210 15:45:59.130403 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-f9wl6" Dec 10 15:45:59 crc kubenswrapper[4755]: I1210 15:45:59.168714 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.749140549 podStartE2EDuration="5.1686946s" podCreationTimestamp="2025-12-10 15:45:54 +0000 UTC" firstStartedPulling="2025-12-10 15:45:55.03631617 +0000 UTC m=+1351.637199802" lastFinishedPulling="2025-12-10 15:45:58.455870221 +0000 UTC m=+1355.056753853" observedRunningTime="2025-12-10 15:45:59.165843102 +0000 UTC m=+1355.766726734" watchObservedRunningTime="2025-12-10 15:45:59.1686946 +0000 UTC m=+1355.769578232" Dec 10 15:45:59 crc kubenswrapper[4755]: I1210 15:45:59.246849 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 10 15:45:59 crc kubenswrapper[4755]: E1210 15:45:59.247335 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="609b4b0b-1c46-4b66-bfd5-d42a91e325c4" containerName="nova-cell0-conductor-db-sync" Dec 10 15:45:59 crc kubenswrapper[4755]: I1210 15:45:59.247354 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="609b4b0b-1c46-4b66-bfd5-d42a91e325c4" containerName="nova-cell0-conductor-db-sync" Dec 10 15:45:59 crc kubenswrapper[4755]: I1210 15:45:59.247581 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="609b4b0b-1c46-4b66-bfd5-d42a91e325c4" containerName="nova-cell0-conductor-db-sync" Dec 10 15:45:59 crc kubenswrapper[4755]: I1210 15:45:59.248502 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 10 15:45:59 crc kubenswrapper[4755]: I1210 15:45:59.250969 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-xpgvv" Dec 10 15:45:59 crc kubenswrapper[4755]: I1210 15:45:59.251206 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 10 15:45:59 crc kubenswrapper[4755]: I1210 15:45:59.256909 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 10 15:45:59 crc kubenswrapper[4755]: I1210 15:45:59.325074 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgkc6\" (UniqueName: \"kubernetes.io/projected/91d4a135-5dc0-49f2-8dcc-5cb2de6d05d4-kube-api-access-zgkc6\") pod \"nova-cell0-conductor-0\" (UID: \"91d4a135-5dc0-49f2-8dcc-5cb2de6d05d4\") " pod="openstack/nova-cell0-conductor-0" Dec 10 15:45:59 crc kubenswrapper[4755]: I1210 15:45:59.325143 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91d4a135-5dc0-49f2-8dcc-5cb2de6d05d4-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"91d4a135-5dc0-49f2-8dcc-5cb2de6d05d4\") " pod="openstack/nova-cell0-conductor-0" Dec 10 15:45:59 crc kubenswrapper[4755]: I1210 15:45:59.325198 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91d4a135-5dc0-49f2-8dcc-5cb2de6d05d4-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"91d4a135-5dc0-49f2-8dcc-5cb2de6d05d4\") " pod="openstack/nova-cell0-conductor-0" Dec 10 15:45:59 crc kubenswrapper[4755]: I1210 15:45:59.428230 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgkc6\" (UniqueName: 
\"kubernetes.io/projected/91d4a135-5dc0-49f2-8dcc-5cb2de6d05d4-kube-api-access-zgkc6\") pod \"nova-cell0-conductor-0\" (UID: \"91d4a135-5dc0-49f2-8dcc-5cb2de6d05d4\") " pod="openstack/nova-cell0-conductor-0" Dec 10 15:45:59 crc kubenswrapper[4755]: I1210 15:45:59.428750 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91d4a135-5dc0-49f2-8dcc-5cb2de6d05d4-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"91d4a135-5dc0-49f2-8dcc-5cb2de6d05d4\") " pod="openstack/nova-cell0-conductor-0" Dec 10 15:45:59 crc kubenswrapper[4755]: I1210 15:45:59.429390 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91d4a135-5dc0-49f2-8dcc-5cb2de6d05d4-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"91d4a135-5dc0-49f2-8dcc-5cb2de6d05d4\") " pod="openstack/nova-cell0-conductor-0" Dec 10 15:45:59 crc kubenswrapper[4755]: I1210 15:45:59.433526 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91d4a135-5dc0-49f2-8dcc-5cb2de6d05d4-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"91d4a135-5dc0-49f2-8dcc-5cb2de6d05d4\") " pod="openstack/nova-cell0-conductor-0" Dec 10 15:45:59 crc kubenswrapper[4755]: I1210 15:45:59.434082 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91d4a135-5dc0-49f2-8dcc-5cb2de6d05d4-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"91d4a135-5dc0-49f2-8dcc-5cb2de6d05d4\") " pod="openstack/nova-cell0-conductor-0" Dec 10 15:45:59 crc kubenswrapper[4755]: I1210 15:45:59.445603 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgkc6\" (UniqueName: \"kubernetes.io/projected/91d4a135-5dc0-49f2-8dcc-5cb2de6d05d4-kube-api-access-zgkc6\") pod \"nova-cell0-conductor-0\" (UID: \"91d4a135-5dc0-49f2-8dcc-5cb2de6d05d4\") " pod="openstack/nova-cell0-conductor-0" Dec 10 15:45:59 crc kubenswrapper[4755]: I1210 15:45:59.568344 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 10 15:45:59 crc kubenswrapper[4755]: I1210 15:45:59.668608 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 10 15:45:59 crc kubenswrapper[4755]: I1210 15:45:59.668788 4755 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 10 15:45:59 crc kubenswrapper[4755]: I1210 15:45:59.688934 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 10 15:46:00 crc kubenswrapper[4755]: I1210 15:46:00.119444 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 10 15:46:00 crc kubenswrapper[4755]: W1210 15:46:00.121035 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod91d4a135_5dc0_49f2_8dcc_5cb2de6d05d4.slice/crio-11dc7c49bc8d46f0398eac242a10b7d6a61230099d0d4943c9ce920a957dc387 WatchSource:0}: Error finding container 11dc7c49bc8d46f0398eac242a10b7d6a61230099d0d4943c9ce920a957dc387: Status 404 returned error can't find the container with id 11dc7c49bc8d46f0398eac242a10b7d6a61230099d0d4943c9ce920a957dc387 Dec 10 15:46:00 crc kubenswrapper[4755]: I1210 15:46:00.147579 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"91d4a135-5dc0-49f2-8dcc-5cb2de6d05d4","Type":"ContainerStarted","Data":"11dc7c49bc8d46f0398eac242a10b7d6a61230099d0d4943c9ce920a957dc387"} Dec 10 15:46:01 crc kubenswrapper[4755]: I1210 15:46:01.161345 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"91d4a135-5dc0-49f2-8dcc-5cb2de6d05d4","Type":"ContainerStarted","Data":"4c97f6b1e16714b73af52c73f11a5915e7eae7a2f5a99df29b81e90b58edecba"} Dec 10 15:46:01 crc kubenswrapper[4755]: I1210 15:46:01.161730 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Dec 10 15:46:01 crc kubenswrapper[4755]: I1210 15:46:01.720428 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 10 15:46:01 crc kubenswrapper[4755]: I1210 15:46:01.720599 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 10 15:46:01 crc kubenswrapper[4755]: I1210 15:46:01.755110 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 10 15:46:01 crc kubenswrapper[4755]: I1210 15:46:01.772090 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 10 15:46:01 crc kubenswrapper[4755]: I1210 15:46:01.778295 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.778275695 podStartE2EDuration="2.778275695s" podCreationTimestamp="2025-12-10 15:45:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:46:01.19383673 +0000 UTC m=+1357.794720372" watchObservedRunningTime="2025-12-10 15:46:01.778275695 +0000 UTC m=+1358.379159327" Dec 10 15:46:02 crc kubenswrapper[4755]: I1210 15:46:02.171937 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 10 15:46:02 
crc kubenswrapper[4755]: I1210 15:46:02.171980 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 10 15:46:04 crc kubenswrapper[4755]: I1210 15:46:04.353616 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 10 15:46:04 crc kubenswrapper[4755]: I1210 15:46:04.354018 4755 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 10 15:46:04 crc kubenswrapper[4755]: I1210 15:46:04.397170 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 10 15:46:05 crc kubenswrapper[4755]: I1210 15:46:05.151103 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 10 15:46:05 crc kubenswrapper[4755]: I1210 15:46:05.151616 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="91d4a135-5dc0-49f2-8dcc-5cb2de6d05d4" containerName="nova-cell0-conductor-conductor" containerID="cri-o://4c97f6b1e16714b73af52c73f11a5915e7eae7a2f5a99df29b81e90b58edecba" gracePeriod=30 Dec 10 15:46:05 crc kubenswrapper[4755]: E1210 15:46:05.157914 4755 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4c97f6b1e16714b73af52c73f11a5915e7eae7a2f5a99df29b81e90b58edecba" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Dec 10 15:46:05 crc kubenswrapper[4755]: E1210 15:46:05.162818 4755 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4c97f6b1e16714b73af52c73f11a5915e7eae7a2f5a99df29b81e90b58edecba" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Dec 10 15:46:05 crc kubenswrapper[4755]: E1210 15:46:05.165986 4755 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4c97f6b1e16714b73af52c73f11a5915e7eae7a2f5a99df29b81e90b58edecba" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Dec 10 15:46:05 crc kubenswrapper[4755]: E1210 15:46:05.166074 4755 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="91d4a135-5dc0-49f2-8dcc-5cb2de6d05d4" containerName="nova-cell0-conductor-conductor" Dec 10 15:46:06 crc kubenswrapper[4755]: I1210 15:46:06.743004 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 10 15:46:06 crc kubenswrapper[4755]: I1210 15:46:06.879387 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91d4a135-5dc0-49f2-8dcc-5cb2de6d05d4-config-data\") pod \"91d4a135-5dc0-49f2-8dcc-5cb2de6d05d4\" (UID: \"91d4a135-5dc0-49f2-8dcc-5cb2de6d05d4\") " Dec 10 15:46:06 crc kubenswrapper[4755]: I1210 15:46:06.879598 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgkc6\" (UniqueName: \"kubernetes.io/projected/91d4a135-5dc0-49f2-8dcc-5cb2de6d05d4-kube-api-access-zgkc6\") pod \"91d4a135-5dc0-49f2-8dcc-5cb2de6d05d4\" (UID: \"91d4a135-5dc0-49f2-8dcc-5cb2de6d05d4\") " Dec 10 15:46:06 crc kubenswrapper[4755]: I1210 15:46:06.879649 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91d4a135-5dc0-49f2-8dcc-5cb2de6d05d4-combined-ca-bundle\") pod \"91d4a135-5dc0-49f2-8dcc-5cb2de6d05d4\" (UID: \"91d4a135-5dc0-49f2-8dcc-5cb2de6d05d4\") " Dec 10 15:46:06 crc kubenswrapper[4755]: I1210 15:46:06.889712 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91d4a135-5dc0-49f2-8dcc-5cb2de6d05d4-kube-api-access-zgkc6" (OuterVolumeSpecName: "kube-api-access-zgkc6") pod "91d4a135-5dc0-49f2-8dcc-5cb2de6d05d4" (UID: "91d4a135-5dc0-49f2-8dcc-5cb2de6d05d4"). InnerVolumeSpecName "kube-api-access-zgkc6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:46:06 crc kubenswrapper[4755]: I1210 15:46:06.913081 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91d4a135-5dc0-49f2-8dcc-5cb2de6d05d4-config-data" (OuterVolumeSpecName: "config-data") pod "91d4a135-5dc0-49f2-8dcc-5cb2de6d05d4" (UID: "91d4a135-5dc0-49f2-8dcc-5cb2de6d05d4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:46:06 crc kubenswrapper[4755]: I1210 15:46:06.929774 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91d4a135-5dc0-49f2-8dcc-5cb2de6d05d4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "91d4a135-5dc0-49f2-8dcc-5cb2de6d05d4" (UID: "91d4a135-5dc0-49f2-8dcc-5cb2de6d05d4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:46:06 crc kubenswrapper[4755]: I1210 15:46:06.981818 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgkc6\" (UniqueName: \"kubernetes.io/projected/91d4a135-5dc0-49f2-8dcc-5cb2de6d05d4-kube-api-access-zgkc6\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:06 crc kubenswrapper[4755]: I1210 15:46:06.981861 4755 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91d4a135-5dc0-49f2-8dcc-5cb2de6d05d4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:06 crc kubenswrapper[4755]: I1210 15:46:06.981871 4755 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91d4a135-5dc0-49f2-8dcc-5cb2de6d05d4-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:07 crc kubenswrapper[4755]: I1210 15:46:07.226820 4755 generic.go:334] "Generic (PLEG): container finished" podID="91d4a135-5dc0-49f2-8dcc-5cb2de6d05d4" containerID="4c97f6b1e16714b73af52c73f11a5915e7eae7a2f5a99df29b81e90b58edecba" exitCode=0 Dec 10 15:46:07 crc kubenswrapper[4755]: I1210 15:46:07.226859 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"91d4a135-5dc0-49f2-8dcc-5cb2de6d05d4","Type":"ContainerDied","Data":"4c97f6b1e16714b73af52c73f11a5915e7eae7a2f5a99df29b81e90b58edecba"} Dec 10 15:46:07 crc kubenswrapper[4755]: I1210 15:46:07.226884 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"91d4a135-5dc0-49f2-8dcc-5cb2de6d05d4","Type":"ContainerDied","Data":"11dc7c49bc8d46f0398eac242a10b7d6a61230099d0d4943c9ce920a957dc387"} Dec 10 15:46:07 crc kubenswrapper[4755]: I1210 15:46:07.226901 4755 scope.go:117] "RemoveContainer" containerID="4c97f6b1e16714b73af52c73f11a5915e7eae7a2f5a99df29b81e90b58edecba" Dec 10 15:46:07 crc kubenswrapper[4755]: I1210 15:46:07.226896 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 10 15:46:07 crc kubenswrapper[4755]: I1210 15:46:07.260307 4755 scope.go:117] "RemoveContainer" containerID="4c97f6b1e16714b73af52c73f11a5915e7eae7a2f5a99df29b81e90b58edecba" Dec 10 15:46:07 crc kubenswrapper[4755]: E1210 15:46:07.262378 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c97f6b1e16714b73af52c73f11a5915e7eae7a2f5a99df29b81e90b58edecba\": container with ID starting with 4c97f6b1e16714b73af52c73f11a5915e7eae7a2f5a99df29b81e90b58edecba not found: ID does not exist" containerID="4c97f6b1e16714b73af52c73f11a5915e7eae7a2f5a99df29b81e90b58edecba" Dec 10 15:46:07 crc kubenswrapper[4755]: I1210 15:46:07.262569 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c97f6b1e16714b73af52c73f11a5915e7eae7a2f5a99df29b81e90b58edecba"} err="failed to get container status \"4c97f6b1e16714b73af52c73f11a5915e7eae7a2f5a99df29b81e90b58edecba\": rpc error: code = NotFound desc = could not find container \"4c97f6b1e16714b73af52c73f11a5915e7eae7a2f5a99df29b81e90b58edecba\": container with ID starting with 4c97f6b1e16714b73af52c73f11a5915e7eae7a2f5a99df29b81e90b58edecba not found: ID does not exist" Dec 10 15:46:07 crc kubenswrapper[4755]: I1210 15:46:07.264740 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 10 15:46:07 crc kubenswrapper[4755]: I1210 15:46:07.287327 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 10 15:46:07 crc kubenswrapper[4755]: I1210 15:46:07.296035 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 10 15:46:07 crc kubenswrapper[4755]: E1210 15:46:07.296592 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91d4a135-5dc0-49f2-8dcc-5cb2de6d05d4" containerName="nova-cell0-conductor-conductor" Dec 10 15:46:07 crc kubenswrapper[4755]: I1210 15:46:07.296615 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="91d4a135-5dc0-49f2-8dcc-5cb2de6d05d4" containerName="nova-cell0-conductor-conductor" Dec 10 15:46:07 crc kubenswrapper[4755]: I1210 15:46:07.296879 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="91d4a135-5dc0-49f2-8dcc-5cb2de6d05d4" containerName="nova-cell0-conductor-conductor" Dec 10 15:46:07 crc kubenswrapper[4755]: I1210 15:46:07.297874 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 10 15:46:07 crc kubenswrapper[4755]: I1210 15:46:07.300981 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-xpgvv" Dec 10 15:46:07 crc kubenswrapper[4755]: I1210 15:46:07.301231 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 10 15:46:07 crc kubenswrapper[4755]: I1210 15:46:07.314576 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 10 15:46:07 crc kubenswrapper[4755]: I1210 15:46:07.390510 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1be7309b-6fe2-437d-adc5-c5e7f1f351e9-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"1be7309b-6fe2-437d-adc5-c5e7f1f351e9\") " pod="openstack/nova-cell0-conductor-0" Dec 10 15:46:07 crc kubenswrapper[4755]: I1210 15:46:07.390576 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkv8n\" (UniqueName: \"kubernetes.io/projected/1be7309b-6fe2-437d-adc5-c5e7f1f351e9-kube-api-access-fkv8n\") pod \"nova-cell0-conductor-0\" (UID: \"1be7309b-6fe2-437d-adc5-c5e7f1f351e9\") " pod="openstack/nova-cell0-conductor-0" Dec 10 15:46:07 crc kubenswrapper[4755]: I1210 15:46:07.390743 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1be7309b-6fe2-437d-adc5-c5e7f1f351e9-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"1be7309b-6fe2-437d-adc5-c5e7f1f351e9\") " pod="openstack/nova-cell0-conductor-0" Dec 10 15:46:07 crc kubenswrapper[4755]: I1210 15:46:07.491970 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1be7309b-6fe2-437d-adc5-c5e7f1f351e9-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"1be7309b-6fe2-437d-adc5-c5e7f1f351e9\") " pod="openstack/nova-cell0-conductor-0" Dec 10 15:46:07 crc kubenswrapper[4755]: I1210 15:46:07.492030 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkv8n\" (UniqueName: \"kubernetes.io/projected/1be7309b-6fe2-437d-adc5-c5e7f1f351e9-kube-api-access-fkv8n\") pod \"nova-cell0-conductor-0\" (UID: \"1be7309b-6fe2-437d-adc5-c5e7f1f351e9\") " pod="openstack/nova-cell0-conductor-0" Dec 10 15:46:07 crc kubenswrapper[4755]: I1210 15:46:07.492161 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1be7309b-6fe2-437d-adc5-c5e7f1f351e9-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"1be7309b-6fe2-437d-adc5-c5e7f1f351e9\") " pod="openstack/nova-cell0-conductor-0" Dec 10 15:46:07 crc kubenswrapper[4755]: I1210 15:46:07.495902 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1be7309b-6fe2-437d-adc5-c5e7f1f351e9-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"1be7309b-6fe2-437d-adc5-c5e7f1f351e9\") " pod="openstack/nova-cell0-conductor-0" Dec 10 15:46:07 crc kubenswrapper[4755]: I1210 15:46:07.501590 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1be7309b-6fe2-437d-adc5-c5e7f1f351e9-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" 
(UID: \"1be7309b-6fe2-437d-adc5-c5e7f1f351e9\") " pod="openstack/nova-cell0-conductor-0" Dec 10 15:46:07 crc kubenswrapper[4755]: I1210 15:46:07.517683 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkv8n\" (UniqueName: \"kubernetes.io/projected/1be7309b-6fe2-437d-adc5-c5e7f1f351e9-kube-api-access-fkv8n\") pod \"nova-cell0-conductor-0\" (UID: \"1be7309b-6fe2-437d-adc5-c5e7f1f351e9\") " pod="openstack/nova-cell0-conductor-0" Dec 10 15:46:07 crc kubenswrapper[4755]: I1210 15:46:07.626329 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 10 15:46:07 crc kubenswrapper[4755]: I1210 15:46:07.796886 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91d4a135-5dc0-49f2-8dcc-5cb2de6d05d4" path="/var/lib/kubelet/pods/91d4a135-5dc0-49f2-8dcc-5cb2de6d05d4/volumes" Dec 10 15:46:07 crc kubenswrapper[4755]: I1210 15:46:07.804773 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 10 15:46:07 crc kubenswrapper[4755]: I1210 15:46:07.805290 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b1396ebd-2a72-4d57-a5af-a7e1dec09762" containerName="ceilometer-central-agent" containerID="cri-o://e79c0517fb6b35d3e7a4a4b215d0b2896d4362b97e9a7468bb8e2fb0ddd2e95d" gracePeriod=30 Dec 10 15:46:07 crc kubenswrapper[4755]: I1210 15:46:07.805781 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b1396ebd-2a72-4d57-a5af-a7e1dec09762" containerName="proxy-httpd" containerID="cri-o://3083ba2a8aff93d8b36061e24ca72c454ab1c160f38a1e844db96e69d8ec60be" gracePeriod=30 Dec 10 15:46:07 crc kubenswrapper[4755]: I1210 15:46:07.805965 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b1396ebd-2a72-4d57-a5af-a7e1dec09762" containerName="ceilometer-notification-agent" containerID="cri-o://ecffe73cf2e6f498cfeb843b545a2ede0c036ea53d76e69ae1d5c65c61ba4ecb" gracePeriod=30 Dec 10 15:46:07 crc kubenswrapper[4755]: I1210 15:46:07.806104 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b1396ebd-2a72-4d57-a5af-a7e1dec09762" containerName="sg-core" containerID="cri-o://0f4edfeee8205d523aea274300b1c71644096d7e9412cd2c97a5cc72be936cf3" gracePeriod=30 Dec 10 15:46:08 crc kubenswrapper[4755]: I1210 15:46:08.177714 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 10 15:46:08 crc kubenswrapper[4755]: W1210 15:46:08.177930 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1be7309b_6fe2_437d_adc5_c5e7f1f351e9.slice/crio-7ebf264bcb55d358d59776b0257b474fc49a434579aa587b39c7af6a0da61a5e WatchSource:0}: Error finding container 7ebf264bcb55d358d59776b0257b474fc49a434579aa587b39c7af6a0da61a5e: Status 404 returned error can't find the container with id 7ebf264bcb55d358d59776b0257b474fc49a434579aa587b39c7af6a0da61a5e Dec 10 15:46:08 crc kubenswrapper[4755]: I1210 15:46:08.251328 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"1be7309b-6fe2-437d-adc5-c5e7f1f351e9","Type":"ContainerStarted","Data":"7ebf264bcb55d358d59776b0257b474fc49a434579aa587b39c7af6a0da61a5e"} Dec 10 15:46:08 crc kubenswrapper[4755]: I1210 15:46:08.255355 4755 generic.go:334] "Generic 
(PLEG): container finished" podID="b1396ebd-2a72-4d57-a5af-a7e1dec09762" containerID="3083ba2a8aff93d8b36061e24ca72c454ab1c160f38a1e844db96e69d8ec60be" exitCode=0 Dec 10 15:46:08 crc kubenswrapper[4755]: I1210 15:46:08.255400 4755 generic.go:334] "Generic (PLEG): container finished" podID="b1396ebd-2a72-4d57-a5af-a7e1dec09762" containerID="0f4edfeee8205d523aea274300b1c71644096d7e9412cd2c97a5cc72be936cf3" exitCode=2 Dec 10 15:46:08 crc kubenswrapper[4755]: I1210 15:46:08.255487 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b1396ebd-2a72-4d57-a5af-a7e1dec09762","Type":"ContainerDied","Data":"3083ba2a8aff93d8b36061e24ca72c454ab1c160f38a1e844db96e69d8ec60be"} Dec 10 15:46:08 crc kubenswrapper[4755]: I1210 15:46:08.255524 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b1396ebd-2a72-4d57-a5af-a7e1dec09762","Type":"ContainerDied","Data":"0f4edfeee8205d523aea274300b1c71644096d7e9412cd2c97a5cc72be936cf3"} Dec 10 15:46:08 crc kubenswrapper[4755]: I1210 15:46:08.960034 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.036551 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1396ebd-2a72-4d57-a5af-a7e1dec09762-combined-ca-bundle\") pod \"b1396ebd-2a72-4d57-a5af-a7e1dec09762\" (UID: \"b1396ebd-2a72-4d57-a5af-a7e1dec09762\") " Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.036593 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1396ebd-2a72-4d57-a5af-a7e1dec09762-scripts\") pod \"b1396ebd-2a72-4d57-a5af-a7e1dec09762\" (UID: \"b1396ebd-2a72-4d57-a5af-a7e1dec09762\") " Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.036632 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6xc7\" (UniqueName: \"kubernetes.io/projected/b1396ebd-2a72-4d57-a5af-a7e1dec09762-kube-api-access-g6xc7\") pod \"b1396ebd-2a72-4d57-a5af-a7e1dec09762\" (UID: \"b1396ebd-2a72-4d57-a5af-a7e1dec09762\") " Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.036730 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b1396ebd-2a72-4d57-a5af-a7e1dec09762-log-httpd\") pod \"b1396ebd-2a72-4d57-a5af-a7e1dec09762\" (UID: \"b1396ebd-2a72-4d57-a5af-a7e1dec09762\") " Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.036852 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b1396ebd-2a72-4d57-a5af-a7e1dec09762-sg-core-conf-yaml\") pod \"b1396ebd-2a72-4d57-a5af-a7e1dec09762\" (UID: \"b1396ebd-2a72-4d57-a5af-a7e1dec09762\") " Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.036878 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1396ebd-2a72-4d57-a5af-a7e1dec09762-config-data\") pod \"b1396ebd-2a72-4d57-a5af-a7e1dec09762\" (UID: \"b1396ebd-2a72-4d57-a5af-a7e1dec09762\") " Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.036932 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b1396ebd-2a72-4d57-a5af-a7e1dec09762-run-httpd\") pod 
\"b1396ebd-2a72-4d57-a5af-a7e1dec09762\" (UID: \"b1396ebd-2a72-4d57-a5af-a7e1dec09762\") " Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.037275 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1396ebd-2a72-4d57-a5af-a7e1dec09762-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b1396ebd-2a72-4d57-a5af-a7e1dec09762" (UID: "b1396ebd-2a72-4d57-a5af-a7e1dec09762"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.037394 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1396ebd-2a72-4d57-a5af-a7e1dec09762-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b1396ebd-2a72-4d57-a5af-a7e1dec09762" (UID: "b1396ebd-2a72-4d57-a5af-a7e1dec09762"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.037529 4755 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b1396ebd-2a72-4d57-a5af-a7e1dec09762-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.037542 4755 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b1396ebd-2a72-4d57-a5af-a7e1dec09762-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.043397 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1396ebd-2a72-4d57-a5af-a7e1dec09762-scripts" (OuterVolumeSpecName: "scripts") pod "b1396ebd-2a72-4d57-a5af-a7e1dec09762" (UID: "b1396ebd-2a72-4d57-a5af-a7e1dec09762"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.047946 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1396ebd-2a72-4d57-a5af-a7e1dec09762-kube-api-access-g6xc7" (OuterVolumeSpecName: "kube-api-access-g6xc7") pod "b1396ebd-2a72-4d57-a5af-a7e1dec09762" (UID: "b1396ebd-2a72-4d57-a5af-a7e1dec09762"). InnerVolumeSpecName "kube-api-access-g6xc7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.076838 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1396ebd-2a72-4d57-a5af-a7e1dec09762-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b1396ebd-2a72-4d57-a5af-a7e1dec09762" (UID: "b1396ebd-2a72-4d57-a5af-a7e1dec09762"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.130907 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1396ebd-2a72-4d57-a5af-a7e1dec09762-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b1396ebd-2a72-4d57-a5af-a7e1dec09762" (UID: "b1396ebd-2a72-4d57-a5af-a7e1dec09762"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.139911 4755 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b1396ebd-2a72-4d57-a5af-a7e1dec09762-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.139950 4755 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1396ebd-2a72-4d57-a5af-a7e1dec09762-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.139965 4755 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1396ebd-2a72-4d57-a5af-a7e1dec09762-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.139977 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6xc7\" (UniqueName: \"kubernetes.io/projected/b1396ebd-2a72-4d57-a5af-a7e1dec09762-kube-api-access-g6xc7\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.156303 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1396ebd-2a72-4d57-a5af-a7e1dec09762-config-data" (OuterVolumeSpecName: "config-data") pod "b1396ebd-2a72-4d57-a5af-a7e1dec09762" (UID: "b1396ebd-2a72-4d57-a5af-a7e1dec09762"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.241662 4755 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1396ebd-2a72-4d57-a5af-a7e1dec09762-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.277870 4755 generic.go:334] "Generic (PLEG): container finished" podID="b1396ebd-2a72-4d57-a5af-a7e1dec09762" containerID="ecffe73cf2e6f498cfeb843b545a2ede0c036ea53d76e69ae1d5c65c61ba4ecb" exitCode=0 Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.277915 4755 generic.go:334] "Generic (PLEG): container finished" podID="b1396ebd-2a72-4d57-a5af-a7e1dec09762" containerID="e79c0517fb6b35d3e7a4a4b215d0b2896d4362b97e9a7468bb8e2fb0ddd2e95d" exitCode=0 Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.277942 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.278044 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b1396ebd-2a72-4d57-a5af-a7e1dec09762","Type":"ContainerDied","Data":"ecffe73cf2e6f498cfeb843b545a2ede0c036ea53d76e69ae1d5c65c61ba4ecb"} Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.278082 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b1396ebd-2a72-4d57-a5af-a7e1dec09762","Type":"ContainerDied","Data":"e79c0517fb6b35d3e7a4a4b215d0b2896d4362b97e9a7468bb8e2fb0ddd2e95d"} Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.278119 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b1396ebd-2a72-4d57-a5af-a7e1dec09762","Type":"ContainerDied","Data":"fd88bb013aaaa0259fcd5ee36aae50dd3bc01e49b793b7ba48c587ee1dd7db71"} Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.278141 4755 scope.go:117] "RemoveContainer" containerID="3083ba2a8aff93d8b36061e24ca72c454ab1c160f38a1e844db96e69d8ec60be" Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.283882 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"1be7309b-6fe2-437d-adc5-c5e7f1f351e9","Type":"ContainerStarted","Data":"9f02cb48aa66dd2689e0d145d3ce2bb1b47c7c6b84c899464784b82f51f3a8b2"} Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.285079 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.318848 4755 scope.go:117] "RemoveContainer" containerID="0f4edfeee8205d523aea274300b1c71644096d7e9412cd2c97a5cc72be936cf3" Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.325221 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.325196477 podStartE2EDuration="2.325196477s" podCreationTimestamp="2025-12-10 15:46:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:46:09.302291504 +0000 UTC m=+1365.903175136" watchObservedRunningTime="2025-12-10 15:46:09.325196477 +0000 UTC m=+1365.926080109" Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.340922 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.352924 4755 scope.go:117] "RemoveContainer" containerID="ecffe73cf2e6f498cfeb843b545a2ede0c036ea53d76e69ae1d5c65c61ba4ecb" Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.363504 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.388057 4755 scope.go:117] "RemoveContainer" containerID="e79c0517fb6b35d3e7a4a4b215d0b2896d4362b97e9a7468bb8e2fb0ddd2e95d" Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.394452 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 10 15:46:09 crc kubenswrapper[4755]: E1210 15:46:09.396897 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1396ebd-2a72-4d57-a5af-a7e1dec09762" containerName="proxy-httpd" Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.396941 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1396ebd-2a72-4d57-a5af-a7e1dec09762" containerName="proxy-httpd" Dec 10 15:46:09 crc 
kubenswrapper[4755]: E1210 15:46:09.396963 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1396ebd-2a72-4d57-a5af-a7e1dec09762" containerName="ceilometer-central-agent" Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.396971 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1396ebd-2a72-4d57-a5af-a7e1dec09762" containerName="ceilometer-central-agent" Dec 10 15:46:09 crc kubenswrapper[4755]: E1210 15:46:09.396999 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1396ebd-2a72-4d57-a5af-a7e1dec09762" containerName="ceilometer-notification-agent" Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.397007 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1396ebd-2a72-4d57-a5af-a7e1dec09762" containerName="ceilometer-notification-agent" Dec 10 15:46:09 crc kubenswrapper[4755]: E1210 15:46:09.397026 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1396ebd-2a72-4d57-a5af-a7e1dec09762" containerName="sg-core" Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.397034 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1396ebd-2a72-4d57-a5af-a7e1dec09762" containerName="sg-core" Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.397282 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1396ebd-2a72-4d57-a5af-a7e1dec09762" containerName="proxy-httpd" Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.397302 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1396ebd-2a72-4d57-a5af-a7e1dec09762" containerName="sg-core" Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.397320 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1396ebd-2a72-4d57-a5af-a7e1dec09762" containerName="ceilometer-notification-agent" Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.397343 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1396ebd-2a72-4d57-a5af-a7e1dec09762" containerName="ceilometer-central-agent" Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.399558 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.402076 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.402354 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.414617 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.423633 4755 scope.go:117] "RemoveContainer" containerID="3083ba2a8aff93d8b36061e24ca72c454ab1c160f38a1e844db96e69d8ec60be" Dec 10 15:46:09 crc kubenswrapper[4755]: E1210 15:46:09.424093 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3083ba2a8aff93d8b36061e24ca72c454ab1c160f38a1e844db96e69d8ec60be\": container with ID starting with 3083ba2a8aff93d8b36061e24ca72c454ab1c160f38a1e844db96e69d8ec60be not found: ID does not exist" containerID="3083ba2a8aff93d8b36061e24ca72c454ab1c160f38a1e844db96e69d8ec60be" Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.424130 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3083ba2a8aff93d8b36061e24ca72c454ab1c160f38a1e844db96e69d8ec60be"} err="failed to get container status \"3083ba2a8aff93d8b36061e24ca72c454ab1c160f38a1e844db96e69d8ec60be\": rpc error: code = NotFound desc = could not find container \"3083ba2a8aff93d8b36061e24ca72c454ab1c160f38a1e844db96e69d8ec60be\": container with ID starting with 3083ba2a8aff93d8b36061e24ca72c454ab1c160f38a1e844db96e69d8ec60be not found: ID does not exist" Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.424160 4755 scope.go:117] "RemoveContainer" containerID="0f4edfeee8205d523aea274300b1c71644096d7e9412cd2c97a5cc72be936cf3" Dec 10 15:46:09 crc kubenswrapper[4755]: E1210 15:46:09.424763 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f4edfeee8205d523aea274300b1c71644096d7e9412cd2c97a5cc72be936cf3\": container with ID starting with 0f4edfeee8205d523aea274300b1c71644096d7e9412cd2c97a5cc72be936cf3 not found: ID does not exist" containerID="0f4edfeee8205d523aea274300b1c71644096d7e9412cd2c97a5cc72be936cf3" Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.424809 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f4edfeee8205d523aea274300b1c71644096d7e9412cd2c97a5cc72be936cf3"} err="failed to get container status \"0f4edfeee8205d523aea274300b1c71644096d7e9412cd2c97a5cc72be936cf3\": rpc error: code = NotFound desc = could not find container \"0f4edfeee8205d523aea274300b1c71644096d7e9412cd2c97a5cc72be936cf3\": container with ID starting with 0f4edfeee8205d523aea274300b1c71644096d7e9412cd2c97a5cc72be936cf3 not found: ID does not exist" Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.424837 4755 scope.go:117] "RemoveContainer" containerID="ecffe73cf2e6f498cfeb843b545a2ede0c036ea53d76e69ae1d5c65c61ba4ecb" Dec 10 15:46:09 crc kubenswrapper[4755]: E1210 15:46:09.425621 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ecffe73cf2e6f498cfeb843b545a2ede0c036ea53d76e69ae1d5c65c61ba4ecb\": container with ID starting with ecffe73cf2e6f498cfeb843b545a2ede0c036ea53d76e69ae1d5c65c61ba4ecb not found: ID 
does not exist" containerID="ecffe73cf2e6f498cfeb843b545a2ede0c036ea53d76e69ae1d5c65c61ba4ecb" Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.425728 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecffe73cf2e6f498cfeb843b545a2ede0c036ea53d76e69ae1d5c65c61ba4ecb"} err="failed to get container status \"ecffe73cf2e6f498cfeb843b545a2ede0c036ea53d76e69ae1d5c65c61ba4ecb\": rpc error: code = NotFound desc = could not find container \"ecffe73cf2e6f498cfeb843b545a2ede0c036ea53d76e69ae1d5c65c61ba4ecb\": container with ID starting with ecffe73cf2e6f498cfeb843b545a2ede0c036ea53d76e69ae1d5c65c61ba4ecb not found: ID does not exist" Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.425827 4755 scope.go:117] "RemoveContainer" containerID="e79c0517fb6b35d3e7a4a4b215d0b2896d4362b97e9a7468bb8e2fb0ddd2e95d" Dec 10 15:46:09 crc kubenswrapper[4755]: E1210 15:46:09.428186 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e79c0517fb6b35d3e7a4a4b215d0b2896d4362b97e9a7468bb8e2fb0ddd2e95d\": container with ID starting with e79c0517fb6b35d3e7a4a4b215d0b2896d4362b97e9a7468bb8e2fb0ddd2e95d not found: ID does not exist" containerID="e79c0517fb6b35d3e7a4a4b215d0b2896d4362b97e9a7468bb8e2fb0ddd2e95d" Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.428295 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e79c0517fb6b35d3e7a4a4b215d0b2896d4362b97e9a7468bb8e2fb0ddd2e95d"} err="failed to get container status \"e79c0517fb6b35d3e7a4a4b215d0b2896d4362b97e9a7468bb8e2fb0ddd2e95d\": rpc error: code = NotFound desc = could not find container \"e79c0517fb6b35d3e7a4a4b215d0b2896d4362b97e9a7468bb8e2fb0ddd2e95d\": container with ID starting with e79c0517fb6b35d3e7a4a4b215d0b2896d4362b97e9a7468bb8e2fb0ddd2e95d not found: ID does not exist" Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.428381 4755 scope.go:117] "RemoveContainer" containerID="3083ba2a8aff93d8b36061e24ca72c454ab1c160f38a1e844db96e69d8ec60be" Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.428768 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3083ba2a8aff93d8b36061e24ca72c454ab1c160f38a1e844db96e69d8ec60be"} err="failed to get container status \"3083ba2a8aff93d8b36061e24ca72c454ab1c160f38a1e844db96e69d8ec60be\": rpc error: code = NotFound desc = could not find container \"3083ba2a8aff93d8b36061e24ca72c454ab1c160f38a1e844db96e69d8ec60be\": container with ID starting with 3083ba2a8aff93d8b36061e24ca72c454ab1c160f38a1e844db96e69d8ec60be not found: ID does not exist" Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.428871 4755 scope.go:117] "RemoveContainer" containerID="0f4edfeee8205d523aea274300b1c71644096d7e9412cd2c97a5cc72be936cf3" Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.429110 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f4edfeee8205d523aea274300b1c71644096d7e9412cd2c97a5cc72be936cf3"} err="failed to get container status \"0f4edfeee8205d523aea274300b1c71644096d7e9412cd2c97a5cc72be936cf3\": rpc error: code = NotFound desc = could not find container \"0f4edfeee8205d523aea274300b1c71644096d7e9412cd2c97a5cc72be936cf3\": container with ID starting with 0f4edfeee8205d523aea274300b1c71644096d7e9412cd2c97a5cc72be936cf3 not found: ID does not exist" Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.429206 4755 scope.go:117] 
"RemoveContainer" containerID="ecffe73cf2e6f498cfeb843b545a2ede0c036ea53d76e69ae1d5c65c61ba4ecb" Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.430341 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecffe73cf2e6f498cfeb843b545a2ede0c036ea53d76e69ae1d5c65c61ba4ecb"} err="failed to get container status \"ecffe73cf2e6f498cfeb843b545a2ede0c036ea53d76e69ae1d5c65c61ba4ecb\": rpc error: code = NotFound desc = could not find container \"ecffe73cf2e6f498cfeb843b545a2ede0c036ea53d76e69ae1d5c65c61ba4ecb\": container with ID starting with ecffe73cf2e6f498cfeb843b545a2ede0c036ea53d76e69ae1d5c65c61ba4ecb not found: ID does not exist" Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.430416 4755 scope.go:117] "RemoveContainer" containerID="e79c0517fb6b35d3e7a4a4b215d0b2896d4362b97e9a7468bb8e2fb0ddd2e95d" Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.432789 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e79c0517fb6b35d3e7a4a4b215d0b2896d4362b97e9a7468bb8e2fb0ddd2e95d"} err="failed to get container status \"e79c0517fb6b35d3e7a4a4b215d0b2896d4362b97e9a7468bb8e2fb0ddd2e95d\": rpc error: code = NotFound desc = could not find container \"e79c0517fb6b35d3e7a4a4b215d0b2896d4362b97e9a7468bb8e2fb0ddd2e95d\": container with ID starting with e79c0517fb6b35d3e7a4a4b215d0b2896d4362b97e9a7468bb8e2fb0ddd2e95d not found: ID does not exist" Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.547213 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1eafd33a-2808-4b35-b947-a7a60e905060-run-httpd\") pod \"ceilometer-0\" (UID: \"1eafd33a-2808-4b35-b947-a7a60e905060\") " pod="openstack/ceilometer-0" Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.547299 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cq28s\" (UniqueName: \"kubernetes.io/projected/1eafd33a-2808-4b35-b947-a7a60e905060-kube-api-access-cq28s\") pod \"ceilometer-0\" (UID: \"1eafd33a-2808-4b35-b947-a7a60e905060\") " pod="openstack/ceilometer-0" Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.547333 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1eafd33a-2808-4b35-b947-a7a60e905060-log-httpd\") pod \"ceilometer-0\" (UID: \"1eafd33a-2808-4b35-b947-a7a60e905060\") " pod="openstack/ceilometer-0" Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.547405 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1eafd33a-2808-4b35-b947-a7a60e905060-config-data\") pod \"ceilometer-0\" (UID: \"1eafd33a-2808-4b35-b947-a7a60e905060\") " pod="openstack/ceilometer-0" Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.547426 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1eafd33a-2808-4b35-b947-a7a60e905060-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1eafd33a-2808-4b35-b947-a7a60e905060\") " pod="openstack/ceilometer-0" Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.547450 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/1eafd33a-2808-4b35-b947-a7a60e905060-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1eafd33a-2808-4b35-b947-a7a60e905060\") " pod="openstack/ceilometer-0" Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.547514 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1eafd33a-2808-4b35-b947-a7a60e905060-scripts\") pod \"ceilometer-0\" (UID: \"1eafd33a-2808-4b35-b947-a7a60e905060\") " pod="openstack/ceilometer-0" Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.649389 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1eafd33a-2808-4b35-b947-a7a60e905060-scripts\") pod \"ceilometer-0\" (UID: \"1eafd33a-2808-4b35-b947-a7a60e905060\") " pod="openstack/ceilometer-0" Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.649606 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1eafd33a-2808-4b35-b947-a7a60e905060-run-httpd\") pod \"ceilometer-0\" (UID: \"1eafd33a-2808-4b35-b947-a7a60e905060\") " pod="openstack/ceilometer-0" Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.649673 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cq28s\" (UniqueName: \"kubernetes.io/projected/1eafd33a-2808-4b35-b947-a7a60e905060-kube-api-access-cq28s\") pod \"ceilometer-0\" (UID: \"1eafd33a-2808-4b35-b947-a7a60e905060\") " pod="openstack/ceilometer-0" Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.649708 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1eafd33a-2808-4b35-b947-a7a60e905060-log-httpd\") pod \"ceilometer-0\" (UID: \"1eafd33a-2808-4b35-b947-a7a60e905060\") " pod="openstack/ceilometer-0" Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.649793 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1eafd33a-2808-4b35-b947-a7a60e905060-config-data\") pod \"ceilometer-0\" (UID: \"1eafd33a-2808-4b35-b947-a7a60e905060\") " pod="openstack/ceilometer-0" Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.649826 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1eafd33a-2808-4b35-b947-a7a60e905060-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1eafd33a-2808-4b35-b947-a7a60e905060\") " pod="openstack/ceilometer-0" Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.649862 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1eafd33a-2808-4b35-b947-a7a60e905060-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1eafd33a-2808-4b35-b947-a7a60e905060\") " pod="openstack/ceilometer-0" Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.650394 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1eafd33a-2808-4b35-b947-a7a60e905060-log-httpd\") pod \"ceilometer-0\" (UID: \"1eafd33a-2808-4b35-b947-a7a60e905060\") " pod="openstack/ceilometer-0" Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.650502 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/1eafd33a-2808-4b35-b947-a7a60e905060-run-httpd\") pod \"ceilometer-0\" (UID: \"1eafd33a-2808-4b35-b947-a7a60e905060\") " pod="openstack/ceilometer-0" Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.652736 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1eafd33a-2808-4b35-b947-a7a60e905060-scripts\") pod \"ceilometer-0\" (UID: \"1eafd33a-2808-4b35-b947-a7a60e905060\") " pod="openstack/ceilometer-0" Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.654225 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1eafd33a-2808-4b35-b947-a7a60e905060-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1eafd33a-2808-4b35-b947-a7a60e905060\") " pod="openstack/ceilometer-0" Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.655618 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1eafd33a-2808-4b35-b947-a7a60e905060-config-data\") pod \"ceilometer-0\" (UID: \"1eafd33a-2808-4b35-b947-a7a60e905060\") " pod="openstack/ceilometer-0" Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.665431 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1eafd33a-2808-4b35-b947-a7a60e905060-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1eafd33a-2808-4b35-b947-a7a60e905060\") " pod="openstack/ceilometer-0" Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.669321 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cq28s\" (UniqueName: \"kubernetes.io/projected/1eafd33a-2808-4b35-b947-a7a60e905060-kube-api-access-cq28s\") pod \"ceilometer-0\" (UID: \"1eafd33a-2808-4b35-b947-a7a60e905060\") " pod="openstack/ceilometer-0" Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.725667 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 10 15:46:09 crc kubenswrapper[4755]: I1210 15:46:09.772096 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1396ebd-2a72-4d57-a5af-a7e1dec09762" path="/var/lib/kubelet/pods/b1396ebd-2a72-4d57-a5af-a7e1dec09762/volumes" Dec 10 15:46:10 crc kubenswrapper[4755]: W1210 15:46:10.241063 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1eafd33a_2808_4b35_b947_a7a60e905060.slice/crio-cd3fa9fb4f5730b8ee92866126a8ed81845a7b19b8992447b62eca4166fa1f22 WatchSource:0}: Error finding container cd3fa9fb4f5730b8ee92866126a8ed81845a7b19b8992447b62eca4166fa1f22: Status 404 returned error can't find the container with id cd3fa9fb4f5730b8ee92866126a8ed81845a7b19b8992447b62eca4166fa1f22 Dec 10 15:46:10 crc kubenswrapper[4755]: I1210 15:46:10.246216 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 10 15:46:10 crc kubenswrapper[4755]: I1210 15:46:10.296766 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1eafd33a-2808-4b35-b947-a7a60e905060","Type":"ContainerStarted","Data":"cd3fa9fb4f5730b8ee92866126a8ed81845a7b19b8992447b62eca4166fa1f22"} Dec 10 15:46:10 crc kubenswrapper[4755]: I1210 15:46:10.359128 4755 patch_prober.go:28] interesting pod/machine-config-daemon-ggt8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:46:10 crc kubenswrapper[4755]: I1210 15:46:10.359206 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:46:11 crc kubenswrapper[4755]: I1210 15:46:11.310937 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1eafd33a-2808-4b35-b947-a7a60e905060","Type":"ContainerStarted","Data":"96b996c0e138ca9310c156598483c370545afbf4a92c9cc09db0447edc81faac"} Dec 10 15:46:12 crc kubenswrapper[4755]: I1210 15:46:12.324009 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1eafd33a-2808-4b35-b947-a7a60e905060","Type":"ContainerStarted","Data":"18416dbacce5aa6468f363c38fa0c79eaf3627ed5f08a7471c3a48ba24e063d4"} Dec 10 15:46:13 crc kubenswrapper[4755]: I1210 15:46:13.337332 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1eafd33a-2808-4b35-b947-a7a60e905060","Type":"ContainerStarted","Data":"41e0df8f17e958fc25020aaf5d78783fb92230f74c32e6ee83a255045e7dd8cb"} Dec 10 15:46:14 crc kubenswrapper[4755]: I1210 15:46:14.354075 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1eafd33a-2808-4b35-b947-a7a60e905060","Type":"ContainerStarted","Data":"19606ce160e6b91b398b11d4357f9766b677c5c385b73e29c58c919fe5b0c455"} Dec 10 15:46:14 crc kubenswrapper[4755]: I1210 15:46:14.354720 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 10 15:46:14 crc kubenswrapper[4755]: I1210 15:46:14.381749 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ceilometer-0" podStartSLOduration=2.039691585 podStartE2EDuration="5.381725162s" podCreationTimestamp="2025-12-10 15:46:09 +0000 UTC" firstStartedPulling="2025-12-10 15:46:10.244046948 +0000 UTC m=+1366.844930590" lastFinishedPulling="2025-12-10 15:46:13.586080525 +0000 UTC m=+1370.186964167" observedRunningTime="2025-12-10 15:46:14.375682647 +0000 UTC m=+1370.976566279" watchObservedRunningTime="2025-12-10 15:46:14.381725162 +0000 UTC m=+1370.982608804" Dec 10 15:46:17 crc kubenswrapper[4755]: I1210 15:46:17.184837 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wxn4w"] Dec 10 15:46:17 crc kubenswrapper[4755]: I1210 15:46:17.187666 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wxn4w" Dec 10 15:46:17 crc kubenswrapper[4755]: I1210 15:46:17.260291 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wxn4w"] Dec 10 15:46:17 crc kubenswrapper[4755]: I1210 15:46:17.311654 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjzmr\" (UniqueName: \"kubernetes.io/projected/4cb6ed71-48b0-45c8-a470-4b6441c7bff5-kube-api-access-rjzmr\") pod \"redhat-operators-wxn4w\" (UID: \"4cb6ed71-48b0-45c8-a470-4b6441c7bff5\") " pod="openshift-marketplace/redhat-operators-wxn4w" Dec 10 15:46:17 crc kubenswrapper[4755]: I1210 15:46:17.311768 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cb6ed71-48b0-45c8-a470-4b6441c7bff5-catalog-content\") pod \"redhat-operators-wxn4w\" (UID: \"4cb6ed71-48b0-45c8-a470-4b6441c7bff5\") " pod="openshift-marketplace/redhat-operators-wxn4w" Dec 10 15:46:17 crc kubenswrapper[4755]: I1210 15:46:17.311893 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cb6ed71-48b0-45c8-a470-4b6441c7bff5-utilities\") pod \"redhat-operators-wxn4w\" (UID: \"4cb6ed71-48b0-45c8-a470-4b6441c7bff5\") " pod="openshift-marketplace/redhat-operators-wxn4w" Dec 10 15:46:17 crc kubenswrapper[4755]: I1210 15:46:17.414615 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjzmr\" (UniqueName: \"kubernetes.io/projected/4cb6ed71-48b0-45c8-a470-4b6441c7bff5-kube-api-access-rjzmr\") pod \"redhat-operators-wxn4w\" (UID: \"4cb6ed71-48b0-45c8-a470-4b6441c7bff5\") " pod="openshift-marketplace/redhat-operators-wxn4w" Dec 10 15:46:17 crc kubenswrapper[4755]: I1210 15:46:17.415041 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cb6ed71-48b0-45c8-a470-4b6441c7bff5-catalog-content\") pod \"redhat-operators-wxn4w\" (UID: \"4cb6ed71-48b0-45c8-a470-4b6441c7bff5\") " pod="openshift-marketplace/redhat-operators-wxn4w" Dec 10 15:46:17 crc kubenswrapper[4755]: I1210 15:46:17.415237 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cb6ed71-48b0-45c8-a470-4b6441c7bff5-utilities\") pod \"redhat-operators-wxn4w\" (UID: \"4cb6ed71-48b0-45c8-a470-4b6441c7bff5\") " pod="openshift-marketplace/redhat-operators-wxn4w" Dec 10 15:46:17 crc kubenswrapper[4755]: I1210 15:46:17.415811 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/4cb6ed71-48b0-45c8-a470-4b6441c7bff5-utilities\") pod \"redhat-operators-wxn4w\" (UID: \"4cb6ed71-48b0-45c8-a470-4b6441c7bff5\") " pod="openshift-marketplace/redhat-operators-wxn4w" Dec 10 15:46:17 crc kubenswrapper[4755]: I1210 15:46:17.416252 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cb6ed71-48b0-45c8-a470-4b6441c7bff5-catalog-content\") pod \"redhat-operators-wxn4w\" (UID: \"4cb6ed71-48b0-45c8-a470-4b6441c7bff5\") " pod="openshift-marketplace/redhat-operators-wxn4w" Dec 10 15:46:17 crc kubenswrapper[4755]: I1210 15:46:17.498980 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjzmr\" (UniqueName: \"kubernetes.io/projected/4cb6ed71-48b0-45c8-a470-4b6441c7bff5-kube-api-access-rjzmr\") pod \"redhat-operators-wxn4w\" (UID: \"4cb6ed71-48b0-45c8-a470-4b6441c7bff5\") " pod="openshift-marketplace/redhat-operators-wxn4w" Dec 10 15:46:17 crc kubenswrapper[4755]: I1210 15:46:17.508956 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wxn4w" Dec 10 15:46:17 crc kubenswrapper[4755]: I1210 15:46:17.702693 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Dec 10 15:46:18 crc kubenswrapper[4755]: I1210 15:46:18.165172 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wxn4w"] Dec 10 15:46:18 crc kubenswrapper[4755]: I1210 15:46:18.344344 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-h7t25"] Dec 10 15:46:18 crc kubenswrapper[4755]: I1210 15:46:18.346086 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-h7t25" Dec 10 15:46:18 crc kubenswrapper[4755]: I1210 15:46:18.349222 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Dec 10 15:46:18 crc kubenswrapper[4755]: I1210 15:46:18.349452 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Dec 10 15:46:18 crc kubenswrapper[4755]: I1210 15:46:18.384585 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-h7t25"] Dec 10 15:46:18 crc kubenswrapper[4755]: I1210 15:46:18.456217 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c537dd47-5926-4c76-9a78-af49c9418027-scripts\") pod \"nova-cell0-cell-mapping-h7t25\" (UID: \"c537dd47-5926-4c76-9a78-af49c9418027\") " pod="openstack/nova-cell0-cell-mapping-h7t25" Dec 10 15:46:18 crc kubenswrapper[4755]: I1210 15:46:18.456297 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9rvg\" (UniqueName: \"kubernetes.io/projected/c537dd47-5926-4c76-9a78-af49c9418027-kube-api-access-d9rvg\") pod \"nova-cell0-cell-mapping-h7t25\" (UID: \"c537dd47-5926-4c76-9a78-af49c9418027\") " pod="openstack/nova-cell0-cell-mapping-h7t25" Dec 10 15:46:18 crc kubenswrapper[4755]: I1210 15:46:18.456337 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c537dd47-5926-4c76-9a78-af49c9418027-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-h7t25\" (UID: \"c537dd47-5926-4c76-9a78-af49c9418027\") " pod="openstack/nova-cell0-cell-mapping-h7t25" Dec 10 15:46:18 crc kubenswrapper[4755]: I1210 15:46:18.456650 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c537dd47-5926-4c76-9a78-af49c9418027-config-data\") pod \"nova-cell0-cell-mapping-h7t25\" (UID: \"c537dd47-5926-4c76-9a78-af49c9418027\") " pod="openstack/nova-cell0-cell-mapping-h7t25" Dec 10 15:46:18 crc kubenswrapper[4755]: I1210 15:46:18.472968 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wxn4w" event={"ID":"4cb6ed71-48b0-45c8-a470-4b6441c7bff5","Type":"ContainerStarted","Data":"aefea556f3904abbf91f85784deee7d232bdbc4be754fa983bc40cb0a4978e11"} Dec 10 15:46:18 crc kubenswrapper[4755]: I1210 15:46:18.559348 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 10 15:46:18 crc kubenswrapper[4755]: I1210 15:46:18.559718 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c537dd47-5926-4c76-9a78-af49c9418027-scripts\") pod \"nova-cell0-cell-mapping-h7t25\" (UID: \"c537dd47-5926-4c76-9a78-af49c9418027\") " pod="openstack/nova-cell0-cell-mapping-h7t25" Dec 10 15:46:18 crc kubenswrapper[4755]: I1210 15:46:18.559770 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9rvg\" (UniqueName: \"kubernetes.io/projected/c537dd47-5926-4c76-9a78-af49c9418027-kube-api-access-d9rvg\") pod \"nova-cell0-cell-mapping-h7t25\" (UID: \"c537dd47-5926-4c76-9a78-af49c9418027\") " pod="openstack/nova-cell0-cell-mapping-h7t25" Dec 10 15:46:18 crc kubenswrapper[4755]: I1210 15:46:18.559792 4755 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c537dd47-5926-4c76-9a78-af49c9418027-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-h7t25\" (UID: \"c537dd47-5926-4c76-9a78-af49c9418027\") " pod="openstack/nova-cell0-cell-mapping-h7t25" Dec 10 15:46:18 crc kubenswrapper[4755]: I1210 15:46:18.559957 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c537dd47-5926-4c76-9a78-af49c9418027-config-data\") pod \"nova-cell0-cell-mapping-h7t25\" (UID: \"c537dd47-5926-4c76-9a78-af49c9418027\") " pod="openstack/nova-cell0-cell-mapping-h7t25" Dec 10 15:46:18 crc kubenswrapper[4755]: I1210 15:46:18.561262 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 10 15:46:18 crc kubenswrapper[4755]: I1210 15:46:18.568025 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c537dd47-5926-4c76-9a78-af49c9418027-scripts\") pod \"nova-cell0-cell-mapping-h7t25\" (UID: \"c537dd47-5926-4c76-9a78-af49c9418027\") " pod="openstack/nova-cell0-cell-mapping-h7t25" Dec 10 15:46:18 crc kubenswrapper[4755]: I1210 15:46:18.577534 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 10 15:46:18 crc kubenswrapper[4755]: I1210 15:46:18.583332 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c537dd47-5926-4c76-9a78-af49c9418027-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-h7t25\" (UID: \"c537dd47-5926-4c76-9a78-af49c9418027\") " pod="openstack/nova-cell0-cell-mapping-h7t25" Dec 10 15:46:18 crc kubenswrapper[4755]: I1210 15:46:18.583796 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 10 15:46:18 crc kubenswrapper[4755]: I1210 15:46:18.585820 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c537dd47-5926-4c76-9a78-af49c9418027-config-data\") pod \"nova-cell0-cell-mapping-h7t25\" (UID: \"c537dd47-5926-4c76-9a78-af49c9418027\") " pod="openstack/nova-cell0-cell-mapping-h7t25" Dec 10 15:46:18 crc kubenswrapper[4755]: I1210 15:46:18.596039 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9rvg\" (UniqueName: \"kubernetes.io/projected/c537dd47-5926-4c76-9a78-af49c9418027-kube-api-access-d9rvg\") pod \"nova-cell0-cell-mapping-h7t25\" (UID: \"c537dd47-5926-4c76-9a78-af49c9418027\") " pod="openstack/nova-cell0-cell-mapping-h7t25" Dec 10 15:46:18 crc kubenswrapper[4755]: I1210 15:46:18.631616 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 10 15:46:18 crc kubenswrapper[4755]: I1210 15:46:18.633013 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:46:18 crc kubenswrapper[4755]: I1210 15:46:18.646554 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Dec 10 15:46:18 crc kubenswrapper[4755]: I1210 15:46:18.661535 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b6fac70-baa4-451b-92bc-3b26e2fc08ec-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4b6fac70-baa4-451b-92bc-3b26e2fc08ec\") " pod="openstack/nova-api-0" Dec 10 15:46:18 crc kubenswrapper[4755]: I1210 15:46:18.661608 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b6fac70-baa4-451b-92bc-3b26e2fc08ec-logs\") pod \"nova-api-0\" (UID: \"4b6fac70-baa4-451b-92bc-3b26e2fc08ec\") " pod="openstack/nova-api-0" Dec 10 15:46:18 crc kubenswrapper[4755]: I1210 15:46:18.661662 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14f7a209-c2fe-4945-9793-a8e4fd08083d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"14f7a209-c2fe-4945-9793-a8e4fd08083d\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:46:18 crc kubenswrapper[4755]: I1210 15:46:18.661704 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b6fac70-baa4-451b-92bc-3b26e2fc08ec-config-data\") pod \"nova-api-0\" (UID: \"4b6fac70-baa4-451b-92bc-3b26e2fc08ec\") " pod="openstack/nova-api-0" Dec 10 15:46:18 crc kubenswrapper[4755]: I1210 15:46:18.661723 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wq7wc\" (UniqueName: \"kubernetes.io/projected/4b6fac70-baa4-451b-92bc-3b26e2fc08ec-kube-api-access-wq7wc\") pod \"nova-api-0\" (UID: \"4b6fac70-baa4-451b-92bc-3b26e2fc08ec\") " pod="openstack/nova-api-0" Dec 10 15:46:18 crc kubenswrapper[4755]: I1210 15:46:18.661761 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5qgk\" (UniqueName: \"kubernetes.io/projected/14f7a209-c2fe-4945-9793-a8e4fd08083d-kube-api-access-n5qgk\") pod \"nova-cell1-novncproxy-0\" (UID: \"14f7a209-c2fe-4945-9793-a8e4fd08083d\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:46:18 crc kubenswrapper[4755]: I1210 15:46:18.661803 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14f7a209-c2fe-4945-9793-a8e4fd08083d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"14f7a209-c2fe-4945-9793-a8e4fd08083d\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:46:18 crc kubenswrapper[4755]: I1210 15:46:18.665942 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 10 15:46:18 crc kubenswrapper[4755]: I1210 15:46:18.668738 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-h7t25" Dec 10 15:46:18 crc kubenswrapper[4755]: I1210 15:46:18.743485 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 10 15:46:18 crc kubenswrapper[4755]: I1210 15:46:18.745688 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 10 15:46:18 crc kubenswrapper[4755]: I1210 15:46:18.758506 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 10 15:46:18 crc kubenswrapper[4755]: I1210 15:46:18.758807 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 10 15:46:18 crc kubenswrapper[4755]: I1210 15:46:18.764550 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5qgk\" (UniqueName: \"kubernetes.io/projected/14f7a209-c2fe-4945-9793-a8e4fd08083d-kube-api-access-n5qgk\") pod \"nova-cell1-novncproxy-0\" (UID: \"14f7a209-c2fe-4945-9793-a8e4fd08083d\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:46:18 crc kubenswrapper[4755]: I1210 15:46:18.764638 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14f7a209-c2fe-4945-9793-a8e4fd08083d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"14f7a209-c2fe-4945-9793-a8e4fd08083d\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:46:18 crc kubenswrapper[4755]: I1210 15:46:18.764690 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b6fac70-baa4-451b-92bc-3b26e2fc08ec-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4b6fac70-baa4-451b-92bc-3b26e2fc08ec\") " pod="openstack/nova-api-0" Dec 10 15:46:18 crc kubenswrapper[4755]: I1210 15:46:18.764759 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b6fac70-baa4-451b-92bc-3b26e2fc08ec-logs\") pod \"nova-api-0\" (UID: \"4b6fac70-baa4-451b-92bc-3b26e2fc08ec\") " pod="openstack/nova-api-0" Dec 10 15:46:18 crc kubenswrapper[4755]: I1210 15:46:18.764834 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14f7a209-c2fe-4945-9793-a8e4fd08083d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"14f7a209-c2fe-4945-9793-a8e4fd08083d\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:46:18 crc kubenswrapper[4755]: I1210 15:46:18.764897 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b6fac70-baa4-451b-92bc-3b26e2fc08ec-config-data\") pod \"nova-api-0\" (UID: \"4b6fac70-baa4-451b-92bc-3b26e2fc08ec\") " pod="openstack/nova-api-0" Dec 10 15:46:18 crc kubenswrapper[4755]: I1210 15:46:18.764925 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wq7wc\" (UniqueName: \"kubernetes.io/projected/4b6fac70-baa4-451b-92bc-3b26e2fc08ec-kube-api-access-wq7wc\") pod \"nova-api-0\" (UID: \"4b6fac70-baa4-451b-92bc-3b26e2fc08ec\") " pod="openstack/nova-api-0" Dec 10 15:46:18 crc kubenswrapper[4755]: I1210 15:46:18.766964 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b6fac70-baa4-451b-92bc-3b26e2fc08ec-logs\") pod \"nova-api-0\" (UID: \"4b6fac70-baa4-451b-92bc-3b26e2fc08ec\") " pod="openstack/nova-api-0" Dec 10 15:46:18 crc kubenswrapper[4755]: I1210 15:46:18.773092 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14f7a209-c2fe-4945-9793-a8e4fd08083d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"14f7a209-c2fe-4945-9793-a8e4fd08083d\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:46:18 crc kubenswrapper[4755]: I1210 15:46:18.779331 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b6fac70-baa4-451b-92bc-3b26e2fc08ec-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4b6fac70-baa4-451b-92bc-3b26e2fc08ec\") " pod="openstack/nova-api-0" Dec 10 15:46:18 crc kubenswrapper[4755]: I1210 15:46:18.779617 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b6fac70-baa4-451b-92bc-3b26e2fc08ec-config-data\") pod \"nova-api-0\" (UID: \"4b6fac70-baa4-451b-92bc-3b26e2fc08ec\") " pod="openstack/nova-api-0" Dec 10 15:46:18 crc kubenswrapper[4755]: I1210 15:46:18.779972 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14f7a209-c2fe-4945-9793-a8e4fd08083d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"14f7a209-c2fe-4945-9793-a8e4fd08083d\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:46:18 crc kubenswrapper[4755]: I1210 15:46:18.828425 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5qgk\" (UniqueName: \"kubernetes.io/projected/14f7a209-c2fe-4945-9793-a8e4fd08083d-kube-api-access-n5qgk\") pod \"nova-cell1-novncproxy-0\" (UID: \"14f7a209-c2fe-4945-9793-a8e4fd08083d\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:46:18 crc kubenswrapper[4755]: I1210 15:46:18.832057 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wq7wc\" (UniqueName: \"kubernetes.io/projected/4b6fac70-baa4-451b-92bc-3b26e2fc08ec-kube-api-access-wq7wc\") pod \"nova-api-0\" (UID: \"4b6fac70-baa4-451b-92bc-3b26e2fc08ec\") " pod="openstack/nova-api-0" Dec 10 15:46:18 crc kubenswrapper[4755]: I1210 15:46:18.859217 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 10 15:46:18 crc kubenswrapper[4755]: I1210 15:46:18.868037 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae2b700f-93b0-4324-bac8-677b580a18a7-config-data\") pod \"nova-scheduler-0\" (UID: \"ae2b700f-93b0-4324-bac8-677b580a18a7\") " pod="openstack/nova-scheduler-0" Dec 10 15:46:18 crc kubenswrapper[4755]: I1210 15:46:18.868141 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae2b700f-93b0-4324-bac8-677b580a18a7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ae2b700f-93b0-4324-bac8-677b580a18a7\") " pod="openstack/nova-scheduler-0" Dec 10 15:46:18 crc kubenswrapper[4755]: I1210 15:46:18.868315 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzpr7\" (UniqueName: \"kubernetes.io/projected/ae2b700f-93b0-4324-bac8-677b580a18a7-kube-api-access-jzpr7\") pod \"nova-scheduler-0\" (UID: \"ae2b700f-93b0-4324-bac8-677b580a18a7\") " pod="openstack/nova-scheduler-0" Dec 10 15:46:18 crc kubenswrapper[4755]: I1210 15:46:18.888539 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:46:18 crc kubenswrapper[4755]: I1210 15:46:18.888549 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 10 15:46:18 crc kubenswrapper[4755]: I1210 15:46:18.891813 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 10 15:46:18 crc kubenswrapper[4755]: I1210 15:46:18.898860 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 10 15:46:18 crc kubenswrapper[4755]: I1210 15:46:18.932531 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 10 15:46:18 crc kubenswrapper[4755]: I1210 15:46:18.970913 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae2b700f-93b0-4324-bac8-677b580a18a7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ae2b700f-93b0-4324-bac8-677b580a18a7\") " pod="openstack/nova-scheduler-0" Dec 10 15:46:18 crc kubenswrapper[4755]: I1210 15:46:18.971008 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d26dac70-f3b8-4932-992f-5b440275b499-config-data\") pod \"nova-metadata-0\" (UID: \"d26dac70-f3b8-4932-992f-5b440275b499\") " pod="openstack/nova-metadata-0" Dec 10 15:46:18 crc kubenswrapper[4755]: I1210 15:46:18.971057 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d26dac70-f3b8-4932-992f-5b440275b499-logs\") pod \"nova-metadata-0\" (UID: \"d26dac70-f3b8-4932-992f-5b440275b499\") " pod="openstack/nova-metadata-0" Dec 10 15:46:18 crc kubenswrapper[4755]: I1210 15:46:18.971159 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzpr7\" (UniqueName: \"kubernetes.io/projected/ae2b700f-93b0-4324-bac8-677b580a18a7-kube-api-access-jzpr7\") pod \"nova-scheduler-0\" (UID: \"ae2b700f-93b0-4324-bac8-677b580a18a7\") " pod="openstack/nova-scheduler-0" Dec 10 15:46:18 crc kubenswrapper[4755]: I1210 15:46:18.971279 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d26dac70-f3b8-4932-992f-5b440275b499-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d26dac70-f3b8-4932-992f-5b440275b499\") " pod="openstack/nova-metadata-0" Dec 10 15:46:18 crc kubenswrapper[4755]: I1210 15:46:18.971307 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae2b700f-93b0-4324-bac8-677b580a18a7-config-data\") pod \"nova-scheduler-0\" (UID: \"ae2b700f-93b0-4324-bac8-677b580a18a7\") " pod="openstack/nova-scheduler-0" Dec 10 15:46:18 crc kubenswrapper[4755]: I1210 15:46:18.971371 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6nmc\" (UniqueName: \"kubernetes.io/projected/d26dac70-f3b8-4932-992f-5b440275b499-kube-api-access-r6nmc\") pod \"nova-metadata-0\" (UID: \"d26dac70-f3b8-4932-992f-5b440275b499\") " pod="openstack/nova-metadata-0" Dec 10 15:46:19 crc kubenswrapper[4755]: I1210 15:46:18.998990 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-884c8b8f5-p2gb9"] Dec 10 15:46:19 crc kubenswrapper[4755]: I1210 15:46:19.002441 
4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-884c8b8f5-p2gb9" Dec 10 15:46:19 crc kubenswrapper[4755]: I1210 15:46:19.008587 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-884c8b8f5-p2gb9"] Dec 10 15:46:19 crc kubenswrapper[4755]: I1210 15:46:19.025444 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae2b700f-93b0-4324-bac8-677b580a18a7-config-data\") pod \"nova-scheduler-0\" (UID: \"ae2b700f-93b0-4324-bac8-677b580a18a7\") " pod="openstack/nova-scheduler-0" Dec 10 15:46:19 crc kubenswrapper[4755]: I1210 15:46:19.025842 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae2b700f-93b0-4324-bac8-677b580a18a7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ae2b700f-93b0-4324-bac8-677b580a18a7\") " pod="openstack/nova-scheduler-0" Dec 10 15:46:19 crc kubenswrapper[4755]: I1210 15:46:19.026065 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzpr7\" (UniqueName: \"kubernetes.io/projected/ae2b700f-93b0-4324-bac8-677b580a18a7-kube-api-access-jzpr7\") pod \"nova-scheduler-0\" (UID: \"ae2b700f-93b0-4324-bac8-677b580a18a7\") " pod="openstack/nova-scheduler-0" Dec 10 15:46:19 crc kubenswrapper[4755]: I1210 15:46:19.081620 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b7f7f25-62c8-45da-bbf6-1759da909d8b-config\") pod \"dnsmasq-dns-884c8b8f5-p2gb9\" (UID: \"7b7f7f25-62c8-45da-bbf6-1759da909d8b\") " pod="openstack/dnsmasq-dns-884c8b8f5-p2gb9" Dec 10 15:46:19 crc kubenswrapper[4755]: I1210 15:46:19.082226 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d26dac70-f3b8-4932-992f-5b440275b499-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d26dac70-f3b8-4932-992f-5b440275b499\") " pod="openstack/nova-metadata-0" Dec 10 15:46:19 crc kubenswrapper[4755]: I1210 15:46:19.082255 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frkhj\" (UniqueName: \"kubernetes.io/projected/7b7f7f25-62c8-45da-bbf6-1759da909d8b-kube-api-access-frkhj\") pod \"dnsmasq-dns-884c8b8f5-p2gb9\" (UID: \"7b7f7f25-62c8-45da-bbf6-1759da909d8b\") " pod="openstack/dnsmasq-dns-884c8b8f5-p2gb9" Dec 10 15:46:19 crc kubenswrapper[4755]: I1210 15:46:19.082293 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6nmc\" (UniqueName: \"kubernetes.io/projected/d26dac70-f3b8-4932-992f-5b440275b499-kube-api-access-r6nmc\") pod \"nova-metadata-0\" (UID: \"d26dac70-f3b8-4932-992f-5b440275b499\") " pod="openstack/nova-metadata-0" Dec 10 15:46:19 crc kubenswrapper[4755]: I1210 15:46:19.082328 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7b7f7f25-62c8-45da-bbf6-1759da909d8b-ovsdbserver-nb\") pod \"dnsmasq-dns-884c8b8f5-p2gb9\" (UID: \"7b7f7f25-62c8-45da-bbf6-1759da909d8b\") " pod="openstack/dnsmasq-dns-884c8b8f5-p2gb9" Dec 10 15:46:19 crc kubenswrapper[4755]: I1210 15:46:19.082393 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/7b7f7f25-62c8-45da-bbf6-1759da909d8b-dns-swift-storage-0\") pod \"dnsmasq-dns-884c8b8f5-p2gb9\" (UID: \"7b7f7f25-62c8-45da-bbf6-1759da909d8b\") " pod="openstack/dnsmasq-dns-884c8b8f5-p2gb9" Dec 10 15:46:19 crc kubenswrapper[4755]: I1210 15:46:19.082455 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d26dac70-f3b8-4932-992f-5b440275b499-config-data\") pod \"nova-metadata-0\" (UID: \"d26dac70-f3b8-4932-992f-5b440275b499\") " pod="openstack/nova-metadata-0" Dec 10 15:46:19 crc kubenswrapper[4755]: I1210 15:46:19.082494 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7b7f7f25-62c8-45da-bbf6-1759da909d8b-dns-svc\") pod \"dnsmasq-dns-884c8b8f5-p2gb9\" (UID: \"7b7f7f25-62c8-45da-bbf6-1759da909d8b\") " pod="openstack/dnsmasq-dns-884c8b8f5-p2gb9" Dec 10 15:46:19 crc kubenswrapper[4755]: I1210 15:46:19.082543 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d26dac70-f3b8-4932-992f-5b440275b499-logs\") pod \"nova-metadata-0\" (UID: \"d26dac70-f3b8-4932-992f-5b440275b499\") " pod="openstack/nova-metadata-0" Dec 10 15:46:19 crc kubenswrapper[4755]: I1210 15:46:19.082570 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7b7f7f25-62c8-45da-bbf6-1759da909d8b-ovsdbserver-sb\") pod \"dnsmasq-dns-884c8b8f5-p2gb9\" (UID: \"7b7f7f25-62c8-45da-bbf6-1759da909d8b\") " pod="openstack/dnsmasq-dns-884c8b8f5-p2gb9" Dec 10 15:46:19 crc kubenswrapper[4755]: I1210 15:46:19.085581 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d26dac70-f3b8-4932-992f-5b440275b499-logs\") pod \"nova-metadata-0\" (UID: \"d26dac70-f3b8-4932-992f-5b440275b499\") " pod="openstack/nova-metadata-0" Dec 10 15:46:19 crc kubenswrapper[4755]: I1210 15:46:19.104323 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d26dac70-f3b8-4932-992f-5b440275b499-config-data\") pod \"nova-metadata-0\" (UID: \"d26dac70-f3b8-4932-992f-5b440275b499\") " pod="openstack/nova-metadata-0" Dec 10 15:46:19 crc kubenswrapper[4755]: I1210 15:46:19.136909 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d26dac70-f3b8-4932-992f-5b440275b499-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d26dac70-f3b8-4932-992f-5b440275b499\") " pod="openstack/nova-metadata-0" Dec 10 15:46:19 crc kubenswrapper[4755]: I1210 15:46:19.148797 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6nmc\" (UniqueName: \"kubernetes.io/projected/d26dac70-f3b8-4932-992f-5b440275b499-kube-api-access-r6nmc\") pod \"nova-metadata-0\" (UID: \"d26dac70-f3b8-4932-992f-5b440275b499\") " pod="openstack/nova-metadata-0" Dec 10 15:46:19 crc kubenswrapper[4755]: I1210 15:46:19.186856 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7b7f7f25-62c8-45da-bbf6-1759da909d8b-dns-swift-storage-0\") pod \"dnsmasq-dns-884c8b8f5-p2gb9\" (UID: \"7b7f7f25-62c8-45da-bbf6-1759da909d8b\") " pod="openstack/dnsmasq-dns-884c8b8f5-p2gb9" Dec 10 15:46:19 crc 
kubenswrapper[4755]: I1210 15:46:19.186963 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7b7f7f25-62c8-45da-bbf6-1759da909d8b-dns-svc\") pod \"dnsmasq-dns-884c8b8f5-p2gb9\" (UID: \"7b7f7f25-62c8-45da-bbf6-1759da909d8b\") " pod="openstack/dnsmasq-dns-884c8b8f5-p2gb9" Dec 10 15:46:19 crc kubenswrapper[4755]: I1210 15:46:19.187021 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7b7f7f25-62c8-45da-bbf6-1759da909d8b-ovsdbserver-sb\") pod \"dnsmasq-dns-884c8b8f5-p2gb9\" (UID: \"7b7f7f25-62c8-45da-bbf6-1759da909d8b\") " pod="openstack/dnsmasq-dns-884c8b8f5-p2gb9" Dec 10 15:46:19 crc kubenswrapper[4755]: I1210 15:46:19.187099 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b7f7f25-62c8-45da-bbf6-1759da909d8b-config\") pod \"dnsmasq-dns-884c8b8f5-p2gb9\" (UID: \"7b7f7f25-62c8-45da-bbf6-1759da909d8b\") " pod="openstack/dnsmasq-dns-884c8b8f5-p2gb9" Dec 10 15:46:19 crc kubenswrapper[4755]: I1210 15:46:19.187177 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frkhj\" (UniqueName: \"kubernetes.io/projected/7b7f7f25-62c8-45da-bbf6-1759da909d8b-kube-api-access-frkhj\") pod \"dnsmasq-dns-884c8b8f5-p2gb9\" (UID: \"7b7f7f25-62c8-45da-bbf6-1759da909d8b\") " pod="openstack/dnsmasq-dns-884c8b8f5-p2gb9" Dec 10 15:46:19 crc kubenswrapper[4755]: I1210 15:46:19.187230 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7b7f7f25-62c8-45da-bbf6-1759da909d8b-ovsdbserver-nb\") pod \"dnsmasq-dns-884c8b8f5-p2gb9\" (UID: \"7b7f7f25-62c8-45da-bbf6-1759da909d8b\") " pod="openstack/dnsmasq-dns-884c8b8f5-p2gb9" Dec 10 15:46:19 crc kubenswrapper[4755]: I1210 15:46:19.188374 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7b7f7f25-62c8-45da-bbf6-1759da909d8b-ovsdbserver-nb\") pod \"dnsmasq-dns-884c8b8f5-p2gb9\" (UID: \"7b7f7f25-62c8-45da-bbf6-1759da909d8b\") " pod="openstack/dnsmasq-dns-884c8b8f5-p2gb9" Dec 10 15:46:19 crc kubenswrapper[4755]: I1210 15:46:19.189053 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7b7f7f25-62c8-45da-bbf6-1759da909d8b-dns-swift-storage-0\") pod \"dnsmasq-dns-884c8b8f5-p2gb9\" (UID: \"7b7f7f25-62c8-45da-bbf6-1759da909d8b\") " pod="openstack/dnsmasq-dns-884c8b8f5-p2gb9" Dec 10 15:46:19 crc kubenswrapper[4755]: I1210 15:46:19.195300 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b7f7f25-62c8-45da-bbf6-1759da909d8b-config\") pod \"dnsmasq-dns-884c8b8f5-p2gb9\" (UID: \"7b7f7f25-62c8-45da-bbf6-1759da909d8b\") " pod="openstack/dnsmasq-dns-884c8b8f5-p2gb9" Dec 10 15:46:19 crc kubenswrapper[4755]: I1210 15:46:19.195358 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7b7f7f25-62c8-45da-bbf6-1759da909d8b-dns-svc\") pod \"dnsmasq-dns-884c8b8f5-p2gb9\" (UID: \"7b7f7f25-62c8-45da-bbf6-1759da909d8b\") " pod="openstack/dnsmasq-dns-884c8b8f5-p2gb9" Dec 10 15:46:19 crc kubenswrapper[4755]: I1210 15:46:19.195926 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/7b7f7f25-62c8-45da-bbf6-1759da909d8b-ovsdbserver-sb\") pod \"dnsmasq-dns-884c8b8f5-p2gb9\" (UID: \"7b7f7f25-62c8-45da-bbf6-1759da909d8b\") " pod="openstack/dnsmasq-dns-884c8b8f5-p2gb9" Dec 10 15:46:19 crc kubenswrapper[4755]: I1210 15:46:19.215022 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frkhj\" (UniqueName: \"kubernetes.io/projected/7b7f7f25-62c8-45da-bbf6-1759da909d8b-kube-api-access-frkhj\") pod \"dnsmasq-dns-884c8b8f5-p2gb9\" (UID: \"7b7f7f25-62c8-45da-bbf6-1759da909d8b\") " pod="openstack/dnsmasq-dns-884c8b8f5-p2gb9" Dec 10 15:46:19 crc kubenswrapper[4755]: I1210 15:46:19.326652 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 10 15:46:19 crc kubenswrapper[4755]: I1210 15:46:19.344019 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 10 15:46:19 crc kubenswrapper[4755]: I1210 15:46:19.382991 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-884c8b8f5-p2gb9" Dec 10 15:46:19 crc kubenswrapper[4755]: I1210 15:46:19.565514 4755 generic.go:334] "Generic (PLEG): container finished" podID="4cb6ed71-48b0-45c8-a470-4b6441c7bff5" containerID="0e102a3cd3d4f61e9aba3a0e51bed4a02589683422eeb7418b48d093f13ab8dd" exitCode=0 Dec 10 15:46:19 crc kubenswrapper[4755]: I1210 15:46:19.565575 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wxn4w" event={"ID":"4cb6ed71-48b0-45c8-a470-4b6441c7bff5","Type":"ContainerDied","Data":"0e102a3cd3d4f61e9aba3a0e51bed4a02589683422eeb7418b48d093f13ab8dd"} Dec 10 15:46:19 crc kubenswrapper[4755]: I1210 15:46:19.659545 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-h7t25"] Dec 10 15:46:19 crc kubenswrapper[4755]: W1210 15:46:19.723730 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc537dd47_5926_4c76_9a78_af49c9418027.slice/crio-8c7b30529ba83a7192a818fe605378cfffa545b1df97a723a7218e4a6f78362f WatchSource:0}: Error finding container 8c7b30529ba83a7192a818fe605378cfffa545b1df97a723a7218e4a6f78362f: Status 404 returned error can't find the container with id 8c7b30529ba83a7192a818fe605378cfffa545b1df97a723a7218e4a6f78362f Dec 10 15:46:19 crc kubenswrapper[4755]: I1210 15:46:19.856298 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 10 15:46:20 crc kubenswrapper[4755]: I1210 15:46:20.090298 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 10 15:46:20 crc kubenswrapper[4755]: W1210 15:46:20.092676 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14f7a209_c2fe_4945_9793_a8e4fd08083d.slice/crio-60d5215934b7e8424501c321b3add65e4d684a454bc6986017aea2368c834218 WatchSource:0}: Error finding container 60d5215934b7e8424501c321b3add65e4d684a454bc6986017aea2368c834218: Status 404 returned error can't find the container with id 60d5215934b7e8424501c321b3add65e4d684a454bc6986017aea2368c834218 Dec 10 15:46:20 crc kubenswrapper[4755]: I1210 15:46:20.234650 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-vx69q"] Dec 10 15:46:20 crc kubenswrapper[4755]: I1210 15:46:20.236395 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-vx69q" Dec 10 15:46:20 crc kubenswrapper[4755]: I1210 15:46:20.240540 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Dec 10 15:46:20 crc kubenswrapper[4755]: I1210 15:46:20.243691 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 10 15:46:20 crc kubenswrapper[4755]: I1210 15:46:20.245155 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-vx69q"] Dec 10 15:46:20 crc kubenswrapper[4755]: I1210 15:46:20.326538 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/790c3b51-ebb0-4e09-83ed-ecc8cf5c7701-scripts\") pod \"nova-cell1-conductor-db-sync-vx69q\" (UID: \"790c3b51-ebb0-4e09-83ed-ecc8cf5c7701\") " pod="openstack/nova-cell1-conductor-db-sync-vx69q" Dec 10 15:46:20 crc kubenswrapper[4755]: I1210 15:46:20.326648 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/790c3b51-ebb0-4e09-83ed-ecc8cf5c7701-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-vx69q\" (UID: \"790c3b51-ebb0-4e09-83ed-ecc8cf5c7701\") " pod="openstack/nova-cell1-conductor-db-sync-vx69q" Dec 10 15:46:20 crc kubenswrapper[4755]: I1210 15:46:20.326701 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/790c3b51-ebb0-4e09-83ed-ecc8cf5c7701-config-data\") pod \"nova-cell1-conductor-db-sync-vx69q\" (UID: \"790c3b51-ebb0-4e09-83ed-ecc8cf5c7701\") " pod="openstack/nova-cell1-conductor-db-sync-vx69q" Dec 10 15:46:20 crc kubenswrapper[4755]: I1210 15:46:20.326801 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2fvg\" (UniqueName: \"kubernetes.io/projected/790c3b51-ebb0-4e09-83ed-ecc8cf5c7701-kube-api-access-z2fvg\") pod \"nova-cell1-conductor-db-sync-vx69q\" (UID: \"790c3b51-ebb0-4e09-83ed-ecc8cf5c7701\") " pod="openstack/nova-cell1-conductor-db-sync-vx69q" Dec 10 15:46:20 crc kubenswrapper[4755]: I1210 15:46:20.395206 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 10 15:46:20 crc kubenswrapper[4755]: I1210 15:46:20.429401 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2fvg\" (UniqueName: \"kubernetes.io/projected/790c3b51-ebb0-4e09-83ed-ecc8cf5c7701-kube-api-access-z2fvg\") pod \"nova-cell1-conductor-db-sync-vx69q\" (UID: \"790c3b51-ebb0-4e09-83ed-ecc8cf5c7701\") " pod="openstack/nova-cell1-conductor-db-sync-vx69q" Dec 10 15:46:20 crc kubenswrapper[4755]: I1210 15:46:20.429584 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/790c3b51-ebb0-4e09-83ed-ecc8cf5c7701-scripts\") pod \"nova-cell1-conductor-db-sync-vx69q\" (UID: \"790c3b51-ebb0-4e09-83ed-ecc8cf5c7701\") " pod="openstack/nova-cell1-conductor-db-sync-vx69q" Dec 10 15:46:20 crc kubenswrapper[4755]: I1210 15:46:20.429687 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/790c3b51-ebb0-4e09-83ed-ecc8cf5c7701-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-vx69q\" (UID: 
\"790c3b51-ebb0-4e09-83ed-ecc8cf5c7701\") " pod="openstack/nova-cell1-conductor-db-sync-vx69q" Dec 10 15:46:20 crc kubenswrapper[4755]: I1210 15:46:20.429729 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/790c3b51-ebb0-4e09-83ed-ecc8cf5c7701-config-data\") pod \"nova-cell1-conductor-db-sync-vx69q\" (UID: \"790c3b51-ebb0-4e09-83ed-ecc8cf5c7701\") " pod="openstack/nova-cell1-conductor-db-sync-vx69q" Dec 10 15:46:20 crc kubenswrapper[4755]: I1210 15:46:20.439042 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/790c3b51-ebb0-4e09-83ed-ecc8cf5c7701-scripts\") pod \"nova-cell1-conductor-db-sync-vx69q\" (UID: \"790c3b51-ebb0-4e09-83ed-ecc8cf5c7701\") " pod="openstack/nova-cell1-conductor-db-sync-vx69q" Dec 10 15:46:20 crc kubenswrapper[4755]: I1210 15:46:20.439074 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/790c3b51-ebb0-4e09-83ed-ecc8cf5c7701-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-vx69q\" (UID: \"790c3b51-ebb0-4e09-83ed-ecc8cf5c7701\") " pod="openstack/nova-cell1-conductor-db-sync-vx69q" Dec 10 15:46:20 crc kubenswrapper[4755]: I1210 15:46:20.457201 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/790c3b51-ebb0-4e09-83ed-ecc8cf5c7701-config-data\") pod \"nova-cell1-conductor-db-sync-vx69q\" (UID: \"790c3b51-ebb0-4e09-83ed-ecc8cf5c7701\") " pod="openstack/nova-cell1-conductor-db-sync-vx69q" Dec 10 15:46:20 crc kubenswrapper[4755]: I1210 15:46:20.469722 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2fvg\" (UniqueName: \"kubernetes.io/projected/790c3b51-ebb0-4e09-83ed-ecc8cf5c7701-kube-api-access-z2fvg\") pod \"nova-cell1-conductor-db-sync-vx69q\" (UID: \"790c3b51-ebb0-4e09-83ed-ecc8cf5c7701\") " pod="openstack/nova-cell1-conductor-db-sync-vx69q" Dec 10 15:46:20 crc kubenswrapper[4755]: I1210 15:46:20.558058 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-884c8b8f5-p2gb9"] Dec 10 15:46:20 crc kubenswrapper[4755]: I1210 15:46:20.564435 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-vx69q" Dec 10 15:46:20 crc kubenswrapper[4755]: I1210 15:46:20.586739 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 10 15:46:20 crc kubenswrapper[4755]: I1210 15:46:20.637521 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4b6fac70-baa4-451b-92bc-3b26e2fc08ec","Type":"ContainerStarted","Data":"b6f8fdb4bd106e211ca783603e8e7eb370c05a3d3e995d2380d98af5901668ff"} Dec 10 15:46:20 crc kubenswrapper[4755]: I1210 15:46:20.644819 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"14f7a209-c2fe-4945-9793-a8e4fd08083d","Type":"ContainerStarted","Data":"60d5215934b7e8424501c321b3add65e4d684a454bc6986017aea2368c834218"} Dec 10 15:46:20 crc kubenswrapper[4755]: I1210 15:46:20.646294 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d26dac70-f3b8-4932-992f-5b440275b499","Type":"ContainerStarted","Data":"eb7bfc95371bc13baeeed88f56e58d775e1443c1a42f92dcb4414c645c0c28b9"} Dec 10 15:46:20 crc kubenswrapper[4755]: I1210 15:46:20.648405 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-h7t25" event={"ID":"c537dd47-5926-4c76-9a78-af49c9418027","Type":"ContainerStarted","Data":"1c1d036baaea994e62ab2ed46c5a1bfeea6ecdd9d8722ff1fc7402c8729a6add"} Dec 10 15:46:20 crc kubenswrapper[4755]: I1210 15:46:20.648431 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-h7t25" event={"ID":"c537dd47-5926-4c76-9a78-af49c9418027","Type":"ContainerStarted","Data":"8c7b30529ba83a7192a818fe605378cfffa545b1df97a723a7218e4a6f78362f"} Dec 10 15:46:20 crc kubenswrapper[4755]: I1210 15:46:20.699122 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-h7t25" podStartSLOduration=2.699102322 podStartE2EDuration="2.699102322s" podCreationTimestamp="2025-12-10 15:46:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:46:20.663038973 +0000 UTC m=+1377.263922605" watchObservedRunningTime="2025-12-10 15:46:20.699102322 +0000 UTC m=+1377.299985954" Dec 10 15:46:21 crc kubenswrapper[4755]: I1210 15:46:21.256009 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-vx69q"] Dec 10 15:46:21 crc kubenswrapper[4755]: I1210 15:46:21.666305 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-vx69q" event={"ID":"790c3b51-ebb0-4e09-83ed-ecc8cf5c7701","Type":"ContainerStarted","Data":"1bcf89598c6d7fd6ee49ea88b74b2bf5a037da185e4ec1f1c2ebad0b1f9487c9"} Dec 10 15:46:21 crc kubenswrapper[4755]: I1210 15:46:21.666695 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-vx69q" event={"ID":"790c3b51-ebb0-4e09-83ed-ecc8cf5c7701","Type":"ContainerStarted","Data":"6a24900d1ffa69db85629aa44df6fd4dfcb7c8f215f2a414265406a5c2157c54"} Dec 10 15:46:21 crc kubenswrapper[4755]: I1210 15:46:21.675395 4755 generic.go:334] "Generic (PLEG): container finished" podID="7b7f7f25-62c8-45da-bbf6-1759da909d8b" containerID="b3d90b386da5b188f2d52f5c9967bb3e12220508e9c08388fa572c06cf2a30d3" exitCode=0 Dec 10 15:46:21 crc kubenswrapper[4755]: I1210 15:46:21.675527 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-884c8b8f5-p2gb9" event={"ID":"7b7f7f25-62c8-45da-bbf6-1759da909d8b","Type":"ContainerDied","Data":"b3d90b386da5b188f2d52f5c9967bb3e12220508e9c08388fa572c06cf2a30d3"} Dec 10 15:46:21 crc kubenswrapper[4755]: I1210 15:46:21.675562 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-884c8b8f5-p2gb9" event={"ID":"7b7f7f25-62c8-45da-bbf6-1759da909d8b","Type":"ContainerStarted","Data":"34073b35aba2112689ea93167b786303d9de8163c23538a6a30116afc0c6e4b9"} Dec 10 15:46:21 crc kubenswrapper[4755]: I1210 15:46:21.688236 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-vx69q" podStartSLOduration=1.6882148510000001 podStartE2EDuration="1.688214851s" podCreationTimestamp="2025-12-10 15:46:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:46:21.686121404 +0000 UTC m=+1378.287005036" watchObservedRunningTime="2025-12-10 15:46:21.688214851 +0000 UTC m=+1378.289098483" Dec 10 15:46:21 crc kubenswrapper[4755]: I1210 15:46:21.689643 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ae2b700f-93b0-4324-bac8-677b580a18a7","Type":"ContainerStarted","Data":"4e902f20e4967238d1e2540ee6adf4d978e765231c9fd64d7e932d6d9fdfefc3"} Dec 10 15:46:22 crc kubenswrapper[4755]: I1210 15:46:22.730211 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-884c8b8f5-p2gb9" event={"ID":"7b7f7f25-62c8-45da-bbf6-1759da909d8b","Type":"ContainerStarted","Data":"99ffa58cf1ff350d6602a854711010bed17523f78745126a9386419d3f526f23"} Dec 10 15:46:22 crc kubenswrapper[4755]: I1210 15:46:22.730528 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-884c8b8f5-p2gb9" Dec 10 15:46:22 crc kubenswrapper[4755]: I1210 15:46:22.759090 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-884c8b8f5-p2gb9" podStartSLOduration=4.759072548 podStartE2EDuration="4.759072548s" podCreationTimestamp="2025-12-10 15:46:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:46:22.754815963 +0000 UTC m=+1379.355699615" watchObservedRunningTime="2025-12-10 15:46:22.759072548 +0000 UTC m=+1379.359956180" Dec 10 15:46:22 crc kubenswrapper[4755]: I1210 15:46:22.831516 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 10 15:46:22 crc kubenswrapper[4755]: I1210 15:46:22.859531 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 10 15:46:27 crc kubenswrapper[4755]: I1210 15:46:27.798963 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="14f7a209-c2fe-4945-9793-a8e4fd08083d" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://aa5021f8698e43a56b0cfc0b6fcdc2691145563b34c15d8345d9ed779a19f060" gracePeriod=30 Dec 10 15:46:27 crc kubenswrapper[4755]: I1210 15:46:27.799290 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"14f7a209-c2fe-4945-9793-a8e4fd08083d","Type":"ContainerStarted","Data":"aa5021f8698e43a56b0cfc0b6fcdc2691145563b34c15d8345d9ed779a19f060"} Dec 10 15:46:27 crc kubenswrapper[4755]: I1210 15:46:27.802966 4755 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/nova-metadata-0" event={"ID":"d26dac70-f3b8-4932-992f-5b440275b499","Type":"ContainerStarted","Data":"3c0e87a837697ed64204a2a6397c97d6db921c4b0792510533f503f232ef7eea"} Dec 10 15:46:27 crc kubenswrapper[4755]: I1210 15:46:27.817764 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ae2b700f-93b0-4324-bac8-677b580a18a7","Type":"ContainerStarted","Data":"26f0e397131eb766311575b702ff80fdd6af99fe2c76eae931e9487599cec731"} Dec 10 15:46:27 crc kubenswrapper[4755]: I1210 15:46:27.820720 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4b6fac70-baa4-451b-92bc-3b26e2fc08ec","Type":"ContainerStarted","Data":"cc0745a88f315f80b29eaf0855931a650ff72954d0e9b1c7b64d46c0c660482f"} Dec 10 15:46:27 crc kubenswrapper[4755]: I1210 15:46:27.827298 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.67975507 podStartE2EDuration="9.8272781s" podCreationTimestamp="2025-12-10 15:46:18 +0000 UTC" firstStartedPulling="2025-12-10 15:46:20.110029791 +0000 UTC m=+1376.710913423" lastFinishedPulling="2025-12-10 15:46:26.257552821 +0000 UTC m=+1382.858436453" observedRunningTime="2025-12-10 15:46:27.815957453 +0000 UTC m=+1384.416841085" watchObservedRunningTime="2025-12-10 15:46:27.8272781 +0000 UTC m=+1384.428161742" Dec 10 15:46:27 crc kubenswrapper[4755]: I1210 15:46:27.844018 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=4.318010705 podStartE2EDuration="9.843999874s" podCreationTimestamp="2025-12-10 15:46:18 +0000 UTC" firstStartedPulling="2025-12-10 15:46:20.731283355 +0000 UTC m=+1377.332166987" lastFinishedPulling="2025-12-10 15:46:26.257272524 +0000 UTC m=+1382.858156156" observedRunningTime="2025-12-10 15:46:27.837521488 +0000 UTC m=+1384.438405120" watchObservedRunningTime="2025-12-10 15:46:27.843999874 +0000 UTC m=+1384.444883506" Dec 10 15:46:28 crc kubenswrapper[4755]: I1210 15:46:28.840813 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4b6fac70-baa4-451b-92bc-3b26e2fc08ec","Type":"ContainerStarted","Data":"5811125e9148c582d43b521ecbd0c2cb46010cb7aef72726330fc31d991e6aa1"} Dec 10 15:46:28 crc kubenswrapper[4755]: I1210 15:46:28.854814 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d26dac70-f3b8-4932-992f-5b440275b499" containerName="nova-metadata-log" containerID="cri-o://3c0e87a837697ed64204a2a6397c97d6db921c4b0792510533f503f232ef7eea" gracePeriod=30 Dec 10 15:46:28 crc kubenswrapper[4755]: I1210 15:46:28.856913 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d26dac70-f3b8-4932-992f-5b440275b499" containerName="nova-metadata-metadata" containerID="cri-o://bffa13a630422977abb1512b43d88223d3f379b890cf334357963920f2d840b3" gracePeriod=30 Dec 10 15:46:28 crc kubenswrapper[4755]: I1210 15:46:28.857761 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d26dac70-f3b8-4932-992f-5b440275b499","Type":"ContainerStarted","Data":"bffa13a630422977abb1512b43d88223d3f379b890cf334357963920f2d840b3"} Dec 10 15:46:28 crc kubenswrapper[4755]: I1210 15:46:28.867663 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 10 15:46:28 crc kubenswrapper[4755]: I1210 
15:46:28.867719 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 10 15:46:28 crc kubenswrapper[4755]: I1210 15:46:28.889499 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:46:28 crc kubenswrapper[4755]: I1210 15:46:28.896309 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=4.545857481 podStartE2EDuration="10.896286228s" podCreationTimestamp="2025-12-10 15:46:18 +0000 UTC" firstStartedPulling="2025-12-10 15:46:19.90733295 +0000 UTC m=+1376.508216582" lastFinishedPulling="2025-12-10 15:46:26.257761697 +0000 UTC m=+1382.858645329" observedRunningTime="2025-12-10 15:46:28.871262658 +0000 UTC m=+1385.472146290" watchObservedRunningTime="2025-12-10 15:46:28.896286228 +0000 UTC m=+1385.497169860" Dec 10 15:46:28 crc kubenswrapper[4755]: I1210 15:46:28.907716 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=5.018775118 podStartE2EDuration="10.907694717s" podCreationTimestamp="2025-12-10 15:46:18 +0000 UTC" firstStartedPulling="2025-12-10 15:46:20.383635159 +0000 UTC m=+1376.984518791" lastFinishedPulling="2025-12-10 15:46:26.272554758 +0000 UTC m=+1382.873438390" observedRunningTime="2025-12-10 15:46:28.893018609 +0000 UTC m=+1385.493902241" watchObservedRunningTime="2025-12-10 15:46:28.907694717 +0000 UTC m=+1385.508578349" Dec 10 15:46:29 crc kubenswrapper[4755]: I1210 15:46:29.328116 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 10 15:46:29 crc kubenswrapper[4755]: I1210 15:46:29.328159 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 10 15:46:29 crc kubenswrapper[4755]: I1210 15:46:29.344676 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 10 15:46:29 crc kubenswrapper[4755]: I1210 15:46:29.344749 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 10 15:46:29 crc kubenswrapper[4755]: I1210 15:46:29.387603 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 10 15:46:29 crc kubenswrapper[4755]: I1210 15:46:29.388655 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-884c8b8f5-p2gb9" Dec 10 15:46:29 crc kubenswrapper[4755]: I1210 15:46:29.491261 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58bd69657f-n5hkt"] Dec 10 15:46:29 crc kubenswrapper[4755]: I1210 15:46:29.496847 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58bd69657f-n5hkt" podUID="45648d9c-bd22-443a-bf3f-8c08998388ec" containerName="dnsmasq-dns" containerID="cri-o://20930194f79f2f7e19899ac550e4647fc661cb704ee91cde2a2d2036c5cc0cd8" gracePeriod=10 Dec 10 15:46:29 crc kubenswrapper[4755]: I1210 15:46:29.831866 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-58bd69657f-n5hkt" podUID="45648d9c-bd22-443a-bf3f-8c08998388ec" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.185:5353: connect: connection refused" Dec 10 15:46:29 crc kubenswrapper[4755]: I1210 15:46:29.869209 4755 generic.go:334] "Generic (PLEG): container finished" podID="d26dac70-f3b8-4932-992f-5b440275b499" 
containerID="bffa13a630422977abb1512b43d88223d3f379b890cf334357963920f2d840b3" exitCode=0 Dec 10 15:46:29 crc kubenswrapper[4755]: I1210 15:46:29.869238 4755 generic.go:334] "Generic (PLEG): container finished" podID="d26dac70-f3b8-4932-992f-5b440275b499" containerID="3c0e87a837697ed64204a2a6397c97d6db921c4b0792510533f503f232ef7eea" exitCode=143 Dec 10 15:46:29 crc kubenswrapper[4755]: I1210 15:46:29.870134 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d26dac70-f3b8-4932-992f-5b440275b499","Type":"ContainerDied","Data":"bffa13a630422977abb1512b43d88223d3f379b890cf334357963920f2d840b3"} Dec 10 15:46:29 crc kubenswrapper[4755]: I1210 15:46:29.870171 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d26dac70-f3b8-4932-992f-5b440275b499","Type":"ContainerDied","Data":"3c0e87a837697ed64204a2a6397c97d6db921c4b0792510533f503f232ef7eea"} Dec 10 15:46:29 crc kubenswrapper[4755]: I1210 15:46:29.913362 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 10 15:46:29 crc kubenswrapper[4755]: I1210 15:46:29.946249 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4b6fac70-baa4-451b-92bc-3b26e2fc08ec" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.211:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 10 15:46:29 crc kubenswrapper[4755]: I1210 15:46:29.946358 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4b6fac70-baa4-451b-92bc-3b26e2fc08ec" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.211:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 10 15:46:31 crc kubenswrapper[4755]: I1210 15:46:31.899063 4755 generic.go:334] "Generic (PLEG): container finished" podID="45648d9c-bd22-443a-bf3f-8c08998388ec" containerID="20930194f79f2f7e19899ac550e4647fc661cb704ee91cde2a2d2036c5cc0cd8" exitCode=0 Dec 10 15:46:31 crc kubenswrapper[4755]: I1210 15:46:31.899153 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58bd69657f-n5hkt" event={"ID":"45648d9c-bd22-443a-bf3f-8c08998388ec","Type":"ContainerDied","Data":"20930194f79f2f7e19899ac550e4647fc661cb704ee91cde2a2d2036c5cc0cd8"} Dec 10 15:46:32 crc kubenswrapper[4755]: I1210 15:46:32.910439 4755 generic.go:334] "Generic (PLEG): container finished" podID="c537dd47-5926-4c76-9a78-af49c9418027" containerID="1c1d036baaea994e62ab2ed46c5a1bfeea6ecdd9d8722ff1fc7402c8729a6add" exitCode=0 Dec 10 15:46:32 crc kubenswrapper[4755]: I1210 15:46:32.910516 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-h7t25" event={"ID":"c537dd47-5926-4c76-9a78-af49c9418027","Type":"ContainerDied","Data":"1c1d036baaea994e62ab2ed46c5a1bfeea6ecdd9d8722ff1fc7402c8729a6add"} Dec 10 15:46:34 crc kubenswrapper[4755]: I1210 15:46:34.832105 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-58bd69657f-n5hkt" podUID="45648d9c-bd22-443a-bf3f-8c08998388ec" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.185:5353: connect: connection refused" Dec 10 15:46:35 crc kubenswrapper[4755]: I1210 15:46:35.798981 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="48b9cc99-2595-445c-aca6-b13972e95324" containerName="galera" probeResult="failure" 
output="command timed out" Dec 10 15:46:35 crc kubenswrapper[4755]: I1210 15:46:35.807170 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="48b9cc99-2595-445c-aca6-b13972e95324" containerName="galera" probeResult="failure" output="command timed out" Dec 10 15:46:36 crc kubenswrapper[4755]: I1210 15:46:36.298741 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-h7t25" Dec 10 15:46:36 crc kubenswrapper[4755]: I1210 15:46:36.437550 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c537dd47-5926-4c76-9a78-af49c9418027-scripts\") pod \"c537dd47-5926-4c76-9a78-af49c9418027\" (UID: \"c537dd47-5926-4c76-9a78-af49c9418027\") " Dec 10 15:46:36 crc kubenswrapper[4755]: I1210 15:46:36.437703 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c537dd47-5926-4c76-9a78-af49c9418027-combined-ca-bundle\") pod \"c537dd47-5926-4c76-9a78-af49c9418027\" (UID: \"c537dd47-5926-4c76-9a78-af49c9418027\") " Dec 10 15:46:36 crc kubenswrapper[4755]: I1210 15:46:36.437802 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9rvg\" (UniqueName: \"kubernetes.io/projected/c537dd47-5926-4c76-9a78-af49c9418027-kube-api-access-d9rvg\") pod \"c537dd47-5926-4c76-9a78-af49c9418027\" (UID: \"c537dd47-5926-4c76-9a78-af49c9418027\") " Dec 10 15:46:36 crc kubenswrapper[4755]: I1210 15:46:36.438007 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c537dd47-5926-4c76-9a78-af49c9418027-config-data\") pod \"c537dd47-5926-4c76-9a78-af49c9418027\" (UID: \"c537dd47-5926-4c76-9a78-af49c9418027\") " Dec 10 15:46:36 crc kubenswrapper[4755]: I1210 15:46:36.461525 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c537dd47-5926-4c76-9a78-af49c9418027-scripts" (OuterVolumeSpecName: "scripts") pod "c537dd47-5926-4c76-9a78-af49c9418027" (UID: "c537dd47-5926-4c76-9a78-af49c9418027"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:46:36 crc kubenswrapper[4755]: I1210 15:46:36.467346 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c537dd47-5926-4c76-9a78-af49c9418027-kube-api-access-d9rvg" (OuterVolumeSpecName: "kube-api-access-d9rvg") pod "c537dd47-5926-4c76-9a78-af49c9418027" (UID: "c537dd47-5926-4c76-9a78-af49c9418027"). InnerVolumeSpecName "kube-api-access-d9rvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:46:36 crc kubenswrapper[4755]: I1210 15:46:36.480756 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c537dd47-5926-4c76-9a78-af49c9418027-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c537dd47-5926-4c76-9a78-af49c9418027" (UID: "c537dd47-5926-4c76-9a78-af49c9418027"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:46:36 crc kubenswrapper[4755]: I1210 15:46:36.489794 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c537dd47-5926-4c76-9a78-af49c9418027-config-data" (OuterVolumeSpecName: "config-data") pod "c537dd47-5926-4c76-9a78-af49c9418027" (UID: "c537dd47-5926-4c76-9a78-af49c9418027"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:46:36 crc kubenswrapper[4755]: I1210 15:46:36.541077 4755 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c537dd47-5926-4c76-9a78-af49c9418027-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:36 crc kubenswrapper[4755]: I1210 15:46:36.541121 4755 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c537dd47-5926-4c76-9a78-af49c9418027-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:36 crc kubenswrapper[4755]: I1210 15:46:36.541139 4755 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c537dd47-5926-4c76-9a78-af49c9418027-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:36 crc kubenswrapper[4755]: I1210 15:46:36.541160 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9rvg\" (UniqueName: \"kubernetes.io/projected/c537dd47-5926-4c76-9a78-af49c9418027-kube-api-access-d9rvg\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:36 crc kubenswrapper[4755]: I1210 15:46:36.968873 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-h7t25" event={"ID":"c537dd47-5926-4c76-9a78-af49c9418027","Type":"ContainerDied","Data":"8c7b30529ba83a7192a818fe605378cfffa545b1df97a723a7218e4a6f78362f"} Dec 10 15:46:36 crc kubenswrapper[4755]: I1210 15:46:36.968924 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c7b30529ba83a7192a818fe605378cfffa545b1df97a723a7218e4a6f78362f" Dec 10 15:46:36 crc kubenswrapper[4755]: I1210 15:46:36.968960 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-h7t25" Dec 10 15:46:37 crc kubenswrapper[4755]: I1210 15:46:37.488064 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 10 15:46:37 crc kubenswrapper[4755]: I1210 15:46:37.488625 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4b6fac70-baa4-451b-92bc-3b26e2fc08ec" containerName="nova-api-log" containerID="cri-o://cc0745a88f315f80b29eaf0855931a650ff72954d0e9b1c7b64d46c0c660482f" gracePeriod=30 Dec 10 15:46:37 crc kubenswrapper[4755]: I1210 15:46:37.488759 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4b6fac70-baa4-451b-92bc-3b26e2fc08ec" containerName="nova-api-api" containerID="cri-o://5811125e9148c582d43b521ecbd0c2cb46010cb7aef72726330fc31d991e6aa1" gracePeriod=30 Dec 10 15:46:37 crc kubenswrapper[4755]: I1210 15:46:37.528010 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 10 15:46:37 crc kubenswrapper[4755]: I1210 15:46:37.528283 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="ae2b700f-93b0-4324-bac8-677b580a18a7" containerName="nova-scheduler-scheduler" containerID="cri-o://26f0e397131eb766311575b702ff80fdd6af99fe2c76eae931e9487599cec731" gracePeriod=30 Dec 10 15:46:37 crc kubenswrapper[4755]: I1210 15:46:37.980783 4755 generic.go:334] "Generic (PLEG): container finished" podID="4b6fac70-baa4-451b-92bc-3b26e2fc08ec" containerID="cc0745a88f315f80b29eaf0855931a650ff72954d0e9b1c7b64d46c0c660482f" exitCode=143 Dec 10 15:46:37 crc kubenswrapper[4755]: I1210 15:46:37.980840 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4b6fac70-baa4-451b-92bc-3b26e2fc08ec","Type":"ContainerDied","Data":"cc0745a88f315f80b29eaf0855931a650ff72954d0e9b1c7b64d46c0c660482f"} Dec 10 15:46:39 crc kubenswrapper[4755]: E1210 15:46:39.333026 4755 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="26f0e397131eb766311575b702ff80fdd6af99fe2c76eae931e9487599cec731" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 10 15:46:39 crc kubenswrapper[4755]: E1210 15:46:39.336617 4755 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="26f0e397131eb766311575b702ff80fdd6af99fe2c76eae931e9487599cec731" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 10 15:46:39 crc kubenswrapper[4755]: E1210 15:46:39.338695 4755 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="26f0e397131eb766311575b702ff80fdd6af99fe2c76eae931e9487599cec731" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 10 15:46:39 crc kubenswrapper[4755]: E1210 15:46:39.338773 4755 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="ae2b700f-93b0-4324-bac8-677b580a18a7" containerName="nova-scheduler-scheduler" Dec 10 15:46:40 crc 
kubenswrapper[4755]: I1210 15:46:40.216862 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 10 15:46:40 crc kubenswrapper[4755]: I1210 15:46:40.359427 4755 patch_prober.go:28] interesting pod/machine-config-daemon-ggt8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:46:40 crc kubenswrapper[4755]: I1210 15:46:40.359513 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:46:40 crc kubenswrapper[4755]: I1210 15:46:40.359561 4755 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" Dec 10 15:46:40 crc kubenswrapper[4755]: I1210 15:46:40.360423 4755 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b9b7f6e29c3e4593fa445fe830b0d353f5a037cd1634fd06b5f6ef129b3368c3"} pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 10 15:46:40 crc kubenswrapper[4755]: I1210 15:46:40.360521 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" containerName="machine-config-daemon" containerID="cri-o://b9b7f6e29c3e4593fa445fe830b0d353f5a037cd1634fd06b5f6ef129b3368c3" gracePeriod=600 Dec 10 15:46:41 crc kubenswrapper[4755]: I1210 15:46:41.009873 4755 generic.go:334] "Generic (PLEG): container finished" podID="b132a8b9-1c99-414d-8773-229bf36b305d" containerID="b9b7f6e29c3e4593fa445fe830b0d353f5a037cd1634fd06b5f6ef129b3368c3" exitCode=0 Dec 10 15:46:41 crc kubenswrapper[4755]: I1210 15:46:41.009914 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" event={"ID":"b132a8b9-1c99-414d-8773-229bf36b305d","Type":"ContainerDied","Data":"b9b7f6e29c3e4593fa445fe830b0d353f5a037cd1634fd06b5f6ef129b3368c3"} Dec 10 15:46:41 crc kubenswrapper[4755]: I1210 15:46:41.009983 4755 scope.go:117] "RemoveContainer" containerID="cf5ba83fd616480d24cf584cf15a0ce95565ee5fa4662cb49e23ad86486c0d52" Dec 10 15:46:41 crc kubenswrapper[4755]: I1210 15:46:41.424792 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 10 15:46:41 crc kubenswrapper[4755]: I1210 15:46:41.433004 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58bd69657f-n5hkt" Dec 10 15:46:41 crc kubenswrapper[4755]: I1210 15:46:41.548865 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45648d9c-bd22-443a-bf3f-8c08998388ec-config\") pod \"45648d9c-bd22-443a-bf3f-8c08998388ec\" (UID: \"45648d9c-bd22-443a-bf3f-8c08998388ec\") " Dec 10 15:46:41 crc kubenswrapper[4755]: I1210 15:46:41.549030 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6nmc\" (UniqueName: \"kubernetes.io/projected/d26dac70-f3b8-4932-992f-5b440275b499-kube-api-access-r6nmc\") pod \"d26dac70-f3b8-4932-992f-5b440275b499\" (UID: \"d26dac70-f3b8-4932-992f-5b440275b499\") " Dec 10 15:46:41 crc kubenswrapper[4755]: I1210 15:46:41.549071 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d26dac70-f3b8-4932-992f-5b440275b499-combined-ca-bundle\") pod \"d26dac70-f3b8-4932-992f-5b440275b499\" (UID: \"d26dac70-f3b8-4932-992f-5b440275b499\") " Dec 10 15:46:41 crc kubenswrapper[4755]: I1210 15:46:41.549121 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d26dac70-f3b8-4932-992f-5b440275b499-config-data\") pod \"d26dac70-f3b8-4932-992f-5b440275b499\" (UID: \"d26dac70-f3b8-4932-992f-5b440275b499\") " Dec 10 15:46:41 crc kubenswrapper[4755]: I1210 15:46:41.549184 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d26dac70-f3b8-4932-992f-5b440275b499-logs\") pod \"d26dac70-f3b8-4932-992f-5b440275b499\" (UID: \"d26dac70-f3b8-4932-992f-5b440275b499\") " Dec 10 15:46:41 crc kubenswrapper[4755]: I1210 15:46:41.549264 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/45648d9c-bd22-443a-bf3f-8c08998388ec-ovsdbserver-sb\") pod \"45648d9c-bd22-443a-bf3f-8c08998388ec\" (UID: \"45648d9c-bd22-443a-bf3f-8c08998388ec\") " Dec 10 15:46:41 crc kubenswrapper[4755]: I1210 15:46:41.549305 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45648d9c-bd22-443a-bf3f-8c08998388ec-dns-svc\") pod \"45648d9c-bd22-443a-bf3f-8c08998388ec\" (UID: \"45648d9c-bd22-443a-bf3f-8c08998388ec\") " Dec 10 15:46:41 crc kubenswrapper[4755]: I1210 15:46:41.549325 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmr6g\" (UniqueName: \"kubernetes.io/projected/45648d9c-bd22-443a-bf3f-8c08998388ec-kube-api-access-zmr6g\") pod \"45648d9c-bd22-443a-bf3f-8c08998388ec\" (UID: \"45648d9c-bd22-443a-bf3f-8c08998388ec\") " Dec 10 15:46:41 crc kubenswrapper[4755]: I1210 15:46:41.549383 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/45648d9c-bd22-443a-bf3f-8c08998388ec-dns-swift-storage-0\") pod \"45648d9c-bd22-443a-bf3f-8c08998388ec\" (UID: \"45648d9c-bd22-443a-bf3f-8c08998388ec\") " Dec 10 15:46:41 crc kubenswrapper[4755]: I1210 15:46:41.549414 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/45648d9c-bd22-443a-bf3f-8c08998388ec-ovsdbserver-nb\") pod \"45648d9c-bd22-443a-bf3f-8c08998388ec\" (UID: 
\"45648d9c-bd22-443a-bf3f-8c08998388ec\") " Dec 10 15:46:41 crc kubenswrapper[4755]: I1210 15:46:41.549959 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d26dac70-f3b8-4932-992f-5b440275b499-logs" (OuterVolumeSpecName: "logs") pod "d26dac70-f3b8-4932-992f-5b440275b499" (UID: "d26dac70-f3b8-4932-992f-5b440275b499"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:46:41 crc kubenswrapper[4755]: I1210 15:46:41.591688 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d26dac70-f3b8-4932-992f-5b440275b499-kube-api-access-r6nmc" (OuterVolumeSpecName: "kube-api-access-r6nmc") pod "d26dac70-f3b8-4932-992f-5b440275b499" (UID: "d26dac70-f3b8-4932-992f-5b440275b499"). InnerVolumeSpecName "kube-api-access-r6nmc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:46:41 crc kubenswrapper[4755]: I1210 15:46:41.591895 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45648d9c-bd22-443a-bf3f-8c08998388ec-kube-api-access-zmr6g" (OuterVolumeSpecName: "kube-api-access-zmr6g") pod "45648d9c-bd22-443a-bf3f-8c08998388ec" (UID: "45648d9c-bd22-443a-bf3f-8c08998388ec"). InnerVolumeSpecName "kube-api-access-zmr6g". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:46:41 crc kubenswrapper[4755]: I1210 15:46:41.619586 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d26dac70-f3b8-4932-992f-5b440275b499-config-data" (OuterVolumeSpecName: "config-data") pod "d26dac70-f3b8-4932-992f-5b440275b499" (UID: "d26dac70-f3b8-4932-992f-5b440275b499"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:46:41 crc kubenswrapper[4755]: I1210 15:46:41.621852 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d26dac70-f3b8-4932-992f-5b440275b499-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d26dac70-f3b8-4932-992f-5b440275b499" (UID: "d26dac70-f3b8-4932-992f-5b440275b499"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:46:41 crc kubenswrapper[4755]: I1210 15:46:41.655501 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6nmc\" (UniqueName: \"kubernetes.io/projected/d26dac70-f3b8-4932-992f-5b440275b499-kube-api-access-r6nmc\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:41 crc kubenswrapper[4755]: I1210 15:46:41.655538 4755 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d26dac70-f3b8-4932-992f-5b440275b499-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:41 crc kubenswrapper[4755]: I1210 15:46:41.655549 4755 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d26dac70-f3b8-4932-992f-5b440275b499-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:41 crc kubenswrapper[4755]: I1210 15:46:41.655560 4755 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d26dac70-f3b8-4932-992f-5b440275b499-logs\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:41 crc kubenswrapper[4755]: I1210 15:46:41.655573 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmr6g\" (UniqueName: \"kubernetes.io/projected/45648d9c-bd22-443a-bf3f-8c08998388ec-kube-api-access-zmr6g\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:41 crc kubenswrapper[4755]: I1210 15:46:41.706407 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45648d9c-bd22-443a-bf3f-8c08998388ec-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "45648d9c-bd22-443a-bf3f-8c08998388ec" (UID: "45648d9c-bd22-443a-bf3f-8c08998388ec"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:46:41 crc kubenswrapper[4755]: I1210 15:46:41.706660 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45648d9c-bd22-443a-bf3f-8c08998388ec-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "45648d9c-bd22-443a-bf3f-8c08998388ec" (UID: "45648d9c-bd22-443a-bf3f-8c08998388ec"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:46:41 crc kubenswrapper[4755]: I1210 15:46:41.727866 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45648d9c-bd22-443a-bf3f-8c08998388ec-config" (OuterVolumeSpecName: "config") pod "45648d9c-bd22-443a-bf3f-8c08998388ec" (UID: "45648d9c-bd22-443a-bf3f-8c08998388ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:46:41 crc kubenswrapper[4755]: I1210 15:46:41.728880 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45648d9c-bd22-443a-bf3f-8c08998388ec-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "45648d9c-bd22-443a-bf3f-8c08998388ec" (UID: "45648d9c-bd22-443a-bf3f-8c08998388ec"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:46:41 crc kubenswrapper[4755]: I1210 15:46:41.742712 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45648d9c-bd22-443a-bf3f-8c08998388ec-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "45648d9c-bd22-443a-bf3f-8c08998388ec" (UID: "45648d9c-bd22-443a-bf3f-8c08998388ec"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:46:41 crc kubenswrapper[4755]: I1210 15:46:41.775035 4755 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/45648d9c-bd22-443a-bf3f-8c08998388ec-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:41 crc kubenswrapper[4755]: I1210 15:46:41.775077 4755 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/45648d9c-bd22-443a-bf3f-8c08998388ec-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:41 crc kubenswrapper[4755]: I1210 15:46:41.775091 4755 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45648d9c-bd22-443a-bf3f-8c08998388ec-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:41 crc kubenswrapper[4755]: I1210 15:46:41.775101 4755 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/45648d9c-bd22-443a-bf3f-8c08998388ec-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:41 crc kubenswrapper[4755]: I1210 15:46:41.775114 4755 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45648d9c-bd22-443a-bf3f-8c08998388ec-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:41 crc kubenswrapper[4755]: I1210 15:46:41.905577 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.036334 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d26dac70-f3b8-4932-992f-5b440275b499","Type":"ContainerDied","Data":"eb7bfc95371bc13baeeed88f56e58d775e1443c1a42f92dcb4414c645c0c28b9"} Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.036674 4755 scope.go:117] "RemoveContainer" containerID="bffa13a630422977abb1512b43d88223d3f379b890cf334357963920f2d840b3" Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.036386 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.038975 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wxn4w" event={"ID":"4cb6ed71-48b0-45c8-a470-4b6441c7bff5","Type":"ContainerStarted","Data":"93b4acfc4caa646841ae16d8c2c69e54573a0966d606f11bfdd56c1ac1136939"} Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.048720 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58bd69657f-n5hkt" event={"ID":"45648d9c-bd22-443a-bf3f-8c08998388ec","Type":"ContainerDied","Data":"88ba78b8d8c407d0e75953a4eb3d11b94ef799a5f0e8ecbe484ff97d35f0577b"} Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.048747 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58bd69657f-n5hkt" Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.056129 4755 generic.go:334] "Generic (PLEG): container finished" podID="ae2b700f-93b0-4324-bac8-677b580a18a7" containerID="26f0e397131eb766311575b702ff80fdd6af99fe2c76eae931e9487599cec731" exitCode=0 Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.056217 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ae2b700f-93b0-4324-bac8-677b580a18a7","Type":"ContainerDied","Data":"26f0e397131eb766311575b702ff80fdd6af99fe2c76eae931e9487599cec731"} Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.077643 4755 scope.go:117] "RemoveContainer" containerID="3c0e87a837697ed64204a2a6397c97d6db921c4b0792510533f503f232ef7eea" Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.079932 4755 generic.go:334] "Generic (PLEG): container finished" podID="4b6fac70-baa4-451b-92bc-3b26e2fc08ec" containerID="5811125e9148c582d43b521ecbd0c2cb46010cb7aef72726330fc31d991e6aa1" exitCode=0 Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.080017 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.080374 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4b6fac70-baa4-451b-92bc-3b26e2fc08ec","Type":"ContainerDied","Data":"5811125e9148c582d43b521ecbd0c2cb46010cb7aef72726330fc31d991e6aa1"} Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.080419 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4b6fac70-baa4-451b-92bc-3b26e2fc08ec","Type":"ContainerDied","Data":"b6f8fdb4bd106e211ca783603e8e7eb370c05a3d3e995d2380d98af5901668ff"} Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.082337 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b6fac70-baa4-451b-92bc-3b26e2fc08ec-config-data\") pod \"4b6fac70-baa4-451b-92bc-3b26e2fc08ec\" (UID: \"4b6fac70-baa4-451b-92bc-3b26e2fc08ec\") " Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.082388 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wq7wc\" (UniqueName: \"kubernetes.io/projected/4b6fac70-baa4-451b-92bc-3b26e2fc08ec-kube-api-access-wq7wc\") pod \"4b6fac70-baa4-451b-92bc-3b26e2fc08ec\" (UID: \"4b6fac70-baa4-451b-92bc-3b26e2fc08ec\") " Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.082447 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b6fac70-baa4-451b-92bc-3b26e2fc08ec-combined-ca-bundle\") pod \"4b6fac70-baa4-451b-92bc-3b26e2fc08ec\" (UID: \"4b6fac70-baa4-451b-92bc-3b26e2fc08ec\") " Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.082493 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b6fac70-baa4-451b-92bc-3b26e2fc08ec-logs\") pod \"4b6fac70-baa4-451b-92bc-3b26e2fc08ec\" (UID: \"4b6fac70-baa4-451b-92bc-3b26e2fc08ec\") " Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.083516 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b6fac70-baa4-451b-92bc-3b26e2fc08ec-logs" (OuterVolumeSpecName: "logs") pod "4b6fac70-baa4-451b-92bc-3b26e2fc08ec" (UID: "4b6fac70-baa4-451b-92bc-3b26e2fc08ec"). 
InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.087491 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" event={"ID":"b132a8b9-1c99-414d-8773-229bf36b305d","Type":"ContainerStarted","Data":"008e8b27aa48ce8c618284ca4dccd38cf79c00478318f7aaaa34c326eeb5ea52"} Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.106583 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b6fac70-baa4-451b-92bc-3b26e2fc08ec-kube-api-access-wq7wc" (OuterVolumeSpecName: "kube-api-access-wq7wc") pod "4b6fac70-baa4-451b-92bc-3b26e2fc08ec" (UID: "4b6fac70-baa4-451b-92bc-3b26e2fc08ec"). InnerVolumeSpecName "kube-api-access-wq7wc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.116174 4755 scope.go:117] "RemoveContainer" containerID="20930194f79f2f7e19899ac550e4647fc661cb704ee91cde2a2d2036c5cc0cd8" Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.157360 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b6fac70-baa4-451b-92bc-3b26e2fc08ec-config-data" (OuterVolumeSpecName: "config-data") pod "4b6fac70-baa4-451b-92bc-3b26e2fc08ec" (UID: "4b6fac70-baa4-451b-92bc-3b26e2fc08ec"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.158417 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.162167 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b6fac70-baa4-451b-92bc-3b26e2fc08ec-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4b6fac70-baa4-451b-92bc-3b26e2fc08ec" (UID: "4b6fac70-baa4-451b-92bc-3b26e2fc08ec"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.166134 4755 scope.go:117] "RemoveContainer" containerID="dd434e659137b34209d185951bfbc9ce56017c06fa6214497a254d196c1b779e" Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.175627 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.193816 4755 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b6fac70-baa4-451b-92bc-3b26e2fc08ec-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.193851 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wq7wc\" (UniqueName: \"kubernetes.io/projected/4b6fac70-baa4-451b-92bc-3b26e2fc08ec-kube-api-access-wq7wc\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.193866 4755 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b6fac70-baa4-451b-92bc-3b26e2fc08ec-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.193878 4755 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b6fac70-baa4-451b-92bc-3b26e2fc08ec-logs\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.201100 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.208210 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 10 15:46:42 crc kubenswrapper[4755]: E1210 15:46:42.208708 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45648d9c-bd22-443a-bf3f-8c08998388ec" containerName="dnsmasq-dns" Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.208761 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="45648d9c-bd22-443a-bf3f-8c08998388ec" containerName="dnsmasq-dns" Dec 10 15:46:42 crc kubenswrapper[4755]: E1210 15:46:42.208778 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d26dac70-f3b8-4932-992f-5b440275b499" containerName="nova-metadata-log" Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.208785 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="d26dac70-f3b8-4932-992f-5b440275b499" containerName="nova-metadata-log" Dec 10 15:46:42 crc kubenswrapper[4755]: E1210 15:46:42.208802 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d26dac70-f3b8-4932-992f-5b440275b499" containerName="nova-metadata-metadata" Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.208809 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="d26dac70-f3b8-4932-992f-5b440275b499" containerName="nova-metadata-metadata" Dec 10 15:46:42 crc kubenswrapper[4755]: E1210 15:46:42.208821 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b6fac70-baa4-451b-92bc-3b26e2fc08ec" containerName="nova-api-api" Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.208828 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b6fac70-baa4-451b-92bc-3b26e2fc08ec" containerName="nova-api-api" Dec 10 15:46:42 crc kubenswrapper[4755]: E1210 15:46:42.208848 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45648d9c-bd22-443a-bf3f-8c08998388ec" 
containerName="init" Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.208853 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="45648d9c-bd22-443a-bf3f-8c08998388ec" containerName="init" Dec 10 15:46:42 crc kubenswrapper[4755]: E1210 15:46:42.208860 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b6fac70-baa4-451b-92bc-3b26e2fc08ec" containerName="nova-api-log" Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.208867 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b6fac70-baa4-451b-92bc-3b26e2fc08ec" containerName="nova-api-log" Dec 10 15:46:42 crc kubenswrapper[4755]: E1210 15:46:42.208877 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c537dd47-5926-4c76-9a78-af49c9418027" containerName="nova-manage" Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.208883 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="c537dd47-5926-4c76-9a78-af49c9418027" containerName="nova-manage" Dec 10 15:46:42 crc kubenswrapper[4755]: E1210 15:46:42.208896 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae2b700f-93b0-4324-bac8-677b580a18a7" containerName="nova-scheduler-scheduler" Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.208903 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae2b700f-93b0-4324-bac8-677b580a18a7" containerName="nova-scheduler-scheduler" Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.209110 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="c537dd47-5926-4c76-9a78-af49c9418027" containerName="nova-manage" Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.209126 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="45648d9c-bd22-443a-bf3f-8c08998388ec" containerName="dnsmasq-dns" Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.209139 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b6fac70-baa4-451b-92bc-3b26e2fc08ec" containerName="nova-api-log" Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.209149 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae2b700f-93b0-4324-bac8-677b580a18a7" containerName="nova-scheduler-scheduler" Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.209155 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b6fac70-baa4-451b-92bc-3b26e2fc08ec" containerName="nova-api-api" Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.209165 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="d26dac70-f3b8-4932-992f-5b440275b499" containerName="nova-metadata-log" Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.209178 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="d26dac70-f3b8-4932-992f-5b440275b499" containerName="nova-metadata-metadata" Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.210884 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.220019 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.220233 4755 scope.go:117] "RemoveContainer" containerID="5811125e9148c582d43b521ecbd0c2cb46010cb7aef72726330fc31d991e6aa1" Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.220274 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.222811 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58bd69657f-n5hkt"] Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.259306 4755 scope.go:117] "RemoveContainer" containerID="cc0745a88f315f80b29eaf0855931a650ff72954d0e9b1c7b64d46c0c660482f" Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.267887 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58bd69657f-n5hkt"] Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.290771 4755 scope.go:117] "RemoveContainer" containerID="5811125e9148c582d43b521ecbd0c2cb46010cb7aef72726330fc31d991e6aa1" Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.294049 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.294897 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae2b700f-93b0-4324-bac8-677b580a18a7-config-data\") pod \"ae2b700f-93b0-4324-bac8-677b580a18a7\" (UID: \"ae2b700f-93b0-4324-bac8-677b580a18a7\") " Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.295022 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae2b700f-93b0-4324-bac8-677b580a18a7-combined-ca-bundle\") pod \"ae2b700f-93b0-4324-bac8-677b580a18a7\" (UID: \"ae2b700f-93b0-4324-bac8-677b580a18a7\") " Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.295125 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jzpr7\" (UniqueName: \"kubernetes.io/projected/ae2b700f-93b0-4324-bac8-677b580a18a7-kube-api-access-jzpr7\") pod \"ae2b700f-93b0-4324-bac8-677b580a18a7\" (UID: \"ae2b700f-93b0-4324-bac8-677b580a18a7\") " Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.295630 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04cde7cd-7f43-4870-a4fc-c8bdb0b4188a-config-data\") pod \"nova-metadata-0\" (UID: \"04cde7cd-7f43-4870-a4fc-c8bdb0b4188a\") " pod="openstack/nova-metadata-0" Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.295680 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04cde7cd-7f43-4870-a4fc-c8bdb0b4188a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"04cde7cd-7f43-4870-a4fc-c8bdb0b4188a\") " pod="openstack/nova-metadata-0" Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.295726 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zzmg\" (UniqueName: \"kubernetes.io/projected/04cde7cd-7f43-4870-a4fc-c8bdb0b4188a-kube-api-access-6zzmg\") pod 
\"nova-metadata-0\" (UID: \"04cde7cd-7f43-4870-a4fc-c8bdb0b4188a\") " pod="openstack/nova-metadata-0" Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.295786 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04cde7cd-7f43-4870-a4fc-c8bdb0b4188a-logs\") pod \"nova-metadata-0\" (UID: \"04cde7cd-7f43-4870-a4fc-c8bdb0b4188a\") " pod="openstack/nova-metadata-0" Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.295812 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/04cde7cd-7f43-4870-a4fc-c8bdb0b4188a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"04cde7cd-7f43-4870-a4fc-c8bdb0b4188a\") " pod="openstack/nova-metadata-0" Dec 10 15:46:42 crc kubenswrapper[4755]: E1210 15:46:42.317568 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5811125e9148c582d43b521ecbd0c2cb46010cb7aef72726330fc31d991e6aa1\": container with ID starting with 5811125e9148c582d43b521ecbd0c2cb46010cb7aef72726330fc31d991e6aa1 not found: ID does not exist" containerID="5811125e9148c582d43b521ecbd0c2cb46010cb7aef72726330fc31d991e6aa1" Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.317628 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5811125e9148c582d43b521ecbd0c2cb46010cb7aef72726330fc31d991e6aa1"} err="failed to get container status \"5811125e9148c582d43b521ecbd0c2cb46010cb7aef72726330fc31d991e6aa1\": rpc error: code = NotFound desc = could not find container \"5811125e9148c582d43b521ecbd0c2cb46010cb7aef72726330fc31d991e6aa1\": container with ID starting with 5811125e9148c582d43b521ecbd0c2cb46010cb7aef72726330fc31d991e6aa1 not found: ID does not exist" Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.317663 4755 scope.go:117] "RemoveContainer" containerID="cc0745a88f315f80b29eaf0855931a650ff72954d0e9b1c7b64d46c0c660482f" Dec 10 15:46:42 crc kubenswrapper[4755]: E1210 15:46:42.318132 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc0745a88f315f80b29eaf0855931a650ff72954d0e9b1c7b64d46c0c660482f\": container with ID starting with cc0745a88f315f80b29eaf0855931a650ff72954d0e9b1c7b64d46c0c660482f not found: ID does not exist" containerID="cc0745a88f315f80b29eaf0855931a650ff72954d0e9b1c7b64d46c0c660482f" Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.318167 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc0745a88f315f80b29eaf0855931a650ff72954d0e9b1c7b64d46c0c660482f"} err="failed to get container status \"cc0745a88f315f80b29eaf0855931a650ff72954d0e9b1c7b64d46c0c660482f\": rpc error: code = NotFound desc = could not find container \"cc0745a88f315f80b29eaf0855931a650ff72954d0e9b1c7b64d46c0c660482f\": container with ID starting with cc0745a88f315f80b29eaf0855931a650ff72954d0e9b1c7b64d46c0c660482f not found: ID does not exist" Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.332665 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae2b700f-93b0-4324-bac8-677b580a18a7-kube-api-access-jzpr7" (OuterVolumeSpecName: "kube-api-access-jzpr7") pod "ae2b700f-93b0-4324-bac8-677b580a18a7" (UID: "ae2b700f-93b0-4324-bac8-677b580a18a7"). 
InnerVolumeSpecName "kube-api-access-jzpr7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.337239 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae2b700f-93b0-4324-bac8-677b580a18a7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ae2b700f-93b0-4324-bac8-677b580a18a7" (UID: "ae2b700f-93b0-4324-bac8-677b580a18a7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.351178 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae2b700f-93b0-4324-bac8-677b580a18a7-config-data" (OuterVolumeSpecName: "config-data") pod "ae2b700f-93b0-4324-bac8-677b580a18a7" (UID: "ae2b700f-93b0-4324-bac8-677b580a18a7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.398138 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04cde7cd-7f43-4870-a4fc-c8bdb0b4188a-logs\") pod \"nova-metadata-0\" (UID: \"04cde7cd-7f43-4870-a4fc-c8bdb0b4188a\") " pod="openstack/nova-metadata-0" Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.398300 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/04cde7cd-7f43-4870-a4fc-c8bdb0b4188a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"04cde7cd-7f43-4870-a4fc-c8bdb0b4188a\") " pod="openstack/nova-metadata-0" Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.398509 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04cde7cd-7f43-4870-a4fc-c8bdb0b4188a-config-data\") pod \"nova-metadata-0\" (UID: \"04cde7cd-7f43-4870-a4fc-c8bdb0b4188a\") " pod="openstack/nova-metadata-0" Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.398563 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04cde7cd-7f43-4870-a4fc-c8bdb0b4188a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"04cde7cd-7f43-4870-a4fc-c8bdb0b4188a\") " pod="openstack/nova-metadata-0" Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.398618 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zzmg\" (UniqueName: \"kubernetes.io/projected/04cde7cd-7f43-4870-a4fc-c8bdb0b4188a-kube-api-access-6zzmg\") pod \"nova-metadata-0\" (UID: \"04cde7cd-7f43-4870-a4fc-c8bdb0b4188a\") " pod="openstack/nova-metadata-0" Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.398686 4755 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae2b700f-93b0-4324-bac8-677b580a18a7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.398710 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jzpr7\" (UniqueName: \"kubernetes.io/projected/ae2b700f-93b0-4324-bac8-677b580a18a7-kube-api-access-jzpr7\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.398738 4755 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ae2b700f-93b0-4324-bac8-677b580a18a7-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.399490 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04cde7cd-7f43-4870-a4fc-c8bdb0b4188a-logs\") pod \"nova-metadata-0\" (UID: \"04cde7cd-7f43-4870-a4fc-c8bdb0b4188a\") " pod="openstack/nova-metadata-0" Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.403678 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/04cde7cd-7f43-4870-a4fc-c8bdb0b4188a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"04cde7cd-7f43-4870-a4fc-c8bdb0b4188a\") " pod="openstack/nova-metadata-0" Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.405386 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04cde7cd-7f43-4870-a4fc-c8bdb0b4188a-config-data\") pod \"nova-metadata-0\" (UID: \"04cde7cd-7f43-4870-a4fc-c8bdb0b4188a\") " pod="openstack/nova-metadata-0" Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.407425 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04cde7cd-7f43-4870-a4fc-c8bdb0b4188a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"04cde7cd-7f43-4870-a4fc-c8bdb0b4188a\") " pod="openstack/nova-metadata-0" Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.419094 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zzmg\" (UniqueName: \"kubernetes.io/projected/04cde7cd-7f43-4870-a4fc-c8bdb0b4188a-kube-api-access-6zzmg\") pod \"nova-metadata-0\" (UID: \"04cde7cd-7f43-4870-a4fc-c8bdb0b4188a\") " pod="openstack/nova-metadata-0" Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.552431 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.567504 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.582199 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.594691 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.596428 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.602197 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.616865 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.714849 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg7wt\" (UniqueName: \"kubernetes.io/projected/7129b800-f170-4cbf-9b18-26d01b1cdd26-kube-api-access-zg7wt\") pod \"nova-api-0\" (UID: \"7129b800-f170-4cbf-9b18-26d01b1cdd26\") " pod="openstack/nova-api-0" Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.715149 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7129b800-f170-4cbf-9b18-26d01b1cdd26-config-data\") pod \"nova-api-0\" (UID: \"7129b800-f170-4cbf-9b18-26d01b1cdd26\") " pod="openstack/nova-api-0" Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.715544 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7129b800-f170-4cbf-9b18-26d01b1cdd26-logs\") pod \"nova-api-0\" (UID: \"7129b800-f170-4cbf-9b18-26d01b1cdd26\") " pod="openstack/nova-api-0" Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.715596 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7129b800-f170-4cbf-9b18-26d01b1cdd26-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7129b800-f170-4cbf-9b18-26d01b1cdd26\") " pod="openstack/nova-api-0" Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.819132 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zg7wt\" (UniqueName: \"kubernetes.io/projected/7129b800-f170-4cbf-9b18-26d01b1cdd26-kube-api-access-zg7wt\") pod \"nova-api-0\" (UID: \"7129b800-f170-4cbf-9b18-26d01b1cdd26\") " pod="openstack/nova-api-0" Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.819198 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7129b800-f170-4cbf-9b18-26d01b1cdd26-config-data\") pod \"nova-api-0\" (UID: \"7129b800-f170-4cbf-9b18-26d01b1cdd26\") " pod="openstack/nova-api-0" Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.820653 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7129b800-f170-4cbf-9b18-26d01b1cdd26-logs\") pod \"nova-api-0\" (UID: \"7129b800-f170-4cbf-9b18-26d01b1cdd26\") " pod="openstack/nova-api-0" Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.820949 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7129b800-f170-4cbf-9b18-26d01b1cdd26-logs\") pod \"nova-api-0\" (UID: \"7129b800-f170-4cbf-9b18-26d01b1cdd26\") " pod="openstack/nova-api-0" Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.821092 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7129b800-f170-4cbf-9b18-26d01b1cdd26-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7129b800-f170-4cbf-9b18-26d01b1cdd26\") " 
pod="openstack/nova-api-0" Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.830794 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7129b800-f170-4cbf-9b18-26d01b1cdd26-config-data\") pod \"nova-api-0\" (UID: \"7129b800-f170-4cbf-9b18-26d01b1cdd26\") " pod="openstack/nova-api-0" Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.831333 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7129b800-f170-4cbf-9b18-26d01b1cdd26-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7129b800-f170-4cbf-9b18-26d01b1cdd26\") " pod="openstack/nova-api-0" Dec 10 15:46:42 crc kubenswrapper[4755]: I1210 15:46:42.836698 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zg7wt\" (UniqueName: \"kubernetes.io/projected/7129b800-f170-4cbf-9b18-26d01b1cdd26-kube-api-access-zg7wt\") pod \"nova-api-0\" (UID: \"7129b800-f170-4cbf-9b18-26d01b1cdd26\") " pod="openstack/nova-api-0" Dec 10 15:46:43 crc kubenswrapper[4755]: I1210 15:46:43.035443 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 10 15:46:43 crc kubenswrapper[4755]: I1210 15:46:43.164794 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ae2b700f-93b0-4324-bac8-677b580a18a7","Type":"ContainerDied","Data":"4e902f20e4967238d1e2540ee6adf4d978e765231c9fd64d7e932d6d9fdfefc3"} Dec 10 15:46:43 crc kubenswrapper[4755]: I1210 15:46:43.164846 4755 scope.go:117] "RemoveContainer" containerID="26f0e397131eb766311575b702ff80fdd6af99fe2c76eae931e9487599cec731" Dec 10 15:46:43 crc kubenswrapper[4755]: I1210 15:46:43.164952 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 10 15:46:43 crc kubenswrapper[4755]: I1210 15:46:43.267910 4755 generic.go:334] "Generic (PLEG): container finished" podID="4cb6ed71-48b0-45c8-a470-4b6441c7bff5" containerID="93b4acfc4caa646841ae16d8c2c69e54573a0966d606f11bfdd56c1ac1136939" exitCode=0 Dec 10 15:46:43 crc kubenswrapper[4755]: I1210 15:46:43.267970 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wxn4w" event={"ID":"4cb6ed71-48b0-45c8-a470-4b6441c7bff5","Type":"ContainerDied","Data":"93b4acfc4caa646841ae16d8c2c69e54573a0966d606f11bfdd56c1ac1136939"} Dec 10 15:46:43 crc kubenswrapper[4755]: I1210 15:46:43.291757 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 10 15:46:43 crc kubenswrapper[4755]: I1210 15:46:43.381693 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 10 15:46:43 crc kubenswrapper[4755]: I1210 15:46:43.415406 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 10 15:46:43 crc kubenswrapper[4755]: I1210 15:46:43.438039 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 10 15:46:43 crc kubenswrapper[4755]: I1210 15:46:43.439585 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 10 15:46:43 crc kubenswrapper[4755]: I1210 15:46:43.460886 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 10 15:46:43 crc kubenswrapper[4755]: I1210 15:46:43.463199 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 10 15:46:43 crc kubenswrapper[4755]: I1210 15:46:43.472515 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14ae9d17-ca94-4a4a-8148-f24ad4a8f268-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"14ae9d17-ca94-4a4a-8148-f24ad4a8f268\") " pod="openstack/nova-scheduler-0" Dec 10 15:46:43 crc kubenswrapper[4755]: I1210 15:46:43.472610 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14ae9d17-ca94-4a4a-8148-f24ad4a8f268-config-data\") pod \"nova-scheduler-0\" (UID: \"14ae9d17-ca94-4a4a-8148-f24ad4a8f268\") " pod="openstack/nova-scheduler-0" Dec 10 15:46:43 crc kubenswrapper[4755]: I1210 15:46:43.472731 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjxck\" (UniqueName: \"kubernetes.io/projected/14ae9d17-ca94-4a4a-8148-f24ad4a8f268-kube-api-access-jjxck\") pod \"nova-scheduler-0\" (UID: \"14ae9d17-ca94-4a4a-8148-f24ad4a8f268\") " pod="openstack/nova-scheduler-0" Dec 10 15:46:43 crc kubenswrapper[4755]: I1210 15:46:43.575120 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14ae9d17-ca94-4a4a-8148-f24ad4a8f268-config-data\") pod \"nova-scheduler-0\" (UID: \"14ae9d17-ca94-4a4a-8148-f24ad4a8f268\") " pod="openstack/nova-scheduler-0" Dec 10 15:46:43 crc kubenswrapper[4755]: I1210 15:46:43.575369 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjxck\" (UniqueName: \"kubernetes.io/projected/14ae9d17-ca94-4a4a-8148-f24ad4a8f268-kube-api-access-jjxck\") pod \"nova-scheduler-0\" (UID: \"14ae9d17-ca94-4a4a-8148-f24ad4a8f268\") " pod="openstack/nova-scheduler-0" Dec 10 15:46:43 crc kubenswrapper[4755]: I1210 15:46:43.575457 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14ae9d17-ca94-4a4a-8148-f24ad4a8f268-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"14ae9d17-ca94-4a4a-8148-f24ad4a8f268\") " pod="openstack/nova-scheduler-0" Dec 10 15:46:43 crc kubenswrapper[4755]: I1210 15:46:43.587268 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14ae9d17-ca94-4a4a-8148-f24ad4a8f268-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"14ae9d17-ca94-4a4a-8148-f24ad4a8f268\") " pod="openstack/nova-scheduler-0" Dec 10 15:46:43 crc kubenswrapper[4755]: I1210 15:46:43.596125 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14ae9d17-ca94-4a4a-8148-f24ad4a8f268-config-data\") pod \"nova-scheduler-0\" (UID: \"14ae9d17-ca94-4a4a-8148-f24ad4a8f268\") " pod="openstack/nova-scheduler-0" Dec 10 15:46:43 crc kubenswrapper[4755]: I1210 15:46:43.602668 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjxck\" (UniqueName: 
\"kubernetes.io/projected/14ae9d17-ca94-4a4a-8148-f24ad4a8f268-kube-api-access-jjxck\") pod \"nova-scheduler-0\" (UID: \"14ae9d17-ca94-4a4a-8148-f24ad4a8f268\") " pod="openstack/nova-scheduler-0" Dec 10 15:46:43 crc kubenswrapper[4755]: I1210 15:46:43.744758 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 10 15:46:43 crc kubenswrapper[4755]: W1210 15:46:43.747280 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7129b800_f170_4cbf_9b18_26d01b1cdd26.slice/crio-365c28a19621b484a94ae54c49e779d5cad9af65c113bbaa0a52d59f9dd2ef18 WatchSource:0}: Error finding container 365c28a19621b484a94ae54c49e779d5cad9af65c113bbaa0a52d59f9dd2ef18: Status 404 returned error can't find the container with id 365c28a19621b484a94ae54c49e779d5cad9af65c113bbaa0a52d59f9dd2ef18 Dec 10 15:46:43 crc kubenswrapper[4755]: I1210 15:46:43.773134 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45648d9c-bd22-443a-bf3f-8c08998388ec" path="/var/lib/kubelet/pods/45648d9c-bd22-443a-bf3f-8c08998388ec/volumes" Dec 10 15:46:43 crc kubenswrapper[4755]: I1210 15:46:43.774206 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b6fac70-baa4-451b-92bc-3b26e2fc08ec" path="/var/lib/kubelet/pods/4b6fac70-baa4-451b-92bc-3b26e2fc08ec/volumes" Dec 10 15:46:43 crc kubenswrapper[4755]: I1210 15:46:43.774986 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae2b700f-93b0-4324-bac8-677b580a18a7" path="/var/lib/kubelet/pods/ae2b700f-93b0-4324-bac8-677b580a18a7/volumes" Dec 10 15:46:43 crc kubenswrapper[4755]: I1210 15:46:43.776312 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d26dac70-f3b8-4932-992f-5b440275b499" path="/var/lib/kubelet/pods/d26dac70-f3b8-4932-992f-5b440275b499/volumes" Dec 10 15:46:43 crc kubenswrapper[4755]: I1210 15:46:43.848741 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 10 15:46:44 crc kubenswrapper[4755]: I1210 15:46:44.291453 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"04cde7cd-7f43-4870-a4fc-c8bdb0b4188a","Type":"ContainerStarted","Data":"6086fa8be8b19c45b98b1004a3d289895d70855527be98b4a9fa4c956dbba37f"} Dec 10 15:46:44 crc kubenswrapper[4755]: I1210 15:46:44.292027 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"04cde7cd-7f43-4870-a4fc-c8bdb0b4188a","Type":"ContainerStarted","Data":"698027df8097caa4f6581591afc1210a9ba63f1ddce2ee55c77905bb52857e00"} Dec 10 15:46:44 crc kubenswrapper[4755]: I1210 15:46:44.294258 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7129b800-f170-4cbf-9b18-26d01b1cdd26","Type":"ContainerStarted","Data":"f8b294449df5ef04262c533d8682ec6bd1dddb8009bbbff61e951331ed15486e"} Dec 10 15:46:44 crc kubenswrapper[4755]: I1210 15:46:44.294298 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7129b800-f170-4cbf-9b18-26d01b1cdd26","Type":"ContainerStarted","Data":"365c28a19621b484a94ae54c49e779d5cad9af65c113bbaa0a52d59f9dd2ef18"} Dec 10 15:46:44 crc kubenswrapper[4755]: I1210 15:46:44.602364 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 10 15:46:44 crc kubenswrapper[4755]: I1210 15:46:44.832178 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-58bd69657f-n5hkt" podUID="45648d9c-bd22-443a-bf3f-8c08998388ec" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.185:5353: i/o timeout" Dec 10 15:46:45 crc kubenswrapper[4755]: I1210 15:46:45.316145 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"04cde7cd-7f43-4870-a4fc-c8bdb0b4188a","Type":"ContainerStarted","Data":"12a08f7fafdc5148848eae58acbfe57f0fdbb16cedeb387bb3f2dfa8499bff19"} Dec 10 15:46:45 crc kubenswrapper[4755]: I1210 15:46:45.325892 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"14ae9d17-ca94-4a4a-8148-f24ad4a8f268","Type":"ContainerStarted","Data":"13e8855d5ddcd82893c087a54ffbfd319e8a9a2e0328720840004fa890f48894"} Dec 10 15:46:45 crc kubenswrapper[4755]: I1210 15:46:45.326175 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"14ae9d17-ca94-4a4a-8148-f24ad4a8f268","Type":"ContainerStarted","Data":"c8ac9fc09600e932189eaf102e994da8ead58aaf2389c682648cfbb1afd55d3b"} Dec 10 15:46:45 crc kubenswrapper[4755]: I1210 15:46:45.353135 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wxn4w" event={"ID":"4cb6ed71-48b0-45c8-a470-4b6441c7bff5","Type":"ContainerStarted","Data":"9d7d034b98b54ecedc10a03548420eee8b73a2a66128a96badee77de13588318"} Dec 10 15:46:45 crc kubenswrapper[4755]: I1210 15:46:45.353497 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.353480124 podStartE2EDuration="3.353480124s" podCreationTimestamp="2025-12-10 15:46:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:46:45.338795966 +0000 UTC m=+1401.939679608" watchObservedRunningTime="2025-12-10 15:46:45.353480124 +0000 UTC m=+1401.954363756" Dec 10 15:46:45 crc kubenswrapper[4755]: I1210 
15:46:45.354973 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7129b800-f170-4cbf-9b18-26d01b1cdd26","Type":"ContainerStarted","Data":"68c75d4e1d165ebd8cec04c9eabcdfe2b89031902044b808a45d1a5924fb1ecd"} Dec 10 15:46:45 crc kubenswrapper[4755]: I1210 15:46:45.366975 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.36696098 podStartE2EDuration="2.36696098s" podCreationTimestamp="2025-12-10 15:46:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:46:45.363959059 +0000 UTC m=+1401.964842691" watchObservedRunningTime="2025-12-10 15:46:45.36696098 +0000 UTC m=+1401.967844612" Dec 10 15:46:45 crc kubenswrapper[4755]: I1210 15:46:45.399931 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.399913085 podStartE2EDuration="3.399913085s" podCreationTimestamp="2025-12-10 15:46:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:46:45.396096532 +0000 UTC m=+1401.996980184" watchObservedRunningTime="2025-12-10 15:46:45.399913085 +0000 UTC m=+1402.000796717" Dec 10 15:46:45 crc kubenswrapper[4755]: I1210 15:46:45.427492 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wxn4w" podStartSLOduration=2.521053185 podStartE2EDuration="28.427448173s" podCreationTimestamp="2025-12-10 15:46:17 +0000 UTC" firstStartedPulling="2025-12-10 15:46:18.512833547 +0000 UTC m=+1375.113717179" lastFinishedPulling="2025-12-10 15:46:44.419228535 +0000 UTC m=+1401.020112167" observedRunningTime="2025-12-10 15:46:45.415108577 +0000 UTC m=+1402.015992219" watchObservedRunningTime="2025-12-10 15:46:45.427448173 +0000 UTC m=+1402.028331795" Dec 10 15:46:45 crc kubenswrapper[4755]: I1210 15:46:45.669500 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 10 15:46:45 crc kubenswrapper[4755]: I1210 15:46:45.669775 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="60343e12-2433-4e98-9759-09d5e2b9d82b" containerName="kube-state-metrics" containerID="cri-o://f459e970f94181348851f1bcbebbaaf108eded56e4be8fbd9873dc691cfcad1e" gracePeriod=30 Dec 10 15:46:46 crc kubenswrapper[4755]: I1210 15:46:46.367766 4755 generic.go:334] "Generic (PLEG): container finished" podID="60343e12-2433-4e98-9759-09d5e2b9d82b" containerID="f459e970f94181348851f1bcbebbaaf108eded56e4be8fbd9873dc691cfcad1e" exitCode=2 Dec 10 15:46:46 crc kubenswrapper[4755]: I1210 15:46:46.367849 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"60343e12-2433-4e98-9759-09d5e2b9d82b","Type":"ContainerDied","Data":"f459e970f94181348851f1bcbebbaaf108eded56e4be8fbd9873dc691cfcad1e"} Dec 10 15:46:47 crc kubenswrapper[4755]: I1210 15:46:47.012565 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 10 15:46:47 crc kubenswrapper[4755]: I1210 15:46:47.169594 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltt7r\" (UniqueName: \"kubernetes.io/projected/60343e12-2433-4e98-9759-09d5e2b9d82b-kube-api-access-ltt7r\") pod \"60343e12-2433-4e98-9759-09d5e2b9d82b\" (UID: \"60343e12-2433-4e98-9759-09d5e2b9d82b\") " Dec 10 15:46:47 crc kubenswrapper[4755]: I1210 15:46:47.183718 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60343e12-2433-4e98-9759-09d5e2b9d82b-kube-api-access-ltt7r" (OuterVolumeSpecName: "kube-api-access-ltt7r") pod "60343e12-2433-4e98-9759-09d5e2b9d82b" (UID: "60343e12-2433-4e98-9759-09d5e2b9d82b"). InnerVolumeSpecName "kube-api-access-ltt7r". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:46:47 crc kubenswrapper[4755]: I1210 15:46:47.272527 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ltt7r\" (UniqueName: \"kubernetes.io/projected/60343e12-2433-4e98-9759-09d5e2b9d82b-kube-api-access-ltt7r\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:47 crc kubenswrapper[4755]: I1210 15:46:47.378317 4755 generic.go:334] "Generic (PLEG): container finished" podID="790c3b51-ebb0-4e09-83ed-ecc8cf5c7701" containerID="1bcf89598c6d7fd6ee49ea88b74b2bf5a037da185e4ec1f1c2ebad0b1f9487c9" exitCode=0 Dec 10 15:46:47 crc kubenswrapper[4755]: I1210 15:46:47.378353 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-vx69q" event={"ID":"790c3b51-ebb0-4e09-83ed-ecc8cf5c7701","Type":"ContainerDied","Data":"1bcf89598c6d7fd6ee49ea88b74b2bf5a037da185e4ec1f1c2ebad0b1f9487c9"} Dec 10 15:46:47 crc kubenswrapper[4755]: I1210 15:46:47.380081 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"60343e12-2433-4e98-9759-09d5e2b9d82b","Type":"ContainerDied","Data":"4f9ec82ee4f1e523df2e79fb89440f8a104bd0cbc1654fd9de4ae3617654aebd"} Dec 10 15:46:47 crc kubenswrapper[4755]: I1210 15:46:47.380127 4755 scope.go:117] "RemoveContainer" containerID="f459e970f94181348851f1bcbebbaaf108eded56e4be8fbd9873dc691cfcad1e" Dec 10 15:46:47 crc kubenswrapper[4755]: I1210 15:46:47.380148 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 10 15:46:47 crc kubenswrapper[4755]: I1210 15:46:47.447501 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 10 15:46:47 crc kubenswrapper[4755]: I1210 15:46:47.465403 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 10 15:46:47 crc kubenswrapper[4755]: I1210 15:46:47.477060 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Dec 10 15:46:47 crc kubenswrapper[4755]: E1210 15:46:47.477500 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60343e12-2433-4e98-9759-09d5e2b9d82b" containerName="kube-state-metrics" Dec 10 15:46:47 crc kubenswrapper[4755]: I1210 15:46:47.477517 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="60343e12-2433-4e98-9759-09d5e2b9d82b" containerName="kube-state-metrics" Dec 10 15:46:47 crc kubenswrapper[4755]: I1210 15:46:47.477760 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="60343e12-2433-4e98-9759-09d5e2b9d82b" containerName="kube-state-metrics" Dec 10 15:46:47 crc kubenswrapper[4755]: I1210 15:46:47.478535 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 10 15:46:47 crc kubenswrapper[4755]: I1210 15:46:47.486965 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Dec 10 15:46:47 crc kubenswrapper[4755]: I1210 15:46:47.487320 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Dec 10 15:46:47 crc kubenswrapper[4755]: I1210 15:46:47.489989 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 10 15:46:47 crc kubenswrapper[4755]: I1210 15:46:47.510134 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wxn4w" Dec 10 15:46:47 crc kubenswrapper[4755]: I1210 15:46:47.510168 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wxn4w" Dec 10 15:46:47 crc kubenswrapper[4755]: I1210 15:46:47.554066 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 10 15:46:47 crc kubenswrapper[4755]: I1210 15:46:47.554111 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 10 15:46:47 crc kubenswrapper[4755]: I1210 15:46:47.584860 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6tnz\" (UniqueName: \"kubernetes.io/projected/6a99a8eb-9c08-42c4-ba66-5c5b641b39b1-kube-api-access-b6tnz\") pod \"kube-state-metrics-0\" (UID: \"6a99a8eb-9c08-42c4-ba66-5c5b641b39b1\") " pod="openstack/kube-state-metrics-0" Dec 10 15:46:47 crc kubenswrapper[4755]: I1210 15:46:47.584951 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/6a99a8eb-9c08-42c4-ba66-5c5b641b39b1-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"6a99a8eb-9c08-42c4-ba66-5c5b641b39b1\") " pod="openstack/kube-state-metrics-0" Dec 10 15:46:47 crc kubenswrapper[4755]: I1210 15:46:47.585089 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/6a99a8eb-9c08-42c4-ba66-5c5b641b39b1-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"6a99a8eb-9c08-42c4-ba66-5c5b641b39b1\") " pod="openstack/kube-state-metrics-0" Dec 10 15:46:47 crc kubenswrapper[4755]: I1210 15:46:47.585128 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a99a8eb-9c08-42c4-ba66-5c5b641b39b1-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"6a99a8eb-9c08-42c4-ba66-5c5b641b39b1\") " pod="openstack/kube-state-metrics-0" Dec 10 15:46:47 crc kubenswrapper[4755]: I1210 15:46:47.687681 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6tnz\" (UniqueName: \"kubernetes.io/projected/6a99a8eb-9c08-42c4-ba66-5c5b641b39b1-kube-api-access-b6tnz\") pod \"kube-state-metrics-0\" (UID: \"6a99a8eb-9c08-42c4-ba66-5c5b641b39b1\") " pod="openstack/kube-state-metrics-0" Dec 10 15:46:47 crc kubenswrapper[4755]: I1210 15:46:47.687805 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/6a99a8eb-9c08-42c4-ba66-5c5b641b39b1-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"6a99a8eb-9c08-42c4-ba66-5c5b641b39b1\") " pod="openstack/kube-state-metrics-0" Dec 10 15:46:47 crc kubenswrapper[4755]: I1210 15:46:47.687827 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a99a8eb-9c08-42c4-ba66-5c5b641b39b1-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"6a99a8eb-9c08-42c4-ba66-5c5b641b39b1\") " pod="openstack/kube-state-metrics-0" Dec 10 15:46:47 crc kubenswrapper[4755]: I1210 15:46:47.687847 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a99a8eb-9c08-42c4-ba66-5c5b641b39b1-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"6a99a8eb-9c08-42c4-ba66-5c5b641b39b1\") " pod="openstack/kube-state-metrics-0" Dec 10 15:46:47 crc kubenswrapper[4755]: I1210 15:46:47.704191 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a99a8eb-9c08-42c4-ba66-5c5b641b39b1-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"6a99a8eb-9c08-42c4-ba66-5c5b641b39b1\") " pod="openstack/kube-state-metrics-0" Dec 10 15:46:47 crc kubenswrapper[4755]: I1210 15:46:47.707271 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a99a8eb-9c08-42c4-ba66-5c5b641b39b1-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"6a99a8eb-9c08-42c4-ba66-5c5b641b39b1\") " pod="openstack/kube-state-metrics-0" Dec 10 15:46:47 crc kubenswrapper[4755]: I1210 15:46:47.707391 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/6a99a8eb-9c08-42c4-ba66-5c5b641b39b1-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"6a99a8eb-9c08-42c4-ba66-5c5b641b39b1\") " pod="openstack/kube-state-metrics-0" Dec 10 15:46:47 crc kubenswrapper[4755]: I1210 15:46:47.707445 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6tnz\" (UniqueName: 
\"kubernetes.io/projected/6a99a8eb-9c08-42c4-ba66-5c5b641b39b1-kube-api-access-b6tnz\") pod \"kube-state-metrics-0\" (UID: \"6a99a8eb-9c08-42c4-ba66-5c5b641b39b1\") " pod="openstack/kube-state-metrics-0" Dec 10 15:46:47 crc kubenswrapper[4755]: I1210 15:46:47.769575 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60343e12-2433-4e98-9759-09d5e2b9d82b" path="/var/lib/kubelet/pods/60343e12-2433-4e98-9759-09d5e2b9d82b/volumes" Dec 10 15:46:47 crc kubenswrapper[4755]: I1210 15:46:47.807820 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 10 15:46:48 crc kubenswrapper[4755]: I1210 15:46:48.185086 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 10 15:46:48 crc kubenswrapper[4755]: I1210 15:46:48.185628 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1eafd33a-2808-4b35-b947-a7a60e905060" containerName="ceilometer-central-agent" containerID="cri-o://96b996c0e138ca9310c156598483c370545afbf4a92c9cc09db0447edc81faac" gracePeriod=30 Dec 10 15:46:48 crc kubenswrapper[4755]: I1210 15:46:48.185777 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1eafd33a-2808-4b35-b947-a7a60e905060" containerName="proxy-httpd" containerID="cri-o://19606ce160e6b91b398b11d4357f9766b677c5c385b73e29c58c919fe5b0c455" gracePeriod=30 Dec 10 15:46:48 crc kubenswrapper[4755]: I1210 15:46:48.185815 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1eafd33a-2808-4b35-b947-a7a60e905060" containerName="sg-core" containerID="cri-o://41e0df8f17e958fc25020aaf5d78783fb92230f74c32e6ee83a255045e7dd8cb" gracePeriod=30 Dec 10 15:46:48 crc kubenswrapper[4755]: I1210 15:46:48.185849 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1eafd33a-2808-4b35-b947-a7a60e905060" containerName="ceilometer-notification-agent" containerID="cri-o://18416dbacce5aa6468f363c38fa0c79eaf3627ed5f08a7471c3a48ba24e063d4" gracePeriod=30 Dec 10 15:46:48 crc kubenswrapper[4755]: I1210 15:46:48.595685 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wxn4w" podUID="4cb6ed71-48b0-45c8-a470-4b6441c7bff5" containerName="registry-server" probeResult="failure" output=< Dec 10 15:46:48 crc kubenswrapper[4755]: timeout: failed to connect service ":50051" within 1s Dec 10 15:46:48 crc kubenswrapper[4755]: > Dec 10 15:46:48 crc kubenswrapper[4755]: I1210 15:46:48.746654 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 10 15:46:48 crc kubenswrapper[4755]: I1210 15:46:48.850085 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 10 15:46:49 crc kubenswrapper[4755]: I1210 15:46:49.063954 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-vx69q" Dec 10 15:46:49 crc kubenswrapper[4755]: E1210 15:46:49.097871 4755 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1eafd33a_2808_4b35_b947_a7a60e905060.slice/crio-conmon-96b996c0e138ca9310c156598483c370545afbf4a92c9cc09db0447edc81faac.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1eafd33a_2808_4b35_b947_a7a60e905060.slice/crio-96b996c0e138ca9310c156598483c370545afbf4a92c9cc09db0447edc81faac.scope\": RecentStats: unable to find data in memory cache]" Dec 10 15:46:49 crc kubenswrapper[4755]: I1210 15:46:49.163185 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/790c3b51-ebb0-4e09-83ed-ecc8cf5c7701-scripts\") pod \"790c3b51-ebb0-4e09-83ed-ecc8cf5c7701\" (UID: \"790c3b51-ebb0-4e09-83ed-ecc8cf5c7701\") " Dec 10 15:46:49 crc kubenswrapper[4755]: I1210 15:46:49.163358 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2fvg\" (UniqueName: \"kubernetes.io/projected/790c3b51-ebb0-4e09-83ed-ecc8cf5c7701-kube-api-access-z2fvg\") pod \"790c3b51-ebb0-4e09-83ed-ecc8cf5c7701\" (UID: \"790c3b51-ebb0-4e09-83ed-ecc8cf5c7701\") " Dec 10 15:46:49 crc kubenswrapper[4755]: I1210 15:46:49.163502 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/790c3b51-ebb0-4e09-83ed-ecc8cf5c7701-combined-ca-bundle\") pod \"790c3b51-ebb0-4e09-83ed-ecc8cf5c7701\" (UID: \"790c3b51-ebb0-4e09-83ed-ecc8cf5c7701\") " Dec 10 15:46:49 crc kubenswrapper[4755]: I1210 15:46:49.163565 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/790c3b51-ebb0-4e09-83ed-ecc8cf5c7701-config-data\") pod \"790c3b51-ebb0-4e09-83ed-ecc8cf5c7701\" (UID: \"790c3b51-ebb0-4e09-83ed-ecc8cf5c7701\") " Dec 10 15:46:49 crc kubenswrapper[4755]: I1210 15:46:49.169367 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/790c3b51-ebb0-4e09-83ed-ecc8cf5c7701-scripts" (OuterVolumeSpecName: "scripts") pod "790c3b51-ebb0-4e09-83ed-ecc8cf5c7701" (UID: "790c3b51-ebb0-4e09-83ed-ecc8cf5c7701"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:46:49 crc kubenswrapper[4755]: I1210 15:46:49.195820 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/790c3b51-ebb0-4e09-83ed-ecc8cf5c7701-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "790c3b51-ebb0-4e09-83ed-ecc8cf5c7701" (UID: "790c3b51-ebb0-4e09-83ed-ecc8cf5c7701"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:46:49 crc kubenswrapper[4755]: I1210 15:46:49.196870 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/790c3b51-ebb0-4e09-83ed-ecc8cf5c7701-kube-api-access-z2fvg" (OuterVolumeSpecName: "kube-api-access-z2fvg") pod "790c3b51-ebb0-4e09-83ed-ecc8cf5c7701" (UID: "790c3b51-ebb0-4e09-83ed-ecc8cf5c7701"). InnerVolumeSpecName "kube-api-access-z2fvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:46:49 crc kubenswrapper[4755]: I1210 15:46:49.206659 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/790c3b51-ebb0-4e09-83ed-ecc8cf5c7701-config-data" (OuterVolumeSpecName: "config-data") pod "790c3b51-ebb0-4e09-83ed-ecc8cf5c7701" (UID: "790c3b51-ebb0-4e09-83ed-ecc8cf5c7701"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:46:49 crc kubenswrapper[4755]: I1210 15:46:49.265758 4755 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/790c3b51-ebb0-4e09-83ed-ecc8cf5c7701-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:49 crc kubenswrapper[4755]: I1210 15:46:49.265809 4755 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/790c3b51-ebb0-4e09-83ed-ecc8cf5c7701-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:49 crc kubenswrapper[4755]: I1210 15:46:49.265821 4755 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/790c3b51-ebb0-4e09-83ed-ecc8cf5c7701-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:49 crc kubenswrapper[4755]: I1210 15:46:49.265831 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2fvg\" (UniqueName: \"kubernetes.io/projected/790c3b51-ebb0-4e09-83ed-ecc8cf5c7701-kube-api-access-z2fvg\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:49 crc kubenswrapper[4755]: I1210 15:46:49.476409 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6a99a8eb-9c08-42c4-ba66-5c5b641b39b1","Type":"ContainerStarted","Data":"24aea69ff22f5181b9aa625d06d6edaf3103d060bd06bc77fba80091430a36dc"} Dec 10 15:46:49 crc kubenswrapper[4755]: I1210 15:46:49.480460 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-vx69q" Dec 10 15:46:49 crc kubenswrapper[4755]: I1210 15:46:49.480483 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-vx69q" event={"ID":"790c3b51-ebb0-4e09-83ed-ecc8cf5c7701","Type":"ContainerDied","Data":"6a24900d1ffa69db85629aa44df6fd4dfcb7c8f215f2a414265406a5c2157c54"} Dec 10 15:46:49 crc kubenswrapper[4755]: I1210 15:46:49.480541 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a24900d1ffa69db85629aa44df6fd4dfcb7c8f215f2a414265406a5c2157c54" Dec 10 15:46:49 crc kubenswrapper[4755]: I1210 15:46:49.486063 4755 generic.go:334] "Generic (PLEG): container finished" podID="1eafd33a-2808-4b35-b947-a7a60e905060" containerID="19606ce160e6b91b398b11d4357f9766b677c5c385b73e29c58c919fe5b0c455" exitCode=0 Dec 10 15:46:49 crc kubenswrapper[4755]: I1210 15:46:49.486101 4755 generic.go:334] "Generic (PLEG): container finished" podID="1eafd33a-2808-4b35-b947-a7a60e905060" containerID="41e0df8f17e958fc25020aaf5d78783fb92230f74c32e6ee83a255045e7dd8cb" exitCode=2 Dec 10 15:46:49 crc kubenswrapper[4755]: I1210 15:46:49.486113 4755 generic.go:334] "Generic (PLEG): container finished" podID="1eafd33a-2808-4b35-b947-a7a60e905060" containerID="96b996c0e138ca9310c156598483c370545afbf4a92c9cc09db0447edc81faac" exitCode=0 Dec 10 15:46:49 crc kubenswrapper[4755]: I1210 15:46:49.486146 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1eafd33a-2808-4b35-b947-a7a60e905060","Type":"ContainerDied","Data":"19606ce160e6b91b398b11d4357f9766b677c5c385b73e29c58c919fe5b0c455"} Dec 10 15:46:49 crc kubenswrapper[4755]: I1210 15:46:49.486209 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1eafd33a-2808-4b35-b947-a7a60e905060","Type":"ContainerDied","Data":"41e0df8f17e958fc25020aaf5d78783fb92230f74c32e6ee83a255045e7dd8cb"} Dec 10 15:46:49 crc kubenswrapper[4755]: I1210 15:46:49.486227 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1eafd33a-2808-4b35-b947-a7a60e905060","Type":"ContainerDied","Data":"96b996c0e138ca9310c156598483c370545afbf4a92c9cc09db0447edc81faac"} Dec 10 15:46:49 crc kubenswrapper[4755]: I1210 15:46:49.513242 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 10 15:46:49 crc kubenswrapper[4755]: E1210 15:46:49.513717 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="790c3b51-ebb0-4e09-83ed-ecc8cf5c7701" containerName="nova-cell1-conductor-db-sync" Dec 10 15:46:49 crc kubenswrapper[4755]: I1210 15:46:49.513739 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="790c3b51-ebb0-4e09-83ed-ecc8cf5c7701" containerName="nova-cell1-conductor-db-sync" Dec 10 15:46:49 crc kubenswrapper[4755]: I1210 15:46:49.513971 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="790c3b51-ebb0-4e09-83ed-ecc8cf5c7701" containerName="nova-cell1-conductor-db-sync" Dec 10 15:46:49 crc kubenswrapper[4755]: I1210 15:46:49.514761 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 10 15:46:49 crc kubenswrapper[4755]: I1210 15:46:49.526238 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 10 15:46:49 crc kubenswrapper[4755]: I1210 15:46:49.531815 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 10 15:46:49 crc kubenswrapper[4755]: I1210 15:46:49.571052 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx657\" (UniqueName: \"kubernetes.io/projected/8c0bd361-24a4-4d1f-ba5c-6614b244f726-kube-api-access-dx657\") pod \"nova-cell1-conductor-0\" (UID: \"8c0bd361-24a4-4d1f-ba5c-6614b244f726\") " pod="openstack/nova-cell1-conductor-0" Dec 10 15:46:49 crc kubenswrapper[4755]: I1210 15:46:49.571093 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c0bd361-24a4-4d1f-ba5c-6614b244f726-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"8c0bd361-24a4-4d1f-ba5c-6614b244f726\") " pod="openstack/nova-cell1-conductor-0" Dec 10 15:46:49 crc kubenswrapper[4755]: I1210 15:46:49.571177 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c0bd361-24a4-4d1f-ba5c-6614b244f726-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"8c0bd361-24a4-4d1f-ba5c-6614b244f726\") " pod="openstack/nova-cell1-conductor-0" Dec 10 15:46:49 crc kubenswrapper[4755]: I1210 15:46:49.673143 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c0bd361-24a4-4d1f-ba5c-6614b244f726-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"8c0bd361-24a4-4d1f-ba5c-6614b244f726\") " pod="openstack/nova-cell1-conductor-0" Dec 10 15:46:49 crc kubenswrapper[4755]: I1210 15:46:49.674184 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dx657\" (UniqueName: \"kubernetes.io/projected/8c0bd361-24a4-4d1f-ba5c-6614b244f726-kube-api-access-dx657\") pod \"nova-cell1-conductor-0\" (UID: \"8c0bd361-24a4-4d1f-ba5c-6614b244f726\") " pod="openstack/nova-cell1-conductor-0" Dec 10 15:46:49 crc kubenswrapper[4755]: I1210 15:46:49.674237 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c0bd361-24a4-4d1f-ba5c-6614b244f726-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"8c0bd361-24a4-4d1f-ba5c-6614b244f726\") " pod="openstack/nova-cell1-conductor-0" Dec 10 15:46:49 crc kubenswrapper[4755]: I1210 15:46:49.680648 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c0bd361-24a4-4d1f-ba5c-6614b244f726-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"8c0bd361-24a4-4d1f-ba5c-6614b244f726\") " pod="openstack/nova-cell1-conductor-0" Dec 10 15:46:49 crc kubenswrapper[4755]: I1210 15:46:49.680717 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c0bd361-24a4-4d1f-ba5c-6614b244f726-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"8c0bd361-24a4-4d1f-ba5c-6614b244f726\") " pod="openstack/nova-cell1-conductor-0" Dec 10 15:46:49 crc kubenswrapper[4755]: I1210 15:46:49.694750 4755 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dx657\" (UniqueName: \"kubernetes.io/projected/8c0bd361-24a4-4d1f-ba5c-6614b244f726-kube-api-access-dx657\") pod \"nova-cell1-conductor-0\" (UID: \"8c0bd361-24a4-4d1f-ba5c-6614b244f726\") " pod="openstack/nova-cell1-conductor-0" Dec 10 15:46:49 crc kubenswrapper[4755]: I1210 15:46:49.840295 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 10 15:46:50 crc kubenswrapper[4755]: I1210 15:46:50.386042 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 10 15:46:50 crc kubenswrapper[4755]: W1210 15:46:50.389562 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c0bd361_24a4_4d1f_ba5c_6614b244f726.slice/crio-a010cf8ad3716e48e941f731c253c533188a1f12dc1cd64be3b0e5bcec5c4f70 WatchSource:0}: Error finding container a010cf8ad3716e48e941f731c253c533188a1f12dc1cd64be3b0e5bcec5c4f70: Status 404 returned error can't find the container with id a010cf8ad3716e48e941f731c253c533188a1f12dc1cd64be3b0e5bcec5c4f70 Dec 10 15:46:50 crc kubenswrapper[4755]: I1210 15:46:50.531348 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6a99a8eb-9c08-42c4-ba66-5c5b641b39b1","Type":"ContainerStarted","Data":"1731d7ad140d1dff3a17ed0acf971dab11d53dc2a787685e3b0bc1ca2723859c"} Dec 10 15:46:50 crc kubenswrapper[4755]: I1210 15:46:50.531450 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Dec 10 15:46:50 crc kubenswrapper[4755]: I1210 15:46:50.535276 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"8c0bd361-24a4-4d1f-ba5c-6614b244f726","Type":"ContainerStarted","Data":"a010cf8ad3716e48e941f731c253c533188a1f12dc1cd64be3b0e5bcec5c4f70"} Dec 10 15:46:50 crc kubenswrapper[4755]: I1210 15:46:50.556096 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=3.08545037 podStartE2EDuration="3.556078885s" podCreationTimestamp="2025-12-10 15:46:47 +0000 UTC" firstStartedPulling="2025-12-10 15:46:48.864668933 +0000 UTC m=+1405.465552565" lastFinishedPulling="2025-12-10 15:46:49.335297448 +0000 UTC m=+1405.936181080" observedRunningTime="2025-12-10 15:46:50.555405236 +0000 UTC m=+1407.156288868" watchObservedRunningTime="2025-12-10 15:46:50.556078885 +0000 UTC m=+1407.156962517" Dec 10 15:46:52 crc kubenswrapper[4755]: I1210 15:46:52.554340 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 10 15:46:52 crc kubenswrapper[4755]: I1210 15:46:52.554863 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 10 15:46:52 crc kubenswrapper[4755]: I1210 15:46:52.561887 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"8c0bd361-24a4-4d1f-ba5c-6614b244f726","Type":"ContainerStarted","Data":"2d5c921ce67a0e084538dffd1f83c546a1ff421d44615fe3be7b52ce3f9d58d6"} Dec 10 15:46:52 crc kubenswrapper[4755]: I1210 15:46:52.563076 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Dec 10 15:46:52 crc kubenswrapper[4755]: I1210 15:46:52.566127 4755 generic.go:334] "Generic (PLEG): container finished" 
podID="1eafd33a-2808-4b35-b947-a7a60e905060" containerID="18416dbacce5aa6468f363c38fa0c79eaf3627ed5f08a7471c3a48ba24e063d4" exitCode=0 Dec 10 15:46:52 crc kubenswrapper[4755]: I1210 15:46:52.566166 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1eafd33a-2808-4b35-b947-a7a60e905060","Type":"ContainerDied","Data":"18416dbacce5aa6468f363c38fa0c79eaf3627ed5f08a7471c3a48ba24e063d4"} Dec 10 15:46:52 crc kubenswrapper[4755]: I1210 15:46:52.589974 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=3.589950413 podStartE2EDuration="3.589950413s" podCreationTimestamp="2025-12-10 15:46:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:46:52.578954794 +0000 UTC m=+1409.179838426" watchObservedRunningTime="2025-12-10 15:46:52.589950413 +0000 UTC m=+1409.190834045" Dec 10 15:46:52 crc kubenswrapper[4755]: I1210 15:46:52.744603 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 10 15:46:52 crc kubenswrapper[4755]: I1210 15:46:52.847149 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1eafd33a-2808-4b35-b947-a7a60e905060-config-data\") pod \"1eafd33a-2808-4b35-b947-a7a60e905060\" (UID: \"1eafd33a-2808-4b35-b947-a7a60e905060\") " Dec 10 15:46:52 crc kubenswrapper[4755]: I1210 15:46:52.847287 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1eafd33a-2808-4b35-b947-a7a60e905060-run-httpd\") pod \"1eafd33a-2808-4b35-b947-a7a60e905060\" (UID: \"1eafd33a-2808-4b35-b947-a7a60e905060\") " Dec 10 15:46:52 crc kubenswrapper[4755]: I1210 15:46:52.847324 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1eafd33a-2808-4b35-b947-a7a60e905060-sg-core-conf-yaml\") pod \"1eafd33a-2808-4b35-b947-a7a60e905060\" (UID: \"1eafd33a-2808-4b35-b947-a7a60e905060\") " Dec 10 15:46:52 crc kubenswrapper[4755]: I1210 15:46:52.847421 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cq28s\" (UniqueName: \"kubernetes.io/projected/1eafd33a-2808-4b35-b947-a7a60e905060-kube-api-access-cq28s\") pod \"1eafd33a-2808-4b35-b947-a7a60e905060\" (UID: \"1eafd33a-2808-4b35-b947-a7a60e905060\") " Dec 10 15:46:52 crc kubenswrapper[4755]: I1210 15:46:52.847445 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1eafd33a-2808-4b35-b947-a7a60e905060-scripts\") pod \"1eafd33a-2808-4b35-b947-a7a60e905060\" (UID: \"1eafd33a-2808-4b35-b947-a7a60e905060\") " Dec 10 15:46:52 crc kubenswrapper[4755]: I1210 15:46:52.847558 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1eafd33a-2808-4b35-b947-a7a60e905060-log-httpd\") pod \"1eafd33a-2808-4b35-b947-a7a60e905060\" (UID: \"1eafd33a-2808-4b35-b947-a7a60e905060\") " Dec 10 15:46:52 crc kubenswrapper[4755]: I1210 15:46:52.847583 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1eafd33a-2808-4b35-b947-a7a60e905060-combined-ca-bundle\") pod 
\"1eafd33a-2808-4b35-b947-a7a60e905060\" (UID: \"1eafd33a-2808-4b35-b947-a7a60e905060\") " Dec 10 15:46:52 crc kubenswrapper[4755]: I1210 15:46:52.848039 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1eafd33a-2808-4b35-b947-a7a60e905060-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "1eafd33a-2808-4b35-b947-a7a60e905060" (UID: "1eafd33a-2808-4b35-b947-a7a60e905060"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:46:52 crc kubenswrapper[4755]: I1210 15:46:52.848062 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1eafd33a-2808-4b35-b947-a7a60e905060-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "1eafd33a-2808-4b35-b947-a7a60e905060" (UID: "1eafd33a-2808-4b35-b947-a7a60e905060"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:46:52 crc kubenswrapper[4755]: I1210 15:46:52.856553 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1eafd33a-2808-4b35-b947-a7a60e905060-scripts" (OuterVolumeSpecName: "scripts") pod "1eafd33a-2808-4b35-b947-a7a60e905060" (UID: "1eafd33a-2808-4b35-b947-a7a60e905060"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:46:52 crc kubenswrapper[4755]: I1210 15:46:52.861284 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1eafd33a-2808-4b35-b947-a7a60e905060-kube-api-access-cq28s" (OuterVolumeSpecName: "kube-api-access-cq28s") pod "1eafd33a-2808-4b35-b947-a7a60e905060" (UID: "1eafd33a-2808-4b35-b947-a7a60e905060"). InnerVolumeSpecName "kube-api-access-cq28s". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:46:52 crc kubenswrapper[4755]: I1210 15:46:52.893560 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1eafd33a-2808-4b35-b947-a7a60e905060-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "1eafd33a-2808-4b35-b947-a7a60e905060" (UID: "1eafd33a-2808-4b35-b947-a7a60e905060"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:46:52 crc kubenswrapper[4755]: I1210 15:46:52.945684 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1eafd33a-2808-4b35-b947-a7a60e905060-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1eafd33a-2808-4b35-b947-a7a60e905060" (UID: "1eafd33a-2808-4b35-b947-a7a60e905060"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:46:52 crc kubenswrapper[4755]: I1210 15:46:52.952898 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cq28s\" (UniqueName: \"kubernetes.io/projected/1eafd33a-2808-4b35-b947-a7a60e905060-kube-api-access-cq28s\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:52 crc kubenswrapper[4755]: I1210 15:46:52.952937 4755 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1eafd33a-2808-4b35-b947-a7a60e905060-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:52 crc kubenswrapper[4755]: I1210 15:46:52.952947 4755 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1eafd33a-2808-4b35-b947-a7a60e905060-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:52 crc kubenswrapper[4755]: I1210 15:46:52.952959 4755 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1eafd33a-2808-4b35-b947-a7a60e905060-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:52 crc kubenswrapper[4755]: I1210 15:46:52.952969 4755 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1eafd33a-2808-4b35-b947-a7a60e905060-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:52 crc kubenswrapper[4755]: I1210 15:46:52.952979 4755 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1eafd33a-2808-4b35-b947-a7a60e905060-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:52 crc kubenswrapper[4755]: I1210 15:46:52.995624 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1eafd33a-2808-4b35-b947-a7a60e905060-config-data" (OuterVolumeSpecName: "config-data") pod "1eafd33a-2808-4b35-b947-a7a60e905060" (UID: "1eafd33a-2808-4b35-b947-a7a60e905060"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:46:53 crc kubenswrapper[4755]: I1210 15:46:53.036276 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 10 15:46:53 crc kubenswrapper[4755]: I1210 15:46:53.036333 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 10 15:46:53 crc kubenswrapper[4755]: I1210 15:46:53.055287 4755 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1eafd33a-2808-4b35-b947-a7a60e905060-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:53 crc kubenswrapper[4755]: I1210 15:46:53.567724 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="04cde7cd-7f43-4870-a4fc-c8bdb0b4188a" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.217:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 10 15:46:53 crc kubenswrapper[4755]: I1210 15:46:53.567775 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="04cde7cd-7f43-4870-a4fc-c8bdb0b4188a" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.217:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 10 15:46:53 crc kubenswrapper[4755]: I1210 15:46:53.580730 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1eafd33a-2808-4b35-b947-a7a60e905060","Type":"ContainerDied","Data":"cd3fa9fb4f5730b8ee92866126a8ed81845a7b19b8992447b62eca4166fa1f22"} Dec 10 15:46:53 crc kubenswrapper[4755]: I1210 15:46:53.580802 4755 scope.go:117] "RemoveContainer" containerID="19606ce160e6b91b398b11d4357f9766b677c5c385b73e29c58c919fe5b0c455" Dec 10 15:46:53 crc kubenswrapper[4755]: I1210 15:46:53.580756 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 10 15:46:53 crc kubenswrapper[4755]: I1210 15:46:53.607160 4755 scope.go:117] "RemoveContainer" containerID="41e0df8f17e958fc25020aaf5d78783fb92230f74c32e6ee83a255045e7dd8cb" Dec 10 15:46:53 crc kubenswrapper[4755]: I1210 15:46:53.623262 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 10 15:46:53 crc kubenswrapper[4755]: I1210 15:46:53.640653 4755 scope.go:117] "RemoveContainer" containerID="18416dbacce5aa6468f363c38fa0c79eaf3627ed5f08a7471c3a48ba24e063d4" Dec 10 15:46:53 crc kubenswrapper[4755]: I1210 15:46:53.646319 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 10 15:46:53 crc kubenswrapper[4755]: I1210 15:46:53.664691 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 10 15:46:53 crc kubenswrapper[4755]: E1210 15:46:53.665207 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1eafd33a-2808-4b35-b947-a7a60e905060" containerName="ceilometer-notification-agent" Dec 10 15:46:53 crc kubenswrapper[4755]: I1210 15:46:53.665230 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="1eafd33a-2808-4b35-b947-a7a60e905060" containerName="ceilometer-notification-agent" Dec 10 15:46:53 crc kubenswrapper[4755]: E1210 15:46:53.665254 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1eafd33a-2808-4b35-b947-a7a60e905060" containerName="sg-core" Dec 10 15:46:53 crc kubenswrapper[4755]: I1210 15:46:53.665263 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="1eafd33a-2808-4b35-b947-a7a60e905060" containerName="sg-core" Dec 10 15:46:53 crc kubenswrapper[4755]: E1210 15:46:53.665294 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1eafd33a-2808-4b35-b947-a7a60e905060" containerName="proxy-httpd" Dec 10 15:46:53 crc kubenswrapper[4755]: I1210 15:46:53.665303 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="1eafd33a-2808-4b35-b947-a7a60e905060" containerName="proxy-httpd" Dec 10 15:46:53 crc kubenswrapper[4755]: E1210 15:46:53.665329 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1eafd33a-2808-4b35-b947-a7a60e905060" containerName="ceilometer-central-agent" Dec 10 15:46:53 crc kubenswrapper[4755]: I1210 15:46:53.665338 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="1eafd33a-2808-4b35-b947-a7a60e905060" containerName="ceilometer-central-agent" Dec 10 15:46:53 crc kubenswrapper[4755]: I1210 15:46:53.665591 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="1eafd33a-2808-4b35-b947-a7a60e905060" containerName="proxy-httpd" Dec 10 15:46:53 crc kubenswrapper[4755]: I1210 15:46:53.665622 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="1eafd33a-2808-4b35-b947-a7a60e905060" containerName="ceilometer-notification-agent" Dec 10 15:46:53 crc kubenswrapper[4755]: I1210 15:46:53.665637 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="1eafd33a-2808-4b35-b947-a7a60e905060" containerName="sg-core" Dec 10 15:46:53 crc kubenswrapper[4755]: I1210 15:46:53.665649 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="1eafd33a-2808-4b35-b947-a7a60e905060" containerName="ceilometer-central-agent" Dec 10 15:46:53 crc kubenswrapper[4755]: I1210 15:46:53.672254 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 10 15:46:53 crc kubenswrapper[4755]: I1210 15:46:53.677015 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 10 15:46:53 crc kubenswrapper[4755]: I1210 15:46:53.677326 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 10 15:46:53 crc kubenswrapper[4755]: I1210 15:46:53.677500 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 10 15:46:53 crc kubenswrapper[4755]: I1210 15:46:53.699438 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 10 15:46:53 crc kubenswrapper[4755]: I1210 15:46:53.742727 4755 scope.go:117] "RemoveContainer" containerID="96b996c0e138ca9310c156598483c370545afbf4a92c9cc09db0447edc81faac" Dec 10 15:46:53 crc kubenswrapper[4755]: I1210 15:46:53.770808 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e8e5f041-243a-466d-9d50-6657195db032-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e8e5f041-243a-466d-9d50-6657195db032\") " pod="openstack/ceilometer-0" Dec 10 15:46:53 crc kubenswrapper[4755]: I1210 15:46:53.770886 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8e5f041-243a-466d-9d50-6657195db032-config-data\") pod \"ceilometer-0\" (UID: \"e8e5f041-243a-466d-9d50-6657195db032\") " pod="openstack/ceilometer-0" Dec 10 15:46:53 crc kubenswrapper[4755]: I1210 15:46:53.771126 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8e5f041-243a-466d-9d50-6657195db032-scripts\") pod \"ceilometer-0\" (UID: \"e8e5f041-243a-466d-9d50-6657195db032\") " pod="openstack/ceilometer-0" Dec 10 15:46:53 crc kubenswrapper[4755]: I1210 15:46:53.771219 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e8e5f041-243a-466d-9d50-6657195db032-run-httpd\") pod \"ceilometer-0\" (UID: \"e8e5f041-243a-466d-9d50-6657195db032\") " pod="openstack/ceilometer-0" Dec 10 15:46:53 crc kubenswrapper[4755]: I1210 15:46:53.771254 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8e5f041-243a-466d-9d50-6657195db032-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e8e5f041-243a-466d-9d50-6657195db032\") " pod="openstack/ceilometer-0" Dec 10 15:46:53 crc kubenswrapper[4755]: I1210 15:46:53.771342 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e8e5f041-243a-466d-9d50-6657195db032-log-httpd\") pod \"ceilometer-0\" (UID: \"e8e5f041-243a-466d-9d50-6657195db032\") " pod="openstack/ceilometer-0" Dec 10 15:46:53 crc kubenswrapper[4755]: I1210 15:46:53.771427 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8e5f041-243a-466d-9d50-6657195db032-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e8e5f041-243a-466d-9d50-6657195db032\") " pod="openstack/ceilometer-0" Dec 10 15:46:53 crc kubenswrapper[4755]: I1210 15:46:53.771658 4755 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smcvc\" (UniqueName: \"kubernetes.io/projected/e8e5f041-243a-466d-9d50-6657195db032-kube-api-access-smcvc\") pod \"ceilometer-0\" (UID: \"e8e5f041-243a-466d-9d50-6657195db032\") " pod="openstack/ceilometer-0" Dec 10 15:46:53 crc kubenswrapper[4755]: I1210 15:46:53.778965 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1eafd33a-2808-4b35-b947-a7a60e905060" path="/var/lib/kubelet/pods/1eafd33a-2808-4b35-b947-a7a60e905060/volumes" Dec 10 15:46:53 crc kubenswrapper[4755]: I1210 15:46:53.850050 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 10 15:46:53 crc kubenswrapper[4755]: I1210 15:46:53.873769 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e8e5f041-243a-466d-9d50-6657195db032-run-httpd\") pod \"ceilometer-0\" (UID: \"e8e5f041-243a-466d-9d50-6657195db032\") " pod="openstack/ceilometer-0" Dec 10 15:46:53 crc kubenswrapper[4755]: I1210 15:46:53.873825 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8e5f041-243a-466d-9d50-6657195db032-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e8e5f041-243a-466d-9d50-6657195db032\") " pod="openstack/ceilometer-0" Dec 10 15:46:53 crc kubenswrapper[4755]: I1210 15:46:53.873882 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e8e5f041-243a-466d-9d50-6657195db032-log-httpd\") pod \"ceilometer-0\" (UID: \"e8e5f041-243a-466d-9d50-6657195db032\") " pod="openstack/ceilometer-0" Dec 10 15:46:53 crc kubenswrapper[4755]: I1210 15:46:53.873924 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8e5f041-243a-466d-9d50-6657195db032-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e8e5f041-243a-466d-9d50-6657195db032\") " pod="openstack/ceilometer-0" Dec 10 15:46:53 crc kubenswrapper[4755]: I1210 15:46:53.874026 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smcvc\" (UniqueName: \"kubernetes.io/projected/e8e5f041-243a-466d-9d50-6657195db032-kube-api-access-smcvc\") pod \"ceilometer-0\" (UID: \"e8e5f041-243a-466d-9d50-6657195db032\") " pod="openstack/ceilometer-0" Dec 10 15:46:53 crc kubenswrapper[4755]: I1210 15:46:53.874146 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e8e5f041-243a-466d-9d50-6657195db032-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e8e5f041-243a-466d-9d50-6657195db032\") " pod="openstack/ceilometer-0" Dec 10 15:46:53 crc kubenswrapper[4755]: I1210 15:46:53.874188 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8e5f041-243a-466d-9d50-6657195db032-config-data\") pod \"ceilometer-0\" (UID: \"e8e5f041-243a-466d-9d50-6657195db032\") " pod="openstack/ceilometer-0" Dec 10 15:46:53 crc kubenswrapper[4755]: I1210 15:46:53.874211 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8e5f041-243a-466d-9d50-6657195db032-scripts\") pod \"ceilometer-0\" (UID: \"e8e5f041-243a-466d-9d50-6657195db032\") 
" pod="openstack/ceilometer-0" Dec 10 15:46:53 crc kubenswrapper[4755]: I1210 15:46:53.874345 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e8e5f041-243a-466d-9d50-6657195db032-run-httpd\") pod \"ceilometer-0\" (UID: \"e8e5f041-243a-466d-9d50-6657195db032\") " pod="openstack/ceilometer-0" Dec 10 15:46:53 crc kubenswrapper[4755]: I1210 15:46:53.874369 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e8e5f041-243a-466d-9d50-6657195db032-log-httpd\") pod \"ceilometer-0\" (UID: \"e8e5f041-243a-466d-9d50-6657195db032\") " pod="openstack/ceilometer-0" Dec 10 15:46:53 crc kubenswrapper[4755]: I1210 15:46:53.886261 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8e5f041-243a-466d-9d50-6657195db032-scripts\") pod \"ceilometer-0\" (UID: \"e8e5f041-243a-466d-9d50-6657195db032\") " pod="openstack/ceilometer-0" Dec 10 15:46:53 crc kubenswrapper[4755]: I1210 15:46:53.887711 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8e5f041-243a-466d-9d50-6657195db032-config-data\") pod \"ceilometer-0\" (UID: \"e8e5f041-243a-466d-9d50-6657195db032\") " pod="openstack/ceilometer-0" Dec 10 15:46:53 crc kubenswrapper[4755]: I1210 15:46:53.889300 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8e5f041-243a-466d-9d50-6657195db032-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e8e5f041-243a-466d-9d50-6657195db032\") " pod="openstack/ceilometer-0" Dec 10 15:46:53 crc kubenswrapper[4755]: I1210 15:46:53.891918 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8e5f041-243a-466d-9d50-6657195db032-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e8e5f041-243a-466d-9d50-6657195db032\") " pod="openstack/ceilometer-0" Dec 10 15:46:53 crc kubenswrapper[4755]: I1210 15:46:53.895012 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e8e5f041-243a-466d-9d50-6657195db032-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e8e5f041-243a-466d-9d50-6657195db032\") " pod="openstack/ceilometer-0" Dec 10 15:46:53 crc kubenswrapper[4755]: I1210 15:46:53.897237 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 10 15:46:53 crc kubenswrapper[4755]: I1210 15:46:53.912818 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smcvc\" (UniqueName: \"kubernetes.io/projected/e8e5f041-243a-466d-9d50-6657195db032-kube-api-access-smcvc\") pod \"ceilometer-0\" (UID: \"e8e5f041-243a-466d-9d50-6657195db032\") " pod="openstack/ceilometer-0" Dec 10 15:46:54 crc kubenswrapper[4755]: I1210 15:46:54.045016 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 10 15:46:54 crc kubenswrapper[4755]: I1210 15:46:54.117655 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7129b800-f170-4cbf-9b18-26d01b1cdd26" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.218:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 10 15:46:54 crc kubenswrapper[4755]: I1210 15:46:54.117789 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7129b800-f170-4cbf-9b18-26d01b1cdd26" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.218:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 10 15:46:54 crc kubenswrapper[4755]: I1210 15:46:54.647043 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 10 15:46:54 crc kubenswrapper[4755]: W1210 15:46:54.741996 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode8e5f041_243a_466d_9d50_6657195db032.slice/crio-15e38929c2b49f2b72a47595f106277799eac2adf0fc6054a720577439e76801 WatchSource:0}: Error finding container 15e38929c2b49f2b72a47595f106277799eac2adf0fc6054a720577439e76801: Status 404 returned error can't find the container with id 15e38929c2b49f2b72a47595f106277799eac2adf0fc6054a720577439e76801 Dec 10 15:46:54 crc kubenswrapper[4755]: I1210 15:46:54.754049 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 10 15:46:55 crc kubenswrapper[4755]: I1210 15:46:55.621590 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e8e5f041-243a-466d-9d50-6657195db032","Type":"ContainerStarted","Data":"15e38929c2b49f2b72a47595f106277799eac2adf0fc6054a720577439e76801"} Dec 10 15:46:56 crc kubenswrapper[4755]: I1210 15:46:56.633455 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e8e5f041-243a-466d-9d50-6657195db032","Type":"ContainerStarted","Data":"b6fbcba8750747d0d6c7ce84e3ec8b01b7f2bbcac0ca7a9ca1e08d8b7a98f725"} Dec 10 15:46:57 crc kubenswrapper[4755]: I1210 15:46:57.611122 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wxn4w" Dec 10 15:46:57 crc kubenswrapper[4755]: I1210 15:46:57.688993 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wxn4w" Dec 10 15:46:57 crc kubenswrapper[4755]: I1210 15:46:57.821074 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Dec 10 15:46:58 crc kubenswrapper[4755]: I1210 15:46:58.303114 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:46:58 crc kubenswrapper[4755]: I1210 15:46:58.381128 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n5qgk\" (UniqueName: \"kubernetes.io/projected/14f7a209-c2fe-4945-9793-a8e4fd08083d-kube-api-access-n5qgk\") pod \"14f7a209-c2fe-4945-9793-a8e4fd08083d\" (UID: \"14f7a209-c2fe-4945-9793-a8e4fd08083d\") " Dec 10 15:46:58 crc kubenswrapper[4755]: I1210 15:46:58.381376 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14f7a209-c2fe-4945-9793-a8e4fd08083d-config-data\") pod \"14f7a209-c2fe-4945-9793-a8e4fd08083d\" (UID: \"14f7a209-c2fe-4945-9793-a8e4fd08083d\") " Dec 10 15:46:58 crc kubenswrapper[4755]: I1210 15:46:58.381405 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14f7a209-c2fe-4945-9793-a8e4fd08083d-combined-ca-bundle\") pod \"14f7a209-c2fe-4945-9793-a8e4fd08083d\" (UID: \"14f7a209-c2fe-4945-9793-a8e4fd08083d\") " Dec 10 15:46:58 crc kubenswrapper[4755]: I1210 15:46:58.388762 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14f7a209-c2fe-4945-9793-a8e4fd08083d-kube-api-access-n5qgk" (OuterVolumeSpecName: "kube-api-access-n5qgk") pod "14f7a209-c2fe-4945-9793-a8e4fd08083d" (UID: "14f7a209-c2fe-4945-9793-a8e4fd08083d"). InnerVolumeSpecName "kube-api-access-n5qgk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:46:58 crc kubenswrapper[4755]: I1210 15:46:58.418278 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14f7a209-c2fe-4945-9793-a8e4fd08083d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "14f7a209-c2fe-4945-9793-a8e4fd08083d" (UID: "14f7a209-c2fe-4945-9793-a8e4fd08083d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:46:58 crc kubenswrapper[4755]: I1210 15:46:58.434791 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14f7a209-c2fe-4945-9793-a8e4fd08083d-config-data" (OuterVolumeSpecName: "config-data") pod "14f7a209-c2fe-4945-9793-a8e4fd08083d" (UID: "14f7a209-c2fe-4945-9793-a8e4fd08083d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:46:58 crc kubenswrapper[4755]: I1210 15:46:58.438499 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wxn4w"] Dec 10 15:46:58 crc kubenswrapper[4755]: I1210 15:46:58.485101 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n5qgk\" (UniqueName: \"kubernetes.io/projected/14f7a209-c2fe-4945-9793-a8e4fd08083d-kube-api-access-n5qgk\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:58 crc kubenswrapper[4755]: I1210 15:46:58.485144 4755 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14f7a209-c2fe-4945-9793-a8e4fd08083d-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:58 crc kubenswrapper[4755]: I1210 15:46:58.485157 4755 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14f7a209-c2fe-4945-9793-a8e4fd08083d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:58 crc kubenswrapper[4755]: I1210 15:46:58.558813 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s8l8s"] Dec 10 15:46:58 crc kubenswrapper[4755]: I1210 15:46:58.559371 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-s8l8s" podUID="affd9511-69f4-4147-8df8-14faa94916ee" containerName="registry-server" containerID="cri-o://4e1b4bdef38242eab70220e1997e518887075ec424dcb6b7454db42f0d42b99e" gracePeriod=2 Dec 10 15:46:58 crc kubenswrapper[4755]: I1210 15:46:58.676774 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e8e5f041-243a-466d-9d50-6657195db032","Type":"ContainerStarted","Data":"d77acec9fa04e8529215a937db8cca68097d45b683fc3af1cb46decce9caf804"} Dec 10 15:46:58 crc kubenswrapper[4755]: I1210 15:46:58.679214 4755 generic.go:334] "Generic (PLEG): container finished" podID="14f7a209-c2fe-4945-9793-a8e4fd08083d" containerID="aa5021f8698e43a56b0cfc0b6fcdc2691145563b34c15d8345d9ed779a19f060" exitCode=137 Dec 10 15:46:58 crc kubenswrapper[4755]: I1210 15:46:58.679525 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:46:58 crc kubenswrapper[4755]: I1210 15:46:58.679532 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"14f7a209-c2fe-4945-9793-a8e4fd08083d","Type":"ContainerDied","Data":"aa5021f8698e43a56b0cfc0b6fcdc2691145563b34c15d8345d9ed779a19f060"} Dec 10 15:46:58 crc kubenswrapper[4755]: I1210 15:46:58.679595 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"14f7a209-c2fe-4945-9793-a8e4fd08083d","Type":"ContainerDied","Data":"60d5215934b7e8424501c321b3add65e4d684a454bc6986017aea2368c834218"} Dec 10 15:46:58 crc kubenswrapper[4755]: I1210 15:46:58.679616 4755 scope.go:117] "RemoveContainer" containerID="aa5021f8698e43a56b0cfc0b6fcdc2691145563b34c15d8345d9ed779a19f060" Dec 10 15:46:58 crc kubenswrapper[4755]: I1210 15:46:58.729176 4755 scope.go:117] "RemoveContainer" containerID="aa5021f8698e43a56b0cfc0b6fcdc2691145563b34c15d8345d9ed779a19f060" Dec 10 15:46:58 crc kubenswrapper[4755]: E1210 15:46:58.736657 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa5021f8698e43a56b0cfc0b6fcdc2691145563b34c15d8345d9ed779a19f060\": container with ID starting with aa5021f8698e43a56b0cfc0b6fcdc2691145563b34c15d8345d9ed779a19f060 not found: ID does not exist" containerID="aa5021f8698e43a56b0cfc0b6fcdc2691145563b34c15d8345d9ed779a19f060" Dec 10 15:46:58 crc kubenswrapper[4755]: I1210 15:46:58.736727 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa5021f8698e43a56b0cfc0b6fcdc2691145563b34c15d8345d9ed779a19f060"} err="failed to get container status \"aa5021f8698e43a56b0cfc0b6fcdc2691145563b34c15d8345d9ed779a19f060\": rpc error: code = NotFound desc = could not find container \"aa5021f8698e43a56b0cfc0b6fcdc2691145563b34c15d8345d9ed779a19f060\": container with ID starting with aa5021f8698e43a56b0cfc0b6fcdc2691145563b34c15d8345d9ed779a19f060 not found: ID does not exist" Dec 10 15:46:58 crc kubenswrapper[4755]: I1210 15:46:58.786826 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 10 15:46:58 crc kubenswrapper[4755]: I1210 15:46:58.807211 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 10 15:46:58 crc kubenswrapper[4755]: I1210 15:46:58.826606 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 10 15:46:58 crc kubenswrapper[4755]: E1210 15:46:58.827093 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14f7a209-c2fe-4945-9793-a8e4fd08083d" containerName="nova-cell1-novncproxy-novncproxy" Dec 10 15:46:58 crc kubenswrapper[4755]: I1210 15:46:58.827117 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="14f7a209-c2fe-4945-9793-a8e4fd08083d" containerName="nova-cell1-novncproxy-novncproxy" Dec 10 15:46:58 crc kubenswrapper[4755]: I1210 15:46:58.827417 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="14f7a209-c2fe-4945-9793-a8e4fd08083d" containerName="nova-cell1-novncproxy-novncproxy" Dec 10 15:46:58 crc kubenswrapper[4755]: I1210 15:46:58.828757 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:46:58 crc kubenswrapper[4755]: I1210 15:46:58.833481 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Dec 10 15:46:58 crc kubenswrapper[4755]: I1210 15:46:58.833820 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Dec 10 15:46:58 crc kubenswrapper[4755]: I1210 15:46:58.833987 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Dec 10 15:46:58 crc kubenswrapper[4755]: I1210 15:46:58.857533 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 10 15:46:58 crc kubenswrapper[4755]: I1210 15:46:58.912268 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfeba145-c5bb-4035-93b6-ce1f9ce9c68e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"bfeba145-c5bb-4035-93b6-ce1f9ce9c68e\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:46:58 crc kubenswrapper[4755]: I1210 15:46:58.912329 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfeba145-c5bb-4035-93b6-ce1f9ce9c68e-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"bfeba145-c5bb-4035-93b6-ce1f9ce9c68e\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:46:58 crc kubenswrapper[4755]: I1210 15:46:58.912512 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfeba145-c5bb-4035-93b6-ce1f9ce9c68e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"bfeba145-c5bb-4035-93b6-ce1f9ce9c68e\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:46:58 crc kubenswrapper[4755]: I1210 15:46:58.912837 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfeba145-c5bb-4035-93b6-ce1f9ce9c68e-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"bfeba145-c5bb-4035-93b6-ce1f9ce9c68e\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:46:58 crc kubenswrapper[4755]: I1210 15:46:58.913072 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7pz2\" (UniqueName: \"kubernetes.io/projected/bfeba145-c5bb-4035-93b6-ce1f9ce9c68e-kube-api-access-z7pz2\") pod \"nova-cell1-novncproxy-0\" (UID: \"bfeba145-c5bb-4035-93b6-ce1f9ce9c68e\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:46:59 crc kubenswrapper[4755]: I1210 15:46:59.014545 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfeba145-c5bb-4035-93b6-ce1f9ce9c68e-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"bfeba145-c5bb-4035-93b6-ce1f9ce9c68e\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:46:59 crc kubenswrapper[4755]: I1210 15:46:59.014672 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7pz2\" (UniqueName: \"kubernetes.io/projected/bfeba145-c5bb-4035-93b6-ce1f9ce9c68e-kube-api-access-z7pz2\") pod \"nova-cell1-novncproxy-0\" (UID: \"bfeba145-c5bb-4035-93b6-ce1f9ce9c68e\") " 
pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:46:59 crc kubenswrapper[4755]: I1210 15:46:59.014726 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfeba145-c5bb-4035-93b6-ce1f9ce9c68e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"bfeba145-c5bb-4035-93b6-ce1f9ce9c68e\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:46:59 crc kubenswrapper[4755]: I1210 15:46:59.014760 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfeba145-c5bb-4035-93b6-ce1f9ce9c68e-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"bfeba145-c5bb-4035-93b6-ce1f9ce9c68e\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:46:59 crc kubenswrapper[4755]: I1210 15:46:59.014811 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfeba145-c5bb-4035-93b6-ce1f9ce9c68e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"bfeba145-c5bb-4035-93b6-ce1f9ce9c68e\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:46:59 crc kubenswrapper[4755]: I1210 15:46:59.019694 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfeba145-c5bb-4035-93b6-ce1f9ce9c68e-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"bfeba145-c5bb-4035-93b6-ce1f9ce9c68e\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:46:59 crc kubenswrapper[4755]: I1210 15:46:59.019976 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfeba145-c5bb-4035-93b6-ce1f9ce9c68e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"bfeba145-c5bb-4035-93b6-ce1f9ce9c68e\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:46:59 crc kubenswrapper[4755]: I1210 15:46:59.020019 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfeba145-c5bb-4035-93b6-ce1f9ce9c68e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"bfeba145-c5bb-4035-93b6-ce1f9ce9c68e\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:46:59 crc kubenswrapper[4755]: I1210 15:46:59.020343 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfeba145-c5bb-4035-93b6-ce1f9ce9c68e-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"bfeba145-c5bb-4035-93b6-ce1f9ce9c68e\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:46:59 crc kubenswrapper[4755]: I1210 15:46:59.035427 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7pz2\" (UniqueName: \"kubernetes.io/projected/bfeba145-c5bb-4035-93b6-ce1f9ce9c68e-kube-api-access-z7pz2\") pod \"nova-cell1-novncproxy-0\" (UID: \"bfeba145-c5bb-4035-93b6-ce1f9ce9c68e\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:46:59 crc kubenswrapper[4755]: I1210 15:46:59.189164 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:46:59 crc kubenswrapper[4755]: I1210 15:46:59.718640 4755 generic.go:334] "Generic (PLEG): container finished" podID="affd9511-69f4-4147-8df8-14faa94916ee" containerID="4e1b4bdef38242eab70220e1997e518887075ec424dcb6b7454db42f0d42b99e" exitCode=0 Dec 10 15:46:59 crc kubenswrapper[4755]: I1210 15:46:59.718875 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s8l8s" event={"ID":"affd9511-69f4-4147-8df8-14faa94916ee","Type":"ContainerDied","Data":"4e1b4bdef38242eab70220e1997e518887075ec424dcb6b7454db42f0d42b99e"} Dec 10 15:46:59 crc kubenswrapper[4755]: I1210 15:46:59.719031 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s8l8s" event={"ID":"affd9511-69f4-4147-8df8-14faa94916ee","Type":"ContainerDied","Data":"39f74a75026631fc33d198c7eb874cbb26f94622c22cfe8de8c62b8ce2116bd4"} Dec 10 15:46:59 crc kubenswrapper[4755]: I1210 15:46:59.719048 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39f74a75026631fc33d198c7eb874cbb26f94622c22cfe8de8c62b8ce2116bd4" Dec 10 15:46:59 crc kubenswrapper[4755]: I1210 15:46:59.737811 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e8e5f041-243a-466d-9d50-6657195db032","Type":"ContainerStarted","Data":"169039bf409833291a0aa6441ba76456a5d9972ddcdf2a7c0ac736d7527c531b"} Dec 10 15:46:59 crc kubenswrapper[4755]: I1210 15:46:59.743057 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s8l8s" Dec 10 15:46:59 crc kubenswrapper[4755]: I1210 15:46:59.782001 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14f7a209-c2fe-4945-9793-a8e4fd08083d" path="/var/lib/kubelet/pods/14f7a209-c2fe-4945-9793-a8e4fd08083d/volumes" Dec 10 15:46:59 crc kubenswrapper[4755]: I1210 15:46:59.833016 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/affd9511-69f4-4147-8df8-14faa94916ee-catalog-content\") pod \"affd9511-69f4-4147-8df8-14faa94916ee\" (UID: \"affd9511-69f4-4147-8df8-14faa94916ee\") " Dec 10 15:46:59 crc kubenswrapper[4755]: I1210 15:46:59.833176 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8qhg\" (UniqueName: \"kubernetes.io/projected/affd9511-69f4-4147-8df8-14faa94916ee-kube-api-access-m8qhg\") pod \"affd9511-69f4-4147-8df8-14faa94916ee\" (UID: \"affd9511-69f4-4147-8df8-14faa94916ee\") " Dec 10 15:46:59 crc kubenswrapper[4755]: I1210 15:46:59.833285 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/affd9511-69f4-4147-8df8-14faa94916ee-utilities\") pod \"affd9511-69f4-4147-8df8-14faa94916ee\" (UID: \"affd9511-69f4-4147-8df8-14faa94916ee\") " Dec 10 15:46:59 crc kubenswrapper[4755]: I1210 15:46:59.836042 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/affd9511-69f4-4147-8df8-14faa94916ee-utilities" (OuterVolumeSpecName: "utilities") pod "affd9511-69f4-4147-8df8-14faa94916ee" (UID: "affd9511-69f4-4147-8df8-14faa94916ee"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:46:59 crc kubenswrapper[4755]: I1210 15:46:59.843138 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/affd9511-69f4-4147-8df8-14faa94916ee-kube-api-access-m8qhg" (OuterVolumeSpecName: "kube-api-access-m8qhg") pod "affd9511-69f4-4147-8df8-14faa94916ee" (UID: "affd9511-69f4-4147-8df8-14faa94916ee"). InnerVolumeSpecName "kube-api-access-m8qhg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:46:59 crc kubenswrapper[4755]: I1210 15:46:59.849227 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 10 15:46:59 crc kubenswrapper[4755]: W1210 15:46:59.872959 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbfeba145_c5bb_4035_93b6_ce1f9ce9c68e.slice/crio-85e27cf677ca5f6b91f70bec0b677c63097b60f93788afae7c20757d1f72f86b WatchSource:0}: Error finding container 85e27cf677ca5f6b91f70bec0b677c63097b60f93788afae7c20757d1f72f86b: Status 404 returned error can't find the container with id 85e27cf677ca5f6b91f70bec0b677c63097b60f93788afae7c20757d1f72f86b Dec 10 15:46:59 crc kubenswrapper[4755]: I1210 15:46:59.893595 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Dec 10 15:46:59 crc kubenswrapper[4755]: I1210 15:46:59.938392 4755 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/affd9511-69f4-4147-8df8-14faa94916ee-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 15:46:59 crc kubenswrapper[4755]: I1210 15:46:59.938437 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8qhg\" (UniqueName: \"kubernetes.io/projected/affd9511-69f4-4147-8df8-14faa94916ee-kube-api-access-m8qhg\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:00 crc kubenswrapper[4755]: I1210 15:47:00.034761 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/affd9511-69f4-4147-8df8-14faa94916ee-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "affd9511-69f4-4147-8df8-14faa94916ee" (UID: "affd9511-69f4-4147-8df8-14faa94916ee"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:47:00 crc kubenswrapper[4755]: I1210 15:47:00.040343 4755 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/affd9511-69f4-4147-8df8-14faa94916ee-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:00 crc kubenswrapper[4755]: I1210 15:47:00.758236 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"bfeba145-c5bb-4035-93b6-ce1f9ce9c68e","Type":"ContainerStarted","Data":"0d5968a2385440f46fa0f6d6a25747b746542fb0f334f4fbb8ed4f9d8b1cbcbf"} Dec 10 15:47:00 crc kubenswrapper[4755]: I1210 15:47:00.760700 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"bfeba145-c5bb-4035-93b6-ce1f9ce9c68e","Type":"ContainerStarted","Data":"85e27cf677ca5f6b91f70bec0b677c63097b60f93788afae7c20757d1f72f86b"} Dec 10 15:47:00 crc kubenswrapper[4755]: I1210 15:47:00.758279 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s8l8s" Dec 10 15:47:00 crc kubenswrapper[4755]: I1210 15:47:00.791164 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.791143307 podStartE2EDuration="2.791143307s" podCreationTimestamp="2025-12-10 15:46:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:47:00.783667745 +0000 UTC m=+1417.384551397" watchObservedRunningTime="2025-12-10 15:47:00.791143307 +0000 UTC m=+1417.392026939" Dec 10 15:47:00 crc kubenswrapper[4755]: I1210 15:47:00.804783 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s8l8s"] Dec 10 15:47:00 crc kubenswrapper[4755]: I1210 15:47:00.823498 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-s8l8s"] Dec 10 15:47:01 crc kubenswrapper[4755]: I1210 15:47:01.440878 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773r7kx7"] Dec 10 15:47:01 crc kubenswrapper[4755]: E1210 15:47:01.441749 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="affd9511-69f4-4147-8df8-14faa94916ee" containerName="extract-utilities" Dec 10 15:47:01 crc kubenswrapper[4755]: I1210 15:47:01.441825 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="affd9511-69f4-4147-8df8-14faa94916ee" containerName="extract-utilities" Dec 10 15:47:01 crc kubenswrapper[4755]: E1210 15:47:01.441919 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="affd9511-69f4-4147-8df8-14faa94916ee" containerName="registry-server" Dec 10 15:47:01 crc kubenswrapper[4755]: I1210 15:47:01.441992 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="affd9511-69f4-4147-8df8-14faa94916ee" containerName="registry-server" Dec 10 15:47:01 crc kubenswrapper[4755]: E1210 15:47:01.442080 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="affd9511-69f4-4147-8df8-14faa94916ee" containerName="extract-content" Dec 10 15:47:01 crc kubenswrapper[4755]: I1210 15:47:01.442143 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="affd9511-69f4-4147-8df8-14faa94916ee" containerName="extract-content" Dec 10 15:47:01 crc kubenswrapper[4755]: I1210 15:47:01.442427 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="affd9511-69f4-4147-8df8-14faa94916ee" containerName="registry-server" Dec 10 15:47:01 crc kubenswrapper[4755]: I1210 15:47:01.444256 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773r7kx7" Dec 10 15:47:01 crc kubenswrapper[4755]: I1210 15:47:01.447519 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 10 15:47:01 crc kubenswrapper[4755]: I1210 15:47:01.478043 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773r7kx7"] Dec 10 15:47:01 crc kubenswrapper[4755]: I1210 15:47:01.575903 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0713bf5f-e7a0-40b6-b1a9-001d4fb9d972-bundle\") pod \"3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773r7kx7\" (UID: \"0713bf5f-e7a0-40b6-b1a9-001d4fb9d972\") " pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773r7kx7" Dec 10 15:47:01 crc kubenswrapper[4755]: I1210 15:47:01.575977 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0713bf5f-e7a0-40b6-b1a9-001d4fb9d972-util\") pod \"3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773r7kx7\" (UID: \"0713bf5f-e7a0-40b6-b1a9-001d4fb9d972\") " pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773r7kx7" Dec 10 15:47:01 crc kubenswrapper[4755]: I1210 15:47:01.576040 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmh8x\" (UniqueName: \"kubernetes.io/projected/0713bf5f-e7a0-40b6-b1a9-001d4fb9d972-kube-api-access-pmh8x\") pod \"3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773r7kx7\" (UID: \"0713bf5f-e7a0-40b6-b1a9-001d4fb9d972\") " pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773r7kx7" Dec 10 15:47:01 crc kubenswrapper[4755]: I1210 15:47:01.678641 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0713bf5f-e7a0-40b6-b1a9-001d4fb9d972-bundle\") pod \"3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773r7kx7\" (UID: \"0713bf5f-e7a0-40b6-b1a9-001d4fb9d972\") " pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773r7kx7" Dec 10 15:47:01 crc kubenswrapper[4755]: I1210 15:47:01.678971 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0713bf5f-e7a0-40b6-b1a9-001d4fb9d972-util\") pod \"3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773r7kx7\" (UID: \"0713bf5f-e7a0-40b6-b1a9-001d4fb9d972\") " pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773r7kx7" Dec 10 15:47:01 crc kubenswrapper[4755]: I1210 15:47:01.679118 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmh8x\" (UniqueName: \"kubernetes.io/projected/0713bf5f-e7a0-40b6-b1a9-001d4fb9d972-kube-api-access-pmh8x\") pod \"3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773r7kx7\" (UID: \"0713bf5f-e7a0-40b6-b1a9-001d4fb9d972\") " pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773r7kx7" Dec 10 15:47:01 crc kubenswrapper[4755]: I1210 15:47:01.680184 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/0713bf5f-e7a0-40b6-b1a9-001d4fb9d972-bundle\") pod \"3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773r7kx7\" (UID: \"0713bf5f-e7a0-40b6-b1a9-001d4fb9d972\") " pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773r7kx7" Dec 10 15:47:01 crc kubenswrapper[4755]: I1210 15:47:01.680398 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0713bf5f-e7a0-40b6-b1a9-001d4fb9d972-util\") pod \"3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773r7kx7\" (UID: \"0713bf5f-e7a0-40b6-b1a9-001d4fb9d972\") " pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773r7kx7" Dec 10 15:47:01 crc kubenswrapper[4755]: I1210 15:47:01.798045 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="affd9511-69f4-4147-8df8-14faa94916ee" path="/var/lib/kubelet/pods/affd9511-69f4-4147-8df8-14faa94916ee/volumes" Dec 10 15:47:01 crc kubenswrapper[4755]: I1210 15:47:01.857115 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmh8x\" (UniqueName: \"kubernetes.io/projected/0713bf5f-e7a0-40b6-b1a9-001d4fb9d972-kube-api-access-pmh8x\") pod \"3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773r7kx7\" (UID: \"0713bf5f-e7a0-40b6-b1a9-001d4fb9d972\") " pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773r7kx7" Dec 10 15:47:02 crc kubenswrapper[4755]: I1210 15:47:02.092543 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773r7kx7" Dec 10 15:47:02 crc kubenswrapper[4755]: I1210 15:47:02.559534 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 10 15:47:02 crc kubenswrapper[4755]: I1210 15:47:02.560584 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 10 15:47:02 crc kubenswrapper[4755]: I1210 15:47:02.566751 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 10 15:47:02 crc kubenswrapper[4755]: I1210 15:47:02.664790 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773r7kx7"] Dec 10 15:47:02 crc kubenswrapper[4755]: I1210 15:47:02.799871 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773r7kx7" event={"ID":"0713bf5f-e7a0-40b6-b1a9-001d4fb9d972","Type":"ContainerStarted","Data":"0bf168f7bf3ea73676608b077f7fcc1561936a946566034b1483c1bf9592a880"} Dec 10 15:47:02 crc kubenswrapper[4755]: I1210 15:47:02.806696 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e8e5f041-243a-466d-9d50-6657195db032","Type":"ContainerStarted","Data":"41faea9892c4e8dfdb0657ce9bc6f8fd7e27e54d25e719a181a9d972425b0ba0"} Dec 10 15:47:02 crc kubenswrapper[4755]: I1210 15:47:02.806837 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 10 15:47:02 crc kubenswrapper[4755]: I1210 15:47:02.844399 4755 scope.go:117] "RemoveContainer" containerID="4e1b4bdef38242eab70220e1997e518887075ec424dcb6b7454db42f0d42b99e" Dec 10 15:47:02 crc kubenswrapper[4755]: I1210 15:47:02.845908 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ceilometer-0" podStartSLOduration=3.396386406 podStartE2EDuration="9.845892632s" podCreationTimestamp="2025-12-10 15:46:53 +0000 UTC" firstStartedPulling="2025-12-10 15:46:54.744177388 +0000 UTC m=+1411.345061010" lastFinishedPulling="2025-12-10 15:47:01.193683614 +0000 UTC m=+1417.794567236" observedRunningTime="2025-12-10 15:47:02.830130624 +0000 UTC m=+1419.431014256" watchObservedRunningTime="2025-12-10 15:47:02.845892632 +0000 UTC m=+1419.446776264" Dec 10 15:47:02 crc kubenswrapper[4755]: I1210 15:47:02.872663 4755 scope.go:117] "RemoveContainer" containerID="0f20e8193c0089cd24dd315c2e57e1bfbada0bdae0170d74e24a9411770b42fb" Dec 10 15:47:02 crc kubenswrapper[4755]: I1210 15:47:02.885178 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 10 15:47:02 crc kubenswrapper[4755]: I1210 15:47:02.916549 4755 scope.go:117] "RemoveContainer" containerID="b167cab8f7b86a5dacd8429cab1db8129e62416fd2cd6e722daf8395f1a1554b" Dec 10 15:47:03 crc kubenswrapper[4755]: I1210 15:47:03.044659 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 10 15:47:03 crc kubenswrapper[4755]: I1210 15:47:03.048155 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 10 15:47:03 crc kubenswrapper[4755]: I1210 15:47:03.049961 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 10 15:47:03 crc kubenswrapper[4755]: I1210 15:47:03.052149 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 10 15:47:03 crc kubenswrapper[4755]: I1210 15:47:03.845370 4755 generic.go:334] "Generic (PLEG): container finished" podID="0713bf5f-e7a0-40b6-b1a9-001d4fb9d972" containerID="0578760550e4239288214e235ec2310ebdf912f67f5f077844aee44499aa85ed" exitCode=0 Dec 10 15:47:03 crc kubenswrapper[4755]: I1210 15:47:03.846806 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773r7kx7" event={"ID":"0713bf5f-e7a0-40b6-b1a9-001d4fb9d972","Type":"ContainerDied","Data":"0578760550e4239288214e235ec2310ebdf912f67f5f077844aee44499aa85ed"} Dec 10 15:47:03 crc kubenswrapper[4755]: I1210 15:47:03.846847 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 10 15:47:03 crc kubenswrapper[4755]: I1210 15:47:03.859953 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 10 15:47:04 crc kubenswrapper[4755]: I1210 15:47:04.026255 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-54dd998c-lc5nf"] Dec 10 15:47:04 crc kubenswrapper[4755]: I1210 15:47:04.028196 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54dd998c-lc5nf" Dec 10 15:47:04 crc kubenswrapper[4755]: I1210 15:47:04.055451 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54dd998c-lc5nf"] Dec 10 15:47:04 crc kubenswrapper[4755]: I1210 15:47:04.152636 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ab3e3d49-8054-4653-a857-45138337b2f7-ovsdbserver-sb\") pod \"dnsmasq-dns-54dd998c-lc5nf\" (UID: \"ab3e3d49-8054-4653-a857-45138337b2f7\") " pod="openstack/dnsmasq-dns-54dd998c-lc5nf" Dec 10 15:47:04 crc kubenswrapper[4755]: I1210 15:47:04.152720 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ab3e3d49-8054-4653-a857-45138337b2f7-dns-swift-storage-0\") pod \"dnsmasq-dns-54dd998c-lc5nf\" (UID: \"ab3e3d49-8054-4653-a857-45138337b2f7\") " pod="openstack/dnsmasq-dns-54dd998c-lc5nf" Dec 10 15:47:04 crc kubenswrapper[4755]: I1210 15:47:04.152737 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ab3e3d49-8054-4653-a857-45138337b2f7-ovsdbserver-nb\") pod \"dnsmasq-dns-54dd998c-lc5nf\" (UID: \"ab3e3d49-8054-4653-a857-45138337b2f7\") " pod="openstack/dnsmasq-dns-54dd998c-lc5nf" Dec 10 15:47:04 crc kubenswrapper[4755]: I1210 15:47:04.152766 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab3e3d49-8054-4653-a857-45138337b2f7-config\") pod \"dnsmasq-dns-54dd998c-lc5nf\" (UID: \"ab3e3d49-8054-4653-a857-45138337b2f7\") " pod="openstack/dnsmasq-dns-54dd998c-lc5nf" Dec 10 15:47:04 crc kubenswrapper[4755]: I1210 15:47:04.152795 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-749rm\" (UniqueName: \"kubernetes.io/projected/ab3e3d49-8054-4653-a857-45138337b2f7-kube-api-access-749rm\") pod \"dnsmasq-dns-54dd998c-lc5nf\" (UID: \"ab3e3d49-8054-4653-a857-45138337b2f7\") " pod="openstack/dnsmasq-dns-54dd998c-lc5nf" Dec 10 15:47:04 crc kubenswrapper[4755]: I1210 15:47:04.152880 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab3e3d49-8054-4653-a857-45138337b2f7-dns-svc\") pod \"dnsmasq-dns-54dd998c-lc5nf\" (UID: \"ab3e3d49-8054-4653-a857-45138337b2f7\") " pod="openstack/dnsmasq-dns-54dd998c-lc5nf" Dec 10 15:47:04 crc kubenswrapper[4755]: I1210 15:47:04.189805 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:47:04 crc kubenswrapper[4755]: I1210 15:47:04.255603 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab3e3d49-8054-4653-a857-45138337b2f7-dns-svc\") pod \"dnsmasq-dns-54dd998c-lc5nf\" (UID: \"ab3e3d49-8054-4653-a857-45138337b2f7\") " pod="openstack/dnsmasq-dns-54dd998c-lc5nf" Dec 10 15:47:04 crc kubenswrapper[4755]: I1210 15:47:04.256614 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab3e3d49-8054-4653-a857-45138337b2f7-dns-svc\") pod \"dnsmasq-dns-54dd998c-lc5nf\" (UID: \"ab3e3d49-8054-4653-a857-45138337b2f7\") " pod="openstack/dnsmasq-dns-54dd998c-lc5nf" 
Dec 10 15:47:04 crc kubenswrapper[4755]: I1210 15:47:04.256791 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ab3e3d49-8054-4653-a857-45138337b2f7-ovsdbserver-sb\") pod \"dnsmasq-dns-54dd998c-lc5nf\" (UID: \"ab3e3d49-8054-4653-a857-45138337b2f7\") " pod="openstack/dnsmasq-dns-54dd998c-lc5nf" Dec 10 15:47:04 crc kubenswrapper[4755]: I1210 15:47:04.257368 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ab3e3d49-8054-4653-a857-45138337b2f7-ovsdbserver-sb\") pod \"dnsmasq-dns-54dd998c-lc5nf\" (UID: \"ab3e3d49-8054-4653-a857-45138337b2f7\") " pod="openstack/dnsmasq-dns-54dd998c-lc5nf" Dec 10 15:47:04 crc kubenswrapper[4755]: I1210 15:47:04.257537 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ab3e3d49-8054-4653-a857-45138337b2f7-dns-swift-storage-0\") pod \"dnsmasq-dns-54dd998c-lc5nf\" (UID: \"ab3e3d49-8054-4653-a857-45138337b2f7\") " pod="openstack/dnsmasq-dns-54dd998c-lc5nf" Dec 10 15:47:04 crc kubenswrapper[4755]: I1210 15:47:04.257786 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ab3e3d49-8054-4653-a857-45138337b2f7-ovsdbserver-nb\") pod \"dnsmasq-dns-54dd998c-lc5nf\" (UID: \"ab3e3d49-8054-4653-a857-45138337b2f7\") " pod="openstack/dnsmasq-dns-54dd998c-lc5nf" Dec 10 15:47:04 crc kubenswrapper[4755]: I1210 15:47:04.257844 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab3e3d49-8054-4653-a857-45138337b2f7-config\") pod \"dnsmasq-dns-54dd998c-lc5nf\" (UID: \"ab3e3d49-8054-4653-a857-45138337b2f7\") " pod="openstack/dnsmasq-dns-54dd998c-lc5nf" Dec 10 15:47:04 crc kubenswrapper[4755]: I1210 15:47:04.258455 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ab3e3d49-8054-4653-a857-45138337b2f7-ovsdbserver-nb\") pod \"dnsmasq-dns-54dd998c-lc5nf\" (UID: \"ab3e3d49-8054-4653-a857-45138337b2f7\") " pod="openstack/dnsmasq-dns-54dd998c-lc5nf" Dec 10 15:47:04 crc kubenswrapper[4755]: I1210 15:47:04.258455 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ab3e3d49-8054-4653-a857-45138337b2f7-dns-swift-storage-0\") pod \"dnsmasq-dns-54dd998c-lc5nf\" (UID: \"ab3e3d49-8054-4653-a857-45138337b2f7\") " pod="openstack/dnsmasq-dns-54dd998c-lc5nf" Dec 10 15:47:04 crc kubenswrapper[4755]: I1210 15:47:04.258532 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab3e3d49-8054-4653-a857-45138337b2f7-config\") pod \"dnsmasq-dns-54dd998c-lc5nf\" (UID: \"ab3e3d49-8054-4653-a857-45138337b2f7\") " pod="openstack/dnsmasq-dns-54dd998c-lc5nf" Dec 10 15:47:04 crc kubenswrapper[4755]: I1210 15:47:04.258566 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-749rm\" (UniqueName: \"kubernetes.io/projected/ab3e3d49-8054-4653-a857-45138337b2f7-kube-api-access-749rm\") pod \"dnsmasq-dns-54dd998c-lc5nf\" (UID: \"ab3e3d49-8054-4653-a857-45138337b2f7\") " pod="openstack/dnsmasq-dns-54dd998c-lc5nf" Dec 10 15:47:04 crc kubenswrapper[4755]: I1210 15:47:04.277434 4755 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-749rm\" (UniqueName: \"kubernetes.io/projected/ab3e3d49-8054-4653-a857-45138337b2f7-kube-api-access-749rm\") pod \"dnsmasq-dns-54dd998c-lc5nf\" (UID: \"ab3e3d49-8054-4653-a857-45138337b2f7\") " pod="openstack/dnsmasq-dns-54dd998c-lc5nf" Dec 10 15:47:04 crc kubenswrapper[4755]: I1210 15:47:04.348458 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54dd998c-lc5nf" Dec 10 15:47:04 crc kubenswrapper[4755]: I1210 15:47:04.952146 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54dd998c-lc5nf"] Dec 10 15:47:05 crc kubenswrapper[4755]: I1210 15:47:05.869876 4755 generic.go:334] "Generic (PLEG): container finished" podID="0713bf5f-e7a0-40b6-b1a9-001d4fb9d972" containerID="d333508ca8c3cb49f82491a5b6e0f22b8f3e3fd6f98280551054a9b48bb46507" exitCode=0 Dec 10 15:47:05 crc kubenswrapper[4755]: I1210 15:47:05.870053 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773r7kx7" event={"ID":"0713bf5f-e7a0-40b6-b1a9-001d4fb9d972","Type":"ContainerDied","Data":"d333508ca8c3cb49f82491a5b6e0f22b8f3e3fd6f98280551054a9b48bb46507"} Dec 10 15:47:05 crc kubenswrapper[4755]: I1210 15:47:05.873183 4755 generic.go:334] "Generic (PLEG): container finished" podID="ab3e3d49-8054-4653-a857-45138337b2f7" containerID="a05d5628b434305b623c4ea63cf9a4c579a51b1442d8ad0d241cb9b8e6fd7ea5" exitCode=0 Dec 10 15:47:05 crc kubenswrapper[4755]: I1210 15:47:05.873362 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54dd998c-lc5nf" event={"ID":"ab3e3d49-8054-4653-a857-45138337b2f7","Type":"ContainerDied","Data":"a05d5628b434305b623c4ea63cf9a4c579a51b1442d8ad0d241cb9b8e6fd7ea5"} Dec 10 15:47:05 crc kubenswrapper[4755]: I1210 15:47:05.873411 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54dd998c-lc5nf" event={"ID":"ab3e3d49-8054-4653-a857-45138337b2f7","Type":"ContainerStarted","Data":"7dce6e9bf46fdfae56ca0d8da6c5618ff009a3d501016e47e6ffb78d97aef9f3"} Dec 10 15:47:06 crc kubenswrapper[4755]: I1210 15:47:06.886535 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54dd998c-lc5nf" event={"ID":"ab3e3d49-8054-4653-a857-45138337b2f7","Type":"ContainerStarted","Data":"21d6d9aa4274df63136332bafc9e03f1e2f4f0c5020e6d6990630d89f0ab7a1a"} Dec 10 15:47:06 crc kubenswrapper[4755]: I1210 15:47:06.886817 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-54dd998c-lc5nf" Dec 10 15:47:06 crc kubenswrapper[4755]: I1210 15:47:06.889685 4755 generic.go:334] "Generic (PLEG): container finished" podID="0713bf5f-e7a0-40b6-b1a9-001d4fb9d972" containerID="7c26ab07a87cfcaf1dd27c02bc8ea19e9cfa0147744db6f5eec88fdad094ef8e" exitCode=0 Dec 10 15:47:06 crc kubenswrapper[4755]: I1210 15:47:06.889737 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773r7kx7" event={"ID":"0713bf5f-e7a0-40b6-b1a9-001d4fb9d972","Type":"ContainerDied","Data":"7c26ab07a87cfcaf1dd27c02bc8ea19e9cfa0147744db6f5eec88fdad094ef8e"} Dec 10 15:47:06 crc kubenswrapper[4755]: I1210 15:47:06.924074 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-54dd998c-lc5nf" podStartSLOduration=3.924050475 podStartE2EDuration="3.924050475s" podCreationTimestamp="2025-12-10 15:47:03 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:47:06.911853433 +0000 UTC m=+1423.512737075" watchObservedRunningTime="2025-12-10 15:47:06.924050475 +0000 UTC m=+1423.524934107" Dec 10 15:47:07 crc kubenswrapper[4755]: I1210 15:47:07.744025 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 10 15:47:07 crc kubenswrapper[4755]: I1210 15:47:07.744617 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7129b800-f170-4cbf-9b18-26d01b1cdd26" containerName="nova-api-log" containerID="cri-o://f8b294449df5ef04262c533d8682ec6bd1dddb8009bbbff61e951331ed15486e" gracePeriod=30 Dec 10 15:47:07 crc kubenswrapper[4755]: I1210 15:47:07.744699 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7129b800-f170-4cbf-9b18-26d01b1cdd26" containerName="nova-api-api" containerID="cri-o://68c75d4e1d165ebd8cec04c9eabcdfe2b89031902044b808a45d1a5924fb1ecd" gracePeriod=30 Dec 10 15:47:07 crc kubenswrapper[4755]: I1210 15:47:07.905290 4755 generic.go:334] "Generic (PLEG): container finished" podID="7129b800-f170-4cbf-9b18-26d01b1cdd26" containerID="f8b294449df5ef04262c533d8682ec6bd1dddb8009bbbff61e951331ed15486e" exitCode=143 Dec 10 15:47:07 crc kubenswrapper[4755]: I1210 15:47:07.905330 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7129b800-f170-4cbf-9b18-26d01b1cdd26","Type":"ContainerDied","Data":"f8b294449df5ef04262c533d8682ec6bd1dddb8009bbbff61e951331ed15486e"} Dec 10 15:47:07 crc kubenswrapper[4755]: I1210 15:47:07.952155 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 10 15:47:07 crc kubenswrapper[4755]: I1210 15:47:07.952668 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e8e5f041-243a-466d-9d50-6657195db032" containerName="ceilometer-central-agent" containerID="cri-o://b6fbcba8750747d0d6c7ce84e3ec8b01b7f2bbcac0ca7a9ca1e08d8b7a98f725" gracePeriod=30 Dec 10 15:47:07 crc kubenswrapper[4755]: I1210 15:47:07.953242 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e8e5f041-243a-466d-9d50-6657195db032" containerName="proxy-httpd" containerID="cri-o://41faea9892c4e8dfdb0657ce9bc6f8fd7e27e54d25e719a181a9d972425b0ba0" gracePeriod=30 Dec 10 15:47:07 crc kubenswrapper[4755]: I1210 15:47:07.953298 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e8e5f041-243a-466d-9d50-6657195db032" containerName="sg-core" containerID="cri-o://169039bf409833291a0aa6441ba76456a5d9972ddcdf2a7c0ac736d7527c531b" gracePeriod=30 Dec 10 15:47:07 crc kubenswrapper[4755]: I1210 15:47:07.953339 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e8e5f041-243a-466d-9d50-6657195db032" containerName="ceilometer-notification-agent" containerID="cri-o://d77acec9fa04e8529215a937db8cca68097d45b683fc3af1cb46decce9caf804" gracePeriod=30 Dec 10 15:47:08 crc kubenswrapper[4755]: I1210 15:47:08.509192 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773r7kx7" Dec 10 15:47:08 crc kubenswrapper[4755]: I1210 15:47:08.599052 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmh8x\" (UniqueName: \"kubernetes.io/projected/0713bf5f-e7a0-40b6-b1a9-001d4fb9d972-kube-api-access-pmh8x\") pod \"0713bf5f-e7a0-40b6-b1a9-001d4fb9d972\" (UID: \"0713bf5f-e7a0-40b6-b1a9-001d4fb9d972\") " Dec 10 15:47:08 crc kubenswrapper[4755]: I1210 15:47:08.599371 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0713bf5f-e7a0-40b6-b1a9-001d4fb9d972-bundle\") pod \"0713bf5f-e7a0-40b6-b1a9-001d4fb9d972\" (UID: \"0713bf5f-e7a0-40b6-b1a9-001d4fb9d972\") " Dec 10 15:47:08 crc kubenswrapper[4755]: I1210 15:47:08.599424 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0713bf5f-e7a0-40b6-b1a9-001d4fb9d972-util\") pod \"0713bf5f-e7a0-40b6-b1a9-001d4fb9d972\" (UID: \"0713bf5f-e7a0-40b6-b1a9-001d4fb9d972\") " Dec 10 15:47:08 crc kubenswrapper[4755]: I1210 15:47:08.606875 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0713bf5f-e7a0-40b6-b1a9-001d4fb9d972-bundle" (OuterVolumeSpecName: "bundle") pod "0713bf5f-e7a0-40b6-b1a9-001d4fb9d972" (UID: "0713bf5f-e7a0-40b6-b1a9-001d4fb9d972"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:47:08 crc kubenswrapper[4755]: I1210 15:47:08.608723 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0713bf5f-e7a0-40b6-b1a9-001d4fb9d972-kube-api-access-pmh8x" (OuterVolumeSpecName: "kube-api-access-pmh8x") pod "0713bf5f-e7a0-40b6-b1a9-001d4fb9d972" (UID: "0713bf5f-e7a0-40b6-b1a9-001d4fb9d972"). InnerVolumeSpecName "kube-api-access-pmh8x". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:47:08 crc kubenswrapper[4755]: I1210 15:47:08.702376 4755 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0713bf5f-e7a0-40b6-b1a9-001d4fb9d972-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:08 crc kubenswrapper[4755]: I1210 15:47:08.702756 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmh8x\" (UniqueName: \"kubernetes.io/projected/0713bf5f-e7a0-40b6-b1a9-001d4fb9d972-kube-api-access-pmh8x\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:08 crc kubenswrapper[4755]: I1210 15:47:08.918180 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773r7kx7" event={"ID":"0713bf5f-e7a0-40b6-b1a9-001d4fb9d972","Type":"ContainerDied","Data":"0bf168f7bf3ea73676608b077f7fcc1561936a946566034b1483c1bf9592a880"} Dec 10 15:47:08 crc kubenswrapper[4755]: I1210 15:47:08.918229 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0bf168f7bf3ea73676608b077f7fcc1561936a946566034b1483c1bf9592a880" Dec 10 15:47:08 crc kubenswrapper[4755]: I1210 15:47:08.918293 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773r7kx7" Dec 10 15:47:08 crc kubenswrapper[4755]: I1210 15:47:08.923772 4755 generic.go:334] "Generic (PLEG): container finished" podID="e8e5f041-243a-466d-9d50-6657195db032" containerID="41faea9892c4e8dfdb0657ce9bc6f8fd7e27e54d25e719a181a9d972425b0ba0" exitCode=0 Dec 10 15:47:08 crc kubenswrapper[4755]: I1210 15:47:08.923798 4755 generic.go:334] "Generic (PLEG): container finished" podID="e8e5f041-243a-466d-9d50-6657195db032" containerID="169039bf409833291a0aa6441ba76456a5d9972ddcdf2a7c0ac736d7527c531b" exitCode=2 Dec 10 15:47:08 crc kubenswrapper[4755]: I1210 15:47:08.923816 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e8e5f041-243a-466d-9d50-6657195db032","Type":"ContainerDied","Data":"41faea9892c4e8dfdb0657ce9bc6f8fd7e27e54d25e719a181a9d972425b0ba0"} Dec 10 15:47:08 crc kubenswrapper[4755]: I1210 15:47:08.923839 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e8e5f041-243a-466d-9d50-6657195db032","Type":"ContainerDied","Data":"169039bf409833291a0aa6441ba76456a5d9972ddcdf2a7c0ac736d7527c531b"} Dec 10 15:47:09 crc kubenswrapper[4755]: I1210 15:47:09.074884 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0713bf5f-e7a0-40b6-b1a9-001d4fb9d972-util" (OuterVolumeSpecName: "util") pod "0713bf5f-e7a0-40b6-b1a9-001d4fb9d972" (UID: "0713bf5f-e7a0-40b6-b1a9-001d4fb9d972"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:47:09 crc kubenswrapper[4755]: I1210 15:47:09.116698 4755 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0713bf5f-e7a0-40b6-b1a9-001d4fb9d972-util\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:09 crc kubenswrapper[4755]: I1210 15:47:09.190164 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:47:09 crc kubenswrapper[4755]: I1210 15:47:09.219876 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:47:09 crc kubenswrapper[4755]: I1210 15:47:09.988169 4755 generic.go:334] "Generic (PLEG): container finished" podID="e8e5f041-243a-466d-9d50-6657195db032" containerID="d77acec9fa04e8529215a937db8cca68097d45b683fc3af1cb46decce9caf804" exitCode=0 Dec 10 15:47:09 crc kubenswrapper[4755]: I1210 15:47:09.988398 4755 generic.go:334] "Generic (PLEG): container finished" podID="e8e5f041-243a-466d-9d50-6657195db032" containerID="b6fbcba8750747d0d6c7ce84e3ec8b01b7f2bbcac0ca7a9ca1e08d8b7a98f725" exitCode=0 Dec 10 15:47:09 crc kubenswrapper[4755]: I1210 15:47:09.988201 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e8e5f041-243a-466d-9d50-6657195db032","Type":"ContainerDied","Data":"d77acec9fa04e8529215a937db8cca68097d45b683fc3af1cb46decce9caf804"} Dec 10 15:47:09 crc kubenswrapper[4755]: I1210 15:47:09.989548 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e8e5f041-243a-466d-9d50-6657195db032","Type":"ContainerDied","Data":"b6fbcba8750747d0d6c7ce84e3ec8b01b7f2bbcac0ca7a9ca1e08d8b7a98f725"} Dec 10 15:47:10 crc kubenswrapper[4755]: I1210 15:47:10.088134 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:47:10 crc 
kubenswrapper[4755]: I1210 15:47:10.465286 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-vmr2t"] Dec 10 15:47:10 crc kubenswrapper[4755]: E1210 15:47:10.465965 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0713bf5f-e7a0-40b6-b1a9-001d4fb9d972" containerName="pull" Dec 10 15:47:10 crc kubenswrapper[4755]: I1210 15:47:10.465976 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="0713bf5f-e7a0-40b6-b1a9-001d4fb9d972" containerName="pull" Dec 10 15:47:10 crc kubenswrapper[4755]: E1210 15:47:10.465994 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0713bf5f-e7a0-40b6-b1a9-001d4fb9d972" containerName="util" Dec 10 15:47:10 crc kubenswrapper[4755]: I1210 15:47:10.465999 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="0713bf5f-e7a0-40b6-b1a9-001d4fb9d972" containerName="util" Dec 10 15:47:10 crc kubenswrapper[4755]: E1210 15:47:10.466015 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0713bf5f-e7a0-40b6-b1a9-001d4fb9d972" containerName="extract" Dec 10 15:47:10 crc kubenswrapper[4755]: I1210 15:47:10.466022 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="0713bf5f-e7a0-40b6-b1a9-001d4fb9d972" containerName="extract" Dec 10 15:47:10 crc kubenswrapper[4755]: I1210 15:47:10.466227 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="0713bf5f-e7a0-40b6-b1a9-001d4fb9d972" containerName="extract" Dec 10 15:47:10 crc kubenswrapper[4755]: I1210 15:47:10.467126 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-vmr2t" Dec 10 15:47:10 crc kubenswrapper[4755]: I1210 15:47:10.470551 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Dec 10 15:47:10 crc kubenswrapper[4755]: I1210 15:47:10.470816 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Dec 10 15:47:10 crc kubenswrapper[4755]: I1210 15:47:10.491508 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-vmr2t"] Dec 10 15:47:10 crc kubenswrapper[4755]: I1210 15:47:10.554064 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79ff0629-ddf7-481b-be4f-58d4023b0ae7-scripts\") pod \"nova-cell1-cell-mapping-vmr2t\" (UID: \"79ff0629-ddf7-481b-be4f-58d4023b0ae7\") " pod="openstack/nova-cell1-cell-mapping-vmr2t" Dec 10 15:47:10 crc kubenswrapper[4755]: I1210 15:47:10.554142 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mm9k4\" (UniqueName: \"kubernetes.io/projected/79ff0629-ddf7-481b-be4f-58d4023b0ae7-kube-api-access-mm9k4\") pod \"nova-cell1-cell-mapping-vmr2t\" (UID: \"79ff0629-ddf7-481b-be4f-58d4023b0ae7\") " pod="openstack/nova-cell1-cell-mapping-vmr2t" Dec 10 15:47:10 crc kubenswrapper[4755]: I1210 15:47:10.554211 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79ff0629-ddf7-481b-be4f-58d4023b0ae7-config-data\") pod \"nova-cell1-cell-mapping-vmr2t\" (UID: \"79ff0629-ddf7-481b-be4f-58d4023b0ae7\") " pod="openstack/nova-cell1-cell-mapping-vmr2t" Dec 10 15:47:10 crc kubenswrapper[4755]: I1210 15:47:10.554384 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79ff0629-ddf7-481b-be4f-58d4023b0ae7-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-vmr2t\" (UID: \"79ff0629-ddf7-481b-be4f-58d4023b0ae7\") " pod="openstack/nova-cell1-cell-mapping-vmr2t" Dec 10 15:47:10 crc kubenswrapper[4755]: I1210 15:47:10.656804 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79ff0629-ddf7-481b-be4f-58d4023b0ae7-scripts\") pod \"nova-cell1-cell-mapping-vmr2t\" (UID: \"79ff0629-ddf7-481b-be4f-58d4023b0ae7\") " pod="openstack/nova-cell1-cell-mapping-vmr2t" Dec 10 15:47:10 crc kubenswrapper[4755]: I1210 15:47:10.656874 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mm9k4\" (UniqueName: \"kubernetes.io/projected/79ff0629-ddf7-481b-be4f-58d4023b0ae7-kube-api-access-mm9k4\") pod \"nova-cell1-cell-mapping-vmr2t\" (UID: \"79ff0629-ddf7-481b-be4f-58d4023b0ae7\") " pod="openstack/nova-cell1-cell-mapping-vmr2t" Dec 10 15:47:10 crc kubenswrapper[4755]: I1210 15:47:10.656937 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79ff0629-ddf7-481b-be4f-58d4023b0ae7-config-data\") pod \"nova-cell1-cell-mapping-vmr2t\" (UID: \"79ff0629-ddf7-481b-be4f-58d4023b0ae7\") " pod="openstack/nova-cell1-cell-mapping-vmr2t" Dec 10 15:47:10 crc kubenswrapper[4755]: I1210 15:47:10.657132 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79ff0629-ddf7-481b-be4f-58d4023b0ae7-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-vmr2t\" (UID: \"79ff0629-ddf7-481b-be4f-58d4023b0ae7\") " pod="openstack/nova-cell1-cell-mapping-vmr2t" Dec 10 15:47:10 crc kubenswrapper[4755]: I1210 15:47:10.664926 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79ff0629-ddf7-481b-be4f-58d4023b0ae7-scripts\") pod \"nova-cell1-cell-mapping-vmr2t\" (UID: \"79ff0629-ddf7-481b-be4f-58d4023b0ae7\") " pod="openstack/nova-cell1-cell-mapping-vmr2t" Dec 10 15:47:10 crc kubenswrapper[4755]: I1210 15:47:10.668787 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79ff0629-ddf7-481b-be4f-58d4023b0ae7-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-vmr2t\" (UID: \"79ff0629-ddf7-481b-be4f-58d4023b0ae7\") " pod="openstack/nova-cell1-cell-mapping-vmr2t" Dec 10 15:47:10 crc kubenswrapper[4755]: I1210 15:47:10.675165 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79ff0629-ddf7-481b-be4f-58d4023b0ae7-config-data\") pod \"nova-cell1-cell-mapping-vmr2t\" (UID: \"79ff0629-ddf7-481b-be4f-58d4023b0ae7\") " pod="openstack/nova-cell1-cell-mapping-vmr2t" Dec 10 15:47:10 crc kubenswrapper[4755]: I1210 15:47:10.679661 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mm9k4\" (UniqueName: \"kubernetes.io/projected/79ff0629-ddf7-481b-be4f-58d4023b0ae7-kube-api-access-mm9k4\") pod \"nova-cell1-cell-mapping-vmr2t\" (UID: \"79ff0629-ddf7-481b-be4f-58d4023b0ae7\") " pod="openstack/nova-cell1-cell-mapping-vmr2t" Dec 10 15:47:10 crc kubenswrapper[4755]: I1210 15:47:10.782757 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 10 15:47:10 crc kubenswrapper[4755]: I1210 15:47:10.806222 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-vmr2t" Dec 10 15:47:10 crc kubenswrapper[4755]: I1210 15:47:10.860327 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8e5f041-243a-466d-9d50-6657195db032-config-data\") pod \"e8e5f041-243a-466d-9d50-6657195db032\" (UID: \"e8e5f041-243a-466d-9d50-6657195db032\") " Dec 10 15:47:10 crc kubenswrapper[4755]: I1210 15:47:10.860374 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e8e5f041-243a-466d-9d50-6657195db032-log-httpd\") pod \"e8e5f041-243a-466d-9d50-6657195db032\" (UID: \"e8e5f041-243a-466d-9d50-6657195db032\") " Dec 10 15:47:10 crc kubenswrapper[4755]: I1210 15:47:10.860401 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e8e5f041-243a-466d-9d50-6657195db032-sg-core-conf-yaml\") pod \"e8e5f041-243a-466d-9d50-6657195db032\" (UID: \"e8e5f041-243a-466d-9d50-6657195db032\") " Dec 10 15:47:10 crc kubenswrapper[4755]: I1210 15:47:10.860446 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smcvc\" (UniqueName: \"kubernetes.io/projected/e8e5f041-243a-466d-9d50-6657195db032-kube-api-access-smcvc\") pod \"e8e5f041-243a-466d-9d50-6657195db032\" (UID: \"e8e5f041-243a-466d-9d50-6657195db032\") " Dec 10 15:47:10 crc kubenswrapper[4755]: I1210 15:47:10.860591 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8e5f041-243a-466d-9d50-6657195db032-ceilometer-tls-certs\") pod \"e8e5f041-243a-466d-9d50-6657195db032\" (UID: \"e8e5f041-243a-466d-9d50-6657195db032\") " Dec 10 15:47:10 crc kubenswrapper[4755]: I1210 15:47:10.860690 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e8e5f041-243a-466d-9d50-6657195db032-run-httpd\") pod \"e8e5f041-243a-466d-9d50-6657195db032\" (UID: \"e8e5f041-243a-466d-9d50-6657195db032\") " Dec 10 15:47:10 crc kubenswrapper[4755]: I1210 15:47:10.860786 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8e5f041-243a-466d-9d50-6657195db032-scripts\") pod \"e8e5f041-243a-466d-9d50-6657195db032\" (UID: \"e8e5f041-243a-466d-9d50-6657195db032\") " Dec 10 15:47:10 crc kubenswrapper[4755]: I1210 15:47:10.860806 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8e5f041-243a-466d-9d50-6657195db032-combined-ca-bundle\") pod \"e8e5f041-243a-466d-9d50-6657195db032\" (UID: \"e8e5f041-243a-466d-9d50-6657195db032\") " Dec 10 15:47:10 crc kubenswrapper[4755]: I1210 15:47:10.861224 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8e5f041-243a-466d-9d50-6657195db032-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e8e5f041-243a-466d-9d50-6657195db032" (UID: "e8e5f041-243a-466d-9d50-6657195db032"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:47:10 crc kubenswrapper[4755]: I1210 15:47:10.865463 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8e5f041-243a-466d-9d50-6657195db032-kube-api-access-smcvc" (OuterVolumeSpecName: "kube-api-access-smcvc") pod "e8e5f041-243a-466d-9d50-6657195db032" (UID: "e8e5f041-243a-466d-9d50-6657195db032"). InnerVolumeSpecName "kube-api-access-smcvc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:47:10 crc kubenswrapper[4755]: I1210 15:47:10.866116 4755 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e8e5f041-243a-466d-9d50-6657195db032-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:10 crc kubenswrapper[4755]: I1210 15:47:10.866360 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-smcvc\" (UniqueName: \"kubernetes.io/projected/e8e5f041-243a-466d-9d50-6657195db032-kube-api-access-smcvc\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:10 crc kubenswrapper[4755]: I1210 15:47:10.866626 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8e5f041-243a-466d-9d50-6657195db032-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e8e5f041-243a-466d-9d50-6657195db032" (UID: "e8e5f041-243a-466d-9d50-6657195db032"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:47:10 crc kubenswrapper[4755]: I1210 15:47:10.871673 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8e5f041-243a-466d-9d50-6657195db032-scripts" (OuterVolumeSpecName: "scripts") pod "e8e5f041-243a-466d-9d50-6657195db032" (UID: "e8e5f041-243a-466d-9d50-6657195db032"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:47:10 crc kubenswrapper[4755]: I1210 15:47:10.933816 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8e5f041-243a-466d-9d50-6657195db032-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e8e5f041-243a-466d-9d50-6657195db032" (UID: "e8e5f041-243a-466d-9d50-6657195db032"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:47:10 crc kubenswrapper[4755]: I1210 15:47:10.956520 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8e5f041-243a-466d-9d50-6657195db032-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "e8e5f041-243a-466d-9d50-6657195db032" (UID: "e8e5f041-243a-466d-9d50-6657195db032"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:47:10 crc kubenswrapper[4755]: I1210 15:47:10.969116 4755 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e8e5f041-243a-466d-9d50-6657195db032-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:10 crc kubenswrapper[4755]: I1210 15:47:10.969144 4755 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8e5f041-243a-466d-9d50-6657195db032-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:10 crc kubenswrapper[4755]: I1210 15:47:10.969158 4755 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e8e5f041-243a-466d-9d50-6657195db032-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:10 crc kubenswrapper[4755]: I1210 15:47:10.969171 4755 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8e5f041-243a-466d-9d50-6657195db032-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:11 crc kubenswrapper[4755]: I1210 15:47:11.017702 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8e5f041-243a-466d-9d50-6657195db032-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e8e5f041-243a-466d-9d50-6657195db032" (UID: "e8e5f041-243a-466d-9d50-6657195db032"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:47:11 crc kubenswrapper[4755]: I1210 15:47:11.034713 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 10 15:47:11 crc kubenswrapper[4755]: I1210 15:47:11.035533 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e8e5f041-243a-466d-9d50-6657195db032","Type":"ContainerDied","Data":"15e38929c2b49f2b72a47595f106277799eac2adf0fc6054a720577439e76801"} Dec 10 15:47:11 crc kubenswrapper[4755]: I1210 15:47:11.035636 4755 scope.go:117] "RemoveContainer" containerID="41faea9892c4e8dfdb0657ce9bc6f8fd7e27e54d25e719a181a9d972425b0ba0" Dec 10 15:47:11 crc kubenswrapper[4755]: I1210 15:47:11.074511 4755 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8e5f041-243a-466d-9d50-6657195db032-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:11 crc kubenswrapper[4755]: I1210 15:47:11.097332 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8e5f041-243a-466d-9d50-6657195db032-config-data" (OuterVolumeSpecName: "config-data") pod "e8e5f041-243a-466d-9d50-6657195db032" (UID: "e8e5f041-243a-466d-9d50-6657195db032"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:47:11 crc kubenswrapper[4755]: I1210 15:47:11.160455 4755 scope.go:117] "RemoveContainer" containerID="169039bf409833291a0aa6441ba76456a5d9972ddcdf2a7c0ac736d7527c531b" Dec 10 15:47:11 crc kubenswrapper[4755]: I1210 15:47:11.176591 4755 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8e5f041-243a-466d-9d50-6657195db032-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:11 crc kubenswrapper[4755]: I1210 15:47:11.188133 4755 scope.go:117] "RemoveContainer" containerID="d77acec9fa04e8529215a937db8cca68097d45b683fc3af1cb46decce9caf804" Dec 10 15:47:11 crc kubenswrapper[4755]: I1210 15:47:11.266015 4755 scope.go:117] "RemoveContainer" containerID="b6fbcba8750747d0d6c7ce84e3ec8b01b7f2bbcac0ca7a9ca1e08d8b7a98f725" Dec 10 15:47:11 crc kubenswrapper[4755]: I1210 15:47:11.392852 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 10 15:47:11 crc kubenswrapper[4755]: I1210 15:47:11.408552 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 10 15:47:11 crc kubenswrapper[4755]: I1210 15:47:11.437329 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-vmr2t"] Dec 10 15:47:11 crc kubenswrapper[4755]: I1210 15:47:11.454922 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 10 15:47:11 crc kubenswrapper[4755]: E1210 15:47:11.455552 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8e5f041-243a-466d-9d50-6657195db032" containerName="sg-core" Dec 10 15:47:11 crc kubenswrapper[4755]: I1210 15:47:11.455574 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8e5f041-243a-466d-9d50-6657195db032" containerName="sg-core" Dec 10 15:47:11 crc kubenswrapper[4755]: E1210 15:47:11.455595 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8e5f041-243a-466d-9d50-6657195db032" containerName="ceilometer-central-agent" Dec 10 15:47:11 crc kubenswrapper[4755]: I1210 15:47:11.455603 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8e5f041-243a-466d-9d50-6657195db032" containerName="ceilometer-central-agent" Dec 10 15:47:11 crc kubenswrapper[4755]: E1210 15:47:11.455620 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8e5f041-243a-466d-9d50-6657195db032" containerName="proxy-httpd" Dec 10 15:47:11 crc kubenswrapper[4755]: I1210 15:47:11.455627 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8e5f041-243a-466d-9d50-6657195db032" containerName="proxy-httpd" Dec 10 15:47:11 crc kubenswrapper[4755]: E1210 15:47:11.455670 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8e5f041-243a-466d-9d50-6657195db032" containerName="ceilometer-notification-agent" Dec 10 15:47:11 crc kubenswrapper[4755]: I1210 15:47:11.455678 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8e5f041-243a-466d-9d50-6657195db032" containerName="ceilometer-notification-agent" Dec 10 15:47:11 crc kubenswrapper[4755]: I1210 15:47:11.455857 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8e5f041-243a-466d-9d50-6657195db032" containerName="ceilometer-notification-agent" Dec 10 15:47:11 crc kubenswrapper[4755]: I1210 15:47:11.455868 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8e5f041-243a-466d-9d50-6657195db032" containerName="sg-core" Dec 10 15:47:11 crc kubenswrapper[4755]: I1210 15:47:11.455893 4755 
memory_manager.go:354] "RemoveStaleState removing state" podUID="e8e5f041-243a-466d-9d50-6657195db032" containerName="ceilometer-central-agent" Dec 10 15:47:11 crc kubenswrapper[4755]: I1210 15:47:11.455905 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8e5f041-243a-466d-9d50-6657195db032" containerName="proxy-httpd" Dec 10 15:47:11 crc kubenswrapper[4755]: I1210 15:47:11.457997 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 10 15:47:11 crc kubenswrapper[4755]: I1210 15:47:11.461356 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 10 15:47:11 crc kubenswrapper[4755]: I1210 15:47:11.462559 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 10 15:47:11 crc kubenswrapper[4755]: I1210 15:47:11.462947 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 10 15:47:11 crc kubenswrapper[4755]: I1210 15:47:11.463540 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 10 15:47:11 crc kubenswrapper[4755]: I1210 15:47:11.586250 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 10 15:47:11 crc kubenswrapper[4755]: I1210 15:47:11.589815 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2fcbda1-d1f1-4983-a9bc-d8575c3872d3-scripts\") pod \"ceilometer-0\" (UID: \"b2fcbda1-d1f1-4983-a9bc-d8575c3872d3\") " pod="openstack/ceilometer-0" Dec 10 15:47:11 crc kubenswrapper[4755]: I1210 15:47:11.589941 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2fcbda1-d1f1-4983-a9bc-d8575c3872d3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b2fcbda1-d1f1-4983-a9bc-d8575c3872d3\") " pod="openstack/ceilometer-0" Dec 10 15:47:11 crc kubenswrapper[4755]: I1210 15:47:11.589975 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2fcbda1-d1f1-4983-a9bc-d8575c3872d3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b2fcbda1-d1f1-4983-a9bc-d8575c3872d3\") " pod="openstack/ceilometer-0" Dec 10 15:47:11 crc kubenswrapper[4755]: I1210 15:47:11.591645 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b2fcbda1-d1f1-4983-a9bc-d8575c3872d3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b2fcbda1-d1f1-4983-a9bc-d8575c3872d3\") " pod="openstack/ceilometer-0" Dec 10 15:47:11 crc kubenswrapper[4755]: I1210 15:47:11.591774 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2fcbda1-d1f1-4983-a9bc-d8575c3872d3-config-data\") pod \"ceilometer-0\" (UID: \"b2fcbda1-d1f1-4983-a9bc-d8575c3872d3\") " pod="openstack/ceilometer-0" Dec 10 15:47:11 crc kubenswrapper[4755]: I1210 15:47:11.591859 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2fcbda1-d1f1-4983-a9bc-d8575c3872d3-log-httpd\") pod \"ceilometer-0\" (UID: \"b2fcbda1-d1f1-4983-a9bc-d8575c3872d3\") " 
pod="openstack/ceilometer-0" Dec 10 15:47:11 crc kubenswrapper[4755]: I1210 15:47:11.591913 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2fcbda1-d1f1-4983-a9bc-d8575c3872d3-run-httpd\") pod \"ceilometer-0\" (UID: \"b2fcbda1-d1f1-4983-a9bc-d8575c3872d3\") " pod="openstack/ceilometer-0" Dec 10 15:47:11 crc kubenswrapper[4755]: I1210 15:47:11.592008 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvcrp\" (UniqueName: \"kubernetes.io/projected/b2fcbda1-d1f1-4983-a9bc-d8575c3872d3-kube-api-access-lvcrp\") pod \"ceilometer-0\" (UID: \"b2fcbda1-d1f1-4983-a9bc-d8575c3872d3\") " pod="openstack/ceilometer-0" Dec 10 15:47:11 crc kubenswrapper[4755]: I1210 15:47:11.697373 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7129b800-f170-4cbf-9b18-26d01b1cdd26-config-data\") pod \"7129b800-f170-4cbf-9b18-26d01b1cdd26\" (UID: \"7129b800-f170-4cbf-9b18-26d01b1cdd26\") " Dec 10 15:47:11 crc kubenswrapper[4755]: I1210 15:47:11.697433 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7129b800-f170-4cbf-9b18-26d01b1cdd26-combined-ca-bundle\") pod \"7129b800-f170-4cbf-9b18-26d01b1cdd26\" (UID: \"7129b800-f170-4cbf-9b18-26d01b1cdd26\") " Dec 10 15:47:11 crc kubenswrapper[4755]: I1210 15:47:11.697550 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7129b800-f170-4cbf-9b18-26d01b1cdd26-logs\") pod \"7129b800-f170-4cbf-9b18-26d01b1cdd26\" (UID: \"7129b800-f170-4cbf-9b18-26d01b1cdd26\") " Dec 10 15:47:11 crc kubenswrapper[4755]: I1210 15:47:11.697584 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg7wt\" (UniqueName: \"kubernetes.io/projected/7129b800-f170-4cbf-9b18-26d01b1cdd26-kube-api-access-zg7wt\") pod \"7129b800-f170-4cbf-9b18-26d01b1cdd26\" (UID: \"7129b800-f170-4cbf-9b18-26d01b1cdd26\") " Dec 10 15:47:11 crc kubenswrapper[4755]: I1210 15:47:11.697846 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b2fcbda1-d1f1-4983-a9bc-d8575c3872d3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b2fcbda1-d1f1-4983-a9bc-d8575c3872d3\") " pod="openstack/ceilometer-0" Dec 10 15:47:11 crc kubenswrapper[4755]: I1210 15:47:11.697876 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2fcbda1-d1f1-4983-a9bc-d8575c3872d3-config-data\") pod \"ceilometer-0\" (UID: \"b2fcbda1-d1f1-4983-a9bc-d8575c3872d3\") " pod="openstack/ceilometer-0" Dec 10 15:47:11 crc kubenswrapper[4755]: I1210 15:47:11.697908 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2fcbda1-d1f1-4983-a9bc-d8575c3872d3-log-httpd\") pod \"ceilometer-0\" (UID: \"b2fcbda1-d1f1-4983-a9bc-d8575c3872d3\") " pod="openstack/ceilometer-0" Dec 10 15:47:11 crc kubenswrapper[4755]: I1210 15:47:11.697926 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2fcbda1-d1f1-4983-a9bc-d8575c3872d3-run-httpd\") pod \"ceilometer-0\" (UID: 
\"b2fcbda1-d1f1-4983-a9bc-d8575c3872d3\") " pod="openstack/ceilometer-0" Dec 10 15:47:11 crc kubenswrapper[4755]: I1210 15:47:11.697956 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvcrp\" (UniqueName: \"kubernetes.io/projected/b2fcbda1-d1f1-4983-a9bc-d8575c3872d3-kube-api-access-lvcrp\") pod \"ceilometer-0\" (UID: \"b2fcbda1-d1f1-4983-a9bc-d8575c3872d3\") " pod="openstack/ceilometer-0" Dec 10 15:47:11 crc kubenswrapper[4755]: I1210 15:47:11.698018 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2fcbda1-d1f1-4983-a9bc-d8575c3872d3-scripts\") pod \"ceilometer-0\" (UID: \"b2fcbda1-d1f1-4983-a9bc-d8575c3872d3\") " pod="openstack/ceilometer-0" Dec 10 15:47:11 crc kubenswrapper[4755]: I1210 15:47:11.698083 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2fcbda1-d1f1-4983-a9bc-d8575c3872d3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b2fcbda1-d1f1-4983-a9bc-d8575c3872d3\") " pod="openstack/ceilometer-0" Dec 10 15:47:11 crc kubenswrapper[4755]: I1210 15:47:11.698105 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2fcbda1-d1f1-4983-a9bc-d8575c3872d3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b2fcbda1-d1f1-4983-a9bc-d8575c3872d3\") " pod="openstack/ceilometer-0" Dec 10 15:47:11 crc kubenswrapper[4755]: I1210 15:47:11.709872 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2fcbda1-d1f1-4983-a9bc-d8575c3872d3-log-httpd\") pod \"ceilometer-0\" (UID: \"b2fcbda1-d1f1-4983-a9bc-d8575c3872d3\") " pod="openstack/ceilometer-0" Dec 10 15:47:11 crc kubenswrapper[4755]: I1210 15:47:11.710139 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2fcbda1-d1f1-4983-a9bc-d8575c3872d3-run-httpd\") pod \"ceilometer-0\" (UID: \"b2fcbda1-d1f1-4983-a9bc-d8575c3872d3\") " pod="openstack/ceilometer-0" Dec 10 15:47:11 crc kubenswrapper[4755]: I1210 15:47:11.710449 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7129b800-f170-4cbf-9b18-26d01b1cdd26-logs" (OuterVolumeSpecName: "logs") pod "7129b800-f170-4cbf-9b18-26d01b1cdd26" (UID: "7129b800-f170-4cbf-9b18-26d01b1cdd26"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:47:11 crc kubenswrapper[4755]: I1210 15:47:11.730800 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7129b800-f170-4cbf-9b18-26d01b1cdd26-kube-api-access-zg7wt" (OuterVolumeSpecName: "kube-api-access-zg7wt") pod "7129b800-f170-4cbf-9b18-26d01b1cdd26" (UID: "7129b800-f170-4cbf-9b18-26d01b1cdd26"). InnerVolumeSpecName "kube-api-access-zg7wt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:47:11 crc kubenswrapper[4755]: I1210 15:47:11.736230 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2fcbda1-d1f1-4983-a9bc-d8575c3872d3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b2fcbda1-d1f1-4983-a9bc-d8575c3872d3\") " pod="openstack/ceilometer-0" Dec 10 15:47:11 crc kubenswrapper[4755]: I1210 15:47:11.739039 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2fcbda1-d1f1-4983-a9bc-d8575c3872d3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b2fcbda1-d1f1-4983-a9bc-d8575c3872d3\") " pod="openstack/ceilometer-0" Dec 10 15:47:11 crc kubenswrapper[4755]: I1210 15:47:11.741223 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b2fcbda1-d1f1-4983-a9bc-d8575c3872d3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b2fcbda1-d1f1-4983-a9bc-d8575c3872d3\") " pod="openstack/ceilometer-0" Dec 10 15:47:11 crc kubenswrapper[4755]: I1210 15:47:11.741993 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2fcbda1-d1f1-4983-a9bc-d8575c3872d3-scripts\") pod \"ceilometer-0\" (UID: \"b2fcbda1-d1f1-4983-a9bc-d8575c3872d3\") " pod="openstack/ceilometer-0" Dec 10 15:47:11 crc kubenswrapper[4755]: I1210 15:47:11.744625 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2fcbda1-d1f1-4983-a9bc-d8575c3872d3-config-data\") pod \"ceilometer-0\" (UID: \"b2fcbda1-d1f1-4983-a9bc-d8575c3872d3\") " pod="openstack/ceilometer-0" Dec 10 15:47:11 crc kubenswrapper[4755]: I1210 15:47:11.766203 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvcrp\" (UniqueName: \"kubernetes.io/projected/b2fcbda1-d1f1-4983-a9bc-d8575c3872d3-kube-api-access-lvcrp\") pod \"ceilometer-0\" (UID: \"b2fcbda1-d1f1-4983-a9bc-d8575c3872d3\") " pod="openstack/ceilometer-0" Dec 10 15:47:11 crc kubenswrapper[4755]: I1210 15:47:11.781588 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8e5f041-243a-466d-9d50-6657195db032" path="/var/lib/kubelet/pods/e8e5f041-243a-466d-9d50-6657195db032/volumes" Dec 10 15:47:11 crc kubenswrapper[4755]: I1210 15:47:11.814061 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 10 15:47:11 crc kubenswrapper[4755]: I1210 15:47:11.818630 4755 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7129b800-f170-4cbf-9b18-26d01b1cdd26-logs\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:11 crc kubenswrapper[4755]: I1210 15:47:11.818688 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zg7wt\" (UniqueName: \"kubernetes.io/projected/7129b800-f170-4cbf-9b18-26d01b1cdd26-kube-api-access-zg7wt\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:11 crc kubenswrapper[4755]: I1210 15:47:11.875096 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7129b800-f170-4cbf-9b18-26d01b1cdd26-config-data" (OuterVolumeSpecName: "config-data") pod "7129b800-f170-4cbf-9b18-26d01b1cdd26" (UID: "7129b800-f170-4cbf-9b18-26d01b1cdd26"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:47:11 crc kubenswrapper[4755]: I1210 15:47:11.876684 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7129b800-f170-4cbf-9b18-26d01b1cdd26-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7129b800-f170-4cbf-9b18-26d01b1cdd26" (UID: "7129b800-f170-4cbf-9b18-26d01b1cdd26"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:47:11 crc kubenswrapper[4755]: I1210 15:47:11.929956 4755 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7129b800-f170-4cbf-9b18-26d01b1cdd26-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:11 crc kubenswrapper[4755]: I1210 15:47:11.929999 4755 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7129b800-f170-4cbf-9b18-26d01b1cdd26-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:12 crc kubenswrapper[4755]: I1210 15:47:12.070953 4755 generic.go:334] "Generic (PLEG): container finished" podID="7129b800-f170-4cbf-9b18-26d01b1cdd26" containerID="68c75d4e1d165ebd8cec04c9eabcdfe2b89031902044b808a45d1a5924fb1ecd" exitCode=0 Dec 10 15:47:12 crc kubenswrapper[4755]: I1210 15:47:12.071416 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 10 15:47:12 crc kubenswrapper[4755]: I1210 15:47:12.071713 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7129b800-f170-4cbf-9b18-26d01b1cdd26","Type":"ContainerDied","Data":"68c75d4e1d165ebd8cec04c9eabcdfe2b89031902044b808a45d1a5924fb1ecd"} Dec 10 15:47:12 crc kubenswrapper[4755]: I1210 15:47:12.071790 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7129b800-f170-4cbf-9b18-26d01b1cdd26","Type":"ContainerDied","Data":"365c28a19621b484a94ae54c49e779d5cad9af65c113bbaa0a52d59f9dd2ef18"} Dec 10 15:47:12 crc kubenswrapper[4755]: I1210 15:47:12.071816 4755 scope.go:117] "RemoveContainer" containerID="68c75d4e1d165ebd8cec04c9eabcdfe2b89031902044b808a45d1a5924fb1ecd" Dec 10 15:47:12 crc kubenswrapper[4755]: I1210 15:47:12.075448 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-vmr2t" event={"ID":"79ff0629-ddf7-481b-be4f-58d4023b0ae7","Type":"ContainerStarted","Data":"7b59ed82c9b544643082119788f0cfdaa32b6beaa9131b7ed7f4833049015779"} Dec 10 15:47:12 crc kubenswrapper[4755]: I1210 15:47:12.075506 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-vmr2t" event={"ID":"79ff0629-ddf7-481b-be4f-58d4023b0ae7","Type":"ContainerStarted","Data":"7ad8db1cc56c95593d3d843e0681a1cc32f715bf4c69b6d3b5bc9ac612115c37"} Dec 10 15:47:12 crc kubenswrapper[4755]: I1210 15:47:12.102828 4755 scope.go:117] "RemoveContainer" containerID="f8b294449df5ef04262c533d8682ec6bd1dddb8009bbbff61e951331ed15486e" Dec 10 15:47:12 crc kubenswrapper[4755]: I1210 15:47:12.125579 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-vmr2t" podStartSLOduration=2.125554847 podStartE2EDuration="2.125554847s" podCreationTimestamp="2025-12-10 15:47:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:47:12.103001423 +0000 UTC m=+1428.703885055" 
watchObservedRunningTime="2025-12-10 15:47:12.125554847 +0000 UTC m=+1428.726438479" Dec 10 15:47:12 crc kubenswrapper[4755]: I1210 15:47:12.148211 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 10 15:47:12 crc kubenswrapper[4755]: I1210 15:47:12.160813 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 10 15:47:12 crc kubenswrapper[4755]: I1210 15:47:12.164091 4755 scope.go:117] "RemoveContainer" containerID="68c75d4e1d165ebd8cec04c9eabcdfe2b89031902044b808a45d1a5924fb1ecd" Dec 10 15:47:12 crc kubenswrapper[4755]: E1210 15:47:12.164617 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68c75d4e1d165ebd8cec04c9eabcdfe2b89031902044b808a45d1a5924fb1ecd\": container with ID starting with 68c75d4e1d165ebd8cec04c9eabcdfe2b89031902044b808a45d1a5924fb1ecd not found: ID does not exist" containerID="68c75d4e1d165ebd8cec04c9eabcdfe2b89031902044b808a45d1a5924fb1ecd" Dec 10 15:47:12 crc kubenswrapper[4755]: I1210 15:47:12.164649 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68c75d4e1d165ebd8cec04c9eabcdfe2b89031902044b808a45d1a5924fb1ecd"} err="failed to get container status \"68c75d4e1d165ebd8cec04c9eabcdfe2b89031902044b808a45d1a5924fb1ecd\": rpc error: code = NotFound desc = could not find container \"68c75d4e1d165ebd8cec04c9eabcdfe2b89031902044b808a45d1a5924fb1ecd\": container with ID starting with 68c75d4e1d165ebd8cec04c9eabcdfe2b89031902044b808a45d1a5924fb1ecd not found: ID does not exist" Dec 10 15:47:12 crc kubenswrapper[4755]: I1210 15:47:12.164675 4755 scope.go:117] "RemoveContainer" containerID="f8b294449df5ef04262c533d8682ec6bd1dddb8009bbbff61e951331ed15486e" Dec 10 15:47:12 crc kubenswrapper[4755]: E1210 15:47:12.165052 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8b294449df5ef04262c533d8682ec6bd1dddb8009bbbff61e951331ed15486e\": container with ID starting with f8b294449df5ef04262c533d8682ec6bd1dddb8009bbbff61e951331ed15486e not found: ID does not exist" containerID="f8b294449df5ef04262c533d8682ec6bd1dddb8009bbbff61e951331ed15486e" Dec 10 15:47:12 crc kubenswrapper[4755]: I1210 15:47:12.165077 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8b294449df5ef04262c533d8682ec6bd1dddb8009bbbff61e951331ed15486e"} err="failed to get container status \"f8b294449df5ef04262c533d8682ec6bd1dddb8009bbbff61e951331ed15486e\": rpc error: code = NotFound desc = could not find container \"f8b294449df5ef04262c533d8682ec6bd1dddb8009bbbff61e951331ed15486e\": container with ID starting with f8b294449df5ef04262c533d8682ec6bd1dddb8009bbbff61e951331ed15486e not found: ID does not exist" Dec 10 15:47:12 crc kubenswrapper[4755]: I1210 15:47:12.170514 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 10 15:47:12 crc kubenswrapper[4755]: E1210 15:47:12.171958 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7129b800-f170-4cbf-9b18-26d01b1cdd26" containerName="nova-api-log" Dec 10 15:47:12 crc kubenswrapper[4755]: I1210 15:47:12.171999 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="7129b800-f170-4cbf-9b18-26d01b1cdd26" containerName="nova-api-log" Dec 10 15:47:12 crc kubenswrapper[4755]: E1210 15:47:12.172011 4755 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="7129b800-f170-4cbf-9b18-26d01b1cdd26" containerName="nova-api-api" Dec 10 15:47:12 crc kubenswrapper[4755]: I1210 15:47:12.172017 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="7129b800-f170-4cbf-9b18-26d01b1cdd26" containerName="nova-api-api" Dec 10 15:47:12 crc kubenswrapper[4755]: I1210 15:47:12.172259 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="7129b800-f170-4cbf-9b18-26d01b1cdd26" containerName="nova-api-api" Dec 10 15:47:12 crc kubenswrapper[4755]: I1210 15:47:12.172279 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="7129b800-f170-4cbf-9b18-26d01b1cdd26" containerName="nova-api-log" Dec 10 15:47:12 crc kubenswrapper[4755]: I1210 15:47:12.175776 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 10 15:47:12 crc kubenswrapper[4755]: I1210 15:47:12.179151 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Dec 10 15:47:12 crc kubenswrapper[4755]: I1210 15:47:12.179891 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 10 15:47:12 crc kubenswrapper[4755]: I1210 15:47:12.179916 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Dec 10 15:47:12 crc kubenswrapper[4755]: I1210 15:47:12.203530 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 10 15:47:12 crc kubenswrapper[4755]: I1210 15:47:12.342926 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8d3202f-3176-4aa8-93db-7461617d75be-internal-tls-certs\") pod \"nova-api-0\" (UID: \"d8d3202f-3176-4aa8-93db-7461617d75be\") " pod="openstack/nova-api-0" Dec 10 15:47:12 crc kubenswrapper[4755]: I1210 15:47:12.343001 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8d3202f-3176-4aa8-93db-7461617d75be-logs\") pod \"nova-api-0\" (UID: \"d8d3202f-3176-4aa8-93db-7461617d75be\") " pod="openstack/nova-api-0" Dec 10 15:47:12 crc kubenswrapper[4755]: I1210 15:47:12.343103 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvdpq\" (UniqueName: \"kubernetes.io/projected/d8d3202f-3176-4aa8-93db-7461617d75be-kube-api-access-tvdpq\") pod \"nova-api-0\" (UID: \"d8d3202f-3176-4aa8-93db-7461617d75be\") " pod="openstack/nova-api-0" Dec 10 15:47:12 crc kubenswrapper[4755]: I1210 15:47:12.343136 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8d3202f-3176-4aa8-93db-7461617d75be-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d8d3202f-3176-4aa8-93db-7461617d75be\") " pod="openstack/nova-api-0" Dec 10 15:47:12 crc kubenswrapper[4755]: I1210 15:47:12.343154 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8d3202f-3176-4aa8-93db-7461617d75be-config-data\") pod \"nova-api-0\" (UID: \"d8d3202f-3176-4aa8-93db-7461617d75be\") " pod="openstack/nova-api-0" Dec 10 15:47:12 crc kubenswrapper[4755]: I1210 15:47:12.343175 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/d8d3202f-3176-4aa8-93db-7461617d75be-public-tls-certs\") pod \"nova-api-0\" (UID: \"d8d3202f-3176-4aa8-93db-7461617d75be\") " pod="openstack/nova-api-0" Dec 10 15:47:12 crc kubenswrapper[4755]: W1210 15:47:12.417165 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2fcbda1_d1f1_4983_a9bc_d8575c3872d3.slice/crio-d14c33e6ee2f2c88d10d0e352a5d0ef2656798b21de6c2da150e7842d50c5f5b WatchSource:0}: Error finding container d14c33e6ee2f2c88d10d0e352a5d0ef2656798b21de6c2da150e7842d50c5f5b: Status 404 returned error can't find the container with id d14c33e6ee2f2c88d10d0e352a5d0ef2656798b21de6c2da150e7842d50c5f5b Dec 10 15:47:12 crc kubenswrapper[4755]: I1210 15:47:12.418055 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 10 15:47:12 crc kubenswrapper[4755]: I1210 15:47:12.445830 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8d3202f-3176-4aa8-93db-7461617d75be-internal-tls-certs\") pod \"nova-api-0\" (UID: \"d8d3202f-3176-4aa8-93db-7461617d75be\") " pod="openstack/nova-api-0" Dec 10 15:47:12 crc kubenswrapper[4755]: I1210 15:47:12.445905 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8d3202f-3176-4aa8-93db-7461617d75be-logs\") pod \"nova-api-0\" (UID: \"d8d3202f-3176-4aa8-93db-7461617d75be\") " pod="openstack/nova-api-0" Dec 10 15:47:12 crc kubenswrapper[4755]: I1210 15:47:12.446012 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvdpq\" (UniqueName: \"kubernetes.io/projected/d8d3202f-3176-4aa8-93db-7461617d75be-kube-api-access-tvdpq\") pod \"nova-api-0\" (UID: \"d8d3202f-3176-4aa8-93db-7461617d75be\") " pod="openstack/nova-api-0" Dec 10 15:47:12 crc kubenswrapper[4755]: I1210 15:47:12.446043 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8d3202f-3176-4aa8-93db-7461617d75be-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d8d3202f-3176-4aa8-93db-7461617d75be\") " pod="openstack/nova-api-0" Dec 10 15:47:12 crc kubenswrapper[4755]: I1210 15:47:12.446066 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8d3202f-3176-4aa8-93db-7461617d75be-config-data\") pod \"nova-api-0\" (UID: \"d8d3202f-3176-4aa8-93db-7461617d75be\") " pod="openstack/nova-api-0" Dec 10 15:47:12 crc kubenswrapper[4755]: I1210 15:47:12.446090 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8d3202f-3176-4aa8-93db-7461617d75be-public-tls-certs\") pod \"nova-api-0\" (UID: \"d8d3202f-3176-4aa8-93db-7461617d75be\") " pod="openstack/nova-api-0" Dec 10 15:47:12 crc kubenswrapper[4755]: I1210 15:47:12.447986 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8d3202f-3176-4aa8-93db-7461617d75be-logs\") pod \"nova-api-0\" (UID: \"d8d3202f-3176-4aa8-93db-7461617d75be\") " pod="openstack/nova-api-0" Dec 10 15:47:12 crc kubenswrapper[4755]: I1210 15:47:12.462108 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8d3202f-3176-4aa8-93db-7461617d75be-public-tls-certs\") 
pod \"nova-api-0\" (UID: \"d8d3202f-3176-4aa8-93db-7461617d75be\") " pod="openstack/nova-api-0" Dec 10 15:47:12 crc kubenswrapper[4755]: I1210 15:47:12.462358 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8d3202f-3176-4aa8-93db-7461617d75be-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d8d3202f-3176-4aa8-93db-7461617d75be\") " pod="openstack/nova-api-0" Dec 10 15:47:12 crc kubenswrapper[4755]: I1210 15:47:12.467072 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8d3202f-3176-4aa8-93db-7461617d75be-internal-tls-certs\") pod \"nova-api-0\" (UID: \"d8d3202f-3176-4aa8-93db-7461617d75be\") " pod="openstack/nova-api-0" Dec 10 15:47:12 crc kubenswrapper[4755]: I1210 15:47:12.467661 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8d3202f-3176-4aa8-93db-7461617d75be-config-data\") pod \"nova-api-0\" (UID: \"d8d3202f-3176-4aa8-93db-7461617d75be\") " pod="openstack/nova-api-0" Dec 10 15:47:12 crc kubenswrapper[4755]: I1210 15:47:12.468527 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvdpq\" (UniqueName: \"kubernetes.io/projected/d8d3202f-3176-4aa8-93db-7461617d75be-kube-api-access-tvdpq\") pod \"nova-api-0\" (UID: \"d8d3202f-3176-4aa8-93db-7461617d75be\") " pod="openstack/nova-api-0" Dec 10 15:47:12 crc kubenswrapper[4755]: I1210 15:47:12.504603 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 10 15:47:15 crc kubenswrapper[4755]: I1210 15:47:13.078243 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 10 15:47:15 crc kubenswrapper[4755]: I1210 15:47:13.108903 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d8d3202f-3176-4aa8-93db-7461617d75be","Type":"ContainerStarted","Data":"b917bd7484a453c1025206095b2e616ade1926c2ac029106eb813efa6feed82d"} Dec 10 15:47:15 crc kubenswrapper[4755]: I1210 15:47:13.121390 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2fcbda1-d1f1-4983-a9bc-d8575c3872d3","Type":"ContainerStarted","Data":"d14c33e6ee2f2c88d10d0e352a5d0ef2656798b21de6c2da150e7842d50c5f5b"} Dec 10 15:47:15 crc kubenswrapper[4755]: I1210 15:47:13.777459 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7129b800-f170-4cbf-9b18-26d01b1cdd26" path="/var/lib/kubelet/pods/7129b800-f170-4cbf-9b18-26d01b1cdd26/volumes" Dec 10 15:47:15 crc kubenswrapper[4755]: I1210 15:47:14.352129 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-54dd998c-lc5nf" Dec 10 15:47:15 crc kubenswrapper[4755]: I1210 15:47:14.427705 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-884c8b8f5-p2gb9"] Dec 10 15:47:15 crc kubenswrapper[4755]: I1210 15:47:14.428552 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-884c8b8f5-p2gb9" podUID="7b7f7f25-62c8-45da-bbf6-1759da909d8b" containerName="dnsmasq-dns" containerID="cri-o://99ffa58cf1ff350d6602a854711010bed17523f78745126a9386419d3f526f23" gracePeriod=10 Dec 10 15:47:15 crc kubenswrapper[4755]: I1210 15:47:15.143804 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"b2fcbda1-d1f1-4983-a9bc-d8575c3872d3","Type":"ContainerStarted","Data":"ff733829e2460a8346b0092d4246cb7c60ef161a3c1f2ca80cef8fd8a5c403d6"} Dec 10 15:47:15 crc kubenswrapper[4755]: I1210 15:47:15.151158 4755 generic.go:334] "Generic (PLEG): container finished" podID="7b7f7f25-62c8-45da-bbf6-1759da909d8b" containerID="99ffa58cf1ff350d6602a854711010bed17523f78745126a9386419d3f526f23" exitCode=0 Dec 10 15:47:15 crc kubenswrapper[4755]: I1210 15:47:15.151240 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-884c8b8f5-p2gb9" event={"ID":"7b7f7f25-62c8-45da-bbf6-1759da909d8b","Type":"ContainerDied","Data":"99ffa58cf1ff350d6602a854711010bed17523f78745126a9386419d3f526f23"} Dec 10 15:47:15 crc kubenswrapper[4755]: I1210 15:47:15.153583 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d8d3202f-3176-4aa8-93db-7461617d75be","Type":"ContainerStarted","Data":"ef88933ff432db51ec3fdd8d3ae77a9d0af47d9f412d3818a05d9a9e6200517b"} Dec 10 15:47:15 crc kubenswrapper[4755]: I1210 15:47:15.153610 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d8d3202f-3176-4aa8-93db-7461617d75be","Type":"ContainerStarted","Data":"6b1b3f7b780a709d4f26c3d8772d497eb46912c68bf802f4718655620e7b48e5"} Dec 10 15:47:15 crc kubenswrapper[4755]: I1210 15:47:15.228381 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.228363112 podStartE2EDuration="3.228363112s" podCreationTimestamp="2025-12-10 15:47:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:47:15.223278393 +0000 UTC m=+1431.824162015" watchObservedRunningTime="2025-12-10 15:47:15.228363112 +0000 UTC m=+1431.829246744" Dec 10 15:47:15 crc kubenswrapper[4755]: I1210 15:47:15.727817 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-884c8b8f5-p2gb9" Dec 10 15:47:15 crc kubenswrapper[4755]: I1210 15:47:15.887975 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frkhj\" (UniqueName: \"kubernetes.io/projected/7b7f7f25-62c8-45da-bbf6-1759da909d8b-kube-api-access-frkhj\") pod \"7b7f7f25-62c8-45da-bbf6-1759da909d8b\" (UID: \"7b7f7f25-62c8-45da-bbf6-1759da909d8b\") " Dec 10 15:47:15 crc kubenswrapper[4755]: I1210 15:47:15.888142 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7b7f7f25-62c8-45da-bbf6-1759da909d8b-dns-swift-storage-0\") pod \"7b7f7f25-62c8-45da-bbf6-1759da909d8b\" (UID: \"7b7f7f25-62c8-45da-bbf6-1759da909d8b\") " Dec 10 15:47:15 crc kubenswrapper[4755]: I1210 15:47:15.888252 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b7f7f25-62c8-45da-bbf6-1759da909d8b-config\") pod \"7b7f7f25-62c8-45da-bbf6-1759da909d8b\" (UID: \"7b7f7f25-62c8-45da-bbf6-1759da909d8b\") " Dec 10 15:47:15 crc kubenswrapper[4755]: I1210 15:47:15.888300 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7b7f7f25-62c8-45da-bbf6-1759da909d8b-ovsdbserver-nb\") pod \"7b7f7f25-62c8-45da-bbf6-1759da909d8b\" (UID: \"7b7f7f25-62c8-45da-bbf6-1759da909d8b\") " Dec 10 15:47:15 crc kubenswrapper[4755]: I1210 15:47:15.888344 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7b7f7f25-62c8-45da-bbf6-1759da909d8b-ovsdbserver-sb\") pod \"7b7f7f25-62c8-45da-bbf6-1759da909d8b\" (UID: \"7b7f7f25-62c8-45da-bbf6-1759da909d8b\") " Dec 10 15:47:15 crc kubenswrapper[4755]: I1210 15:47:15.888400 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7b7f7f25-62c8-45da-bbf6-1759da909d8b-dns-svc\") pod \"7b7f7f25-62c8-45da-bbf6-1759da909d8b\" (UID: \"7b7f7f25-62c8-45da-bbf6-1759da909d8b\") " Dec 10 15:47:15 crc kubenswrapper[4755]: I1210 15:47:15.961664 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b7f7f25-62c8-45da-bbf6-1759da909d8b-kube-api-access-frkhj" (OuterVolumeSpecName: "kube-api-access-frkhj") pod "7b7f7f25-62c8-45da-bbf6-1759da909d8b" (UID: "7b7f7f25-62c8-45da-bbf6-1759da909d8b"). InnerVolumeSpecName "kube-api-access-frkhj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:47:16 crc kubenswrapper[4755]: I1210 15:47:16.006621 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frkhj\" (UniqueName: \"kubernetes.io/projected/7b7f7f25-62c8-45da-bbf6-1759da909d8b-kube-api-access-frkhj\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:16 crc kubenswrapper[4755]: I1210 15:47:16.089542 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b7f7f25-62c8-45da-bbf6-1759da909d8b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7b7f7f25-62c8-45da-bbf6-1759da909d8b" (UID: "7b7f7f25-62c8-45da-bbf6-1759da909d8b"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:47:16 crc kubenswrapper[4755]: I1210 15:47:16.120123 4755 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7b7f7f25-62c8-45da-bbf6-1759da909d8b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:16 crc kubenswrapper[4755]: I1210 15:47:16.123553 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b7f7f25-62c8-45da-bbf6-1759da909d8b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7b7f7f25-62c8-45da-bbf6-1759da909d8b" (UID: "7b7f7f25-62c8-45da-bbf6-1759da909d8b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:47:16 crc kubenswrapper[4755]: I1210 15:47:16.124411 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b7f7f25-62c8-45da-bbf6-1759da909d8b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7b7f7f25-62c8-45da-bbf6-1759da909d8b" (UID: "7b7f7f25-62c8-45da-bbf6-1759da909d8b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:47:16 crc kubenswrapper[4755]: I1210 15:47:16.136382 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b7f7f25-62c8-45da-bbf6-1759da909d8b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7b7f7f25-62c8-45da-bbf6-1759da909d8b" (UID: "7b7f7f25-62c8-45da-bbf6-1759da909d8b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:47:16 crc kubenswrapper[4755]: I1210 15:47:16.156178 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b7f7f25-62c8-45da-bbf6-1759da909d8b-config" (OuterVolumeSpecName: "config") pod "7b7f7f25-62c8-45da-bbf6-1759da909d8b" (UID: "7b7f7f25-62c8-45da-bbf6-1759da909d8b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:47:16 crc kubenswrapper[4755]: I1210 15:47:16.203152 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-884c8b8f5-p2gb9" Dec 10 15:47:16 crc kubenswrapper[4755]: I1210 15:47:16.203295 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-884c8b8f5-p2gb9" event={"ID":"7b7f7f25-62c8-45da-bbf6-1759da909d8b","Type":"ContainerDied","Data":"34073b35aba2112689ea93167b786303d9de8163c23538a6a30116afc0c6e4b9"} Dec 10 15:47:16 crc kubenswrapper[4755]: I1210 15:47:16.203337 4755 scope.go:117] "RemoveContainer" containerID="99ffa58cf1ff350d6602a854711010bed17523f78745126a9386419d3f526f23" Dec 10 15:47:16 crc kubenswrapper[4755]: I1210 15:47:16.223455 4755 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7b7f7f25-62c8-45da-bbf6-1759da909d8b-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:16 crc kubenswrapper[4755]: I1210 15:47:16.223493 4755 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7b7f7f25-62c8-45da-bbf6-1759da909d8b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:16 crc kubenswrapper[4755]: I1210 15:47:16.223503 4755 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b7f7f25-62c8-45da-bbf6-1759da909d8b-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:16 crc kubenswrapper[4755]: I1210 15:47:16.223511 4755 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7b7f7f25-62c8-45da-bbf6-1759da909d8b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:16 crc kubenswrapper[4755]: I1210 15:47:16.242355 4755 scope.go:117] "RemoveContainer" containerID="b3d90b386da5b188f2d52f5c9967bb3e12220508e9c08388fa572c06cf2a30d3" Dec 10 15:47:16 crc kubenswrapper[4755]: I1210 15:47:16.263633 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-884c8b8f5-p2gb9"] Dec 10 15:47:16 crc kubenswrapper[4755]: I1210 15:47:16.273260 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-884c8b8f5-p2gb9"] Dec 10 15:47:17 crc kubenswrapper[4755]: I1210 15:47:17.212222 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2fcbda1-d1f1-4983-a9bc-d8575c3872d3","Type":"ContainerStarted","Data":"22b1e7b1fe5466a2a30affe55a39f8b15b7cac87d7e28673310bd867ea2308c5"} Dec 10 15:47:17 crc kubenswrapper[4755]: I1210 15:47:17.770359 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b7f7f25-62c8-45da-bbf6-1759da909d8b" path="/var/lib/kubelet/pods/7b7f7f25-62c8-45da-bbf6-1759da909d8b/volumes" Dec 10 15:47:18 crc kubenswrapper[4755]: I1210 15:47:18.224203 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2fcbda1-d1f1-4983-a9bc-d8575c3872d3","Type":"ContainerStarted","Data":"6bbe1eaf6240c09e1b40c1e0dac6f2375549f75bea162ba608a2126b6d7054ad"} Dec 10 15:47:19 crc kubenswrapper[4755]: I1210 15:47:19.138608 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-85977f6c59-57kcn"] Dec 10 15:47:19 crc kubenswrapper[4755]: E1210 15:47:19.139587 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b7f7f25-62c8-45da-bbf6-1759da909d8b" containerName="dnsmasq-dns" Dec 10 15:47:19 crc kubenswrapper[4755]: I1210 15:47:19.139614 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b7f7f25-62c8-45da-bbf6-1759da909d8b" containerName="dnsmasq-dns" 
Dec 10 15:47:19 crc kubenswrapper[4755]: E1210 15:47:19.139635 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b7f7f25-62c8-45da-bbf6-1759da909d8b" containerName="init" Dec 10 15:47:19 crc kubenswrapper[4755]: I1210 15:47:19.139664 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b7f7f25-62c8-45da-bbf6-1759da909d8b" containerName="init" Dec 10 15:47:19 crc kubenswrapper[4755]: I1210 15:47:19.140007 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b7f7f25-62c8-45da-bbf6-1759da909d8b" containerName="dnsmasq-dns" Dec 10 15:47:19 crc kubenswrapper[4755]: I1210 15:47:19.142260 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-85977f6c59-57kcn" Dec 10 15:47:19 crc kubenswrapper[4755]: I1210 15:47:19.229695 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgskn\" (UniqueName: \"kubernetes.io/projected/9778155f-9c4a-4cb4-9085-12d776d78435-kube-api-access-lgskn\") pod \"loki-operator-controller-manager-85977f6c59-57kcn\" (UID: \"9778155f-9c4a-4cb4-9085-12d776d78435\") " pod="openshift-operators-redhat/loki-operator-controller-manager-85977f6c59-57kcn" Dec 10 15:47:19 crc kubenswrapper[4755]: I1210 15:47:19.229805 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/9778155f-9c4a-4cb4-9085-12d776d78435-manager-config\") pod \"loki-operator-controller-manager-85977f6c59-57kcn\" (UID: \"9778155f-9c4a-4cb4-9085-12d776d78435\") " pod="openshift-operators-redhat/loki-operator-controller-manager-85977f6c59-57kcn" Dec 10 15:47:19 crc kubenswrapper[4755]: I1210 15:47:19.229854 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9778155f-9c4a-4cb4-9085-12d776d78435-webhook-cert\") pod \"loki-operator-controller-manager-85977f6c59-57kcn\" (UID: \"9778155f-9c4a-4cb4-9085-12d776d78435\") " pod="openshift-operators-redhat/loki-operator-controller-manager-85977f6c59-57kcn" Dec 10 15:47:19 crc kubenswrapper[4755]: I1210 15:47:19.229995 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9778155f-9c4a-4cb4-9085-12d776d78435-apiservice-cert\") pod \"loki-operator-controller-manager-85977f6c59-57kcn\" (UID: \"9778155f-9c4a-4cb4-9085-12d776d78435\") " pod="openshift-operators-redhat/loki-operator-controller-manager-85977f6c59-57kcn" Dec 10 15:47:19 crc kubenswrapper[4755]: I1210 15:47:19.230021 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9778155f-9c4a-4cb4-9085-12d776d78435-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-85977f6c59-57kcn\" (UID: \"9778155f-9c4a-4cb4-9085-12d776d78435\") " pod="openshift-operators-redhat/loki-operator-controller-manager-85977f6c59-57kcn" Dec 10 15:47:19 crc kubenswrapper[4755]: I1210 15:47:19.261008 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-85977f6c59-57kcn"] Dec 10 15:47:19 crc kubenswrapper[4755]: I1210 15:47:19.332069 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/9778155f-9c4a-4cb4-9085-12d776d78435-webhook-cert\") pod \"loki-operator-controller-manager-85977f6c59-57kcn\" (UID: \"9778155f-9c4a-4cb4-9085-12d776d78435\") " pod="openshift-operators-redhat/loki-operator-controller-manager-85977f6c59-57kcn" Dec 10 15:47:19 crc kubenswrapper[4755]: I1210 15:47:19.332183 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9778155f-9c4a-4cb4-9085-12d776d78435-apiservice-cert\") pod \"loki-operator-controller-manager-85977f6c59-57kcn\" (UID: \"9778155f-9c4a-4cb4-9085-12d776d78435\") " pod="openshift-operators-redhat/loki-operator-controller-manager-85977f6c59-57kcn" Dec 10 15:47:19 crc kubenswrapper[4755]: I1210 15:47:19.332215 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9778155f-9c4a-4cb4-9085-12d776d78435-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-85977f6c59-57kcn\" (UID: \"9778155f-9c4a-4cb4-9085-12d776d78435\") " pod="openshift-operators-redhat/loki-operator-controller-manager-85977f6c59-57kcn" Dec 10 15:47:19 crc kubenswrapper[4755]: I1210 15:47:19.332388 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgskn\" (UniqueName: \"kubernetes.io/projected/9778155f-9c4a-4cb4-9085-12d776d78435-kube-api-access-lgskn\") pod \"loki-operator-controller-manager-85977f6c59-57kcn\" (UID: \"9778155f-9c4a-4cb4-9085-12d776d78435\") " pod="openshift-operators-redhat/loki-operator-controller-manager-85977f6c59-57kcn" Dec 10 15:47:19 crc kubenswrapper[4755]: I1210 15:47:19.332440 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/9778155f-9c4a-4cb4-9085-12d776d78435-manager-config\") pod \"loki-operator-controller-manager-85977f6c59-57kcn\" (UID: \"9778155f-9c4a-4cb4-9085-12d776d78435\") " pod="openshift-operators-redhat/loki-operator-controller-manager-85977f6c59-57kcn" Dec 10 15:47:19 crc kubenswrapper[4755]: I1210 15:47:19.333515 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/9778155f-9c4a-4cb4-9085-12d776d78435-manager-config\") pod \"loki-operator-controller-manager-85977f6c59-57kcn\" (UID: \"9778155f-9c4a-4cb4-9085-12d776d78435\") " pod="openshift-operators-redhat/loki-operator-controller-manager-85977f6c59-57kcn" Dec 10 15:47:19 crc kubenswrapper[4755]: I1210 15:47:19.336313 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9778155f-9c4a-4cb4-9085-12d776d78435-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-85977f6c59-57kcn\" (UID: \"9778155f-9c4a-4cb4-9085-12d776d78435\") " pod="openshift-operators-redhat/loki-operator-controller-manager-85977f6c59-57kcn" Dec 10 15:47:19 crc kubenswrapper[4755]: I1210 15:47:19.338205 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9778155f-9c4a-4cb4-9085-12d776d78435-webhook-cert\") pod \"loki-operator-controller-manager-85977f6c59-57kcn\" (UID: \"9778155f-9c4a-4cb4-9085-12d776d78435\") " pod="openshift-operators-redhat/loki-operator-controller-manager-85977f6c59-57kcn" Dec 10 15:47:19 crc kubenswrapper[4755]: I1210 15:47:19.354108 4755 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-lgskn\" (UniqueName: \"kubernetes.io/projected/9778155f-9c4a-4cb4-9085-12d776d78435-kube-api-access-lgskn\") pod \"loki-operator-controller-manager-85977f6c59-57kcn\" (UID: \"9778155f-9c4a-4cb4-9085-12d776d78435\") " pod="openshift-operators-redhat/loki-operator-controller-manager-85977f6c59-57kcn" Dec 10 15:47:19 crc kubenswrapper[4755]: I1210 15:47:19.374339 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9778155f-9c4a-4cb4-9085-12d776d78435-apiservice-cert\") pod \"loki-operator-controller-manager-85977f6c59-57kcn\" (UID: \"9778155f-9c4a-4cb4-9085-12d776d78435\") " pod="openshift-operators-redhat/loki-operator-controller-manager-85977f6c59-57kcn" Dec 10 15:47:19 crc kubenswrapper[4755]: I1210 15:47:19.543251 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-85977f6c59-57kcn" Dec 10 15:47:20 crc kubenswrapper[4755]: W1210 15:47:20.184915 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9778155f_9c4a_4cb4_9085_12d776d78435.slice/crio-845364af72e04aeb67028bdf5515ca76657ed7d72a840cc4e145060d683bfdca WatchSource:0}: Error finding container 845364af72e04aeb67028bdf5515ca76657ed7d72a840cc4e145060d683bfdca: Status 404 returned error can't find the container with id 845364af72e04aeb67028bdf5515ca76657ed7d72a840cc4e145060d683bfdca Dec 10 15:47:20 crc kubenswrapper[4755]: I1210 15:47:20.216163 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-85977f6c59-57kcn"] Dec 10 15:47:20 crc kubenswrapper[4755]: I1210 15:47:20.363700 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-85977f6c59-57kcn" event={"ID":"9778155f-9c4a-4cb4-9085-12d776d78435","Type":"ContainerStarted","Data":"845364af72e04aeb67028bdf5515ca76657ed7d72a840cc4e145060d683bfdca"} Dec 10 15:47:20 crc kubenswrapper[4755]: I1210 15:47:20.409634 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2fcbda1-d1f1-4983-a9bc-d8575c3872d3","Type":"ContainerStarted","Data":"632225f0f1b6e3a898b37ef5dd601955d7ce1d91e0cc3d9d660312af9918ff7f"} Dec 10 15:47:20 crc kubenswrapper[4755]: I1210 15:47:20.409956 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 10 15:47:20 crc kubenswrapper[4755]: I1210 15:47:20.479224 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.956886328 podStartE2EDuration="9.479198948s" podCreationTimestamp="2025-12-10 15:47:11 +0000 UTC" firstStartedPulling="2025-12-10 15:47:12.420213428 +0000 UTC m=+1429.021097060" lastFinishedPulling="2025-12-10 15:47:18.942526048 +0000 UTC m=+1435.543409680" observedRunningTime="2025-12-10 15:47:20.472453174 +0000 UTC m=+1437.073336806" watchObservedRunningTime="2025-12-10 15:47:20.479198948 +0000 UTC m=+1437.080082580" Dec 10 15:47:22 crc kubenswrapper[4755]: I1210 15:47:22.469691 4755 generic.go:334] "Generic (PLEG): container finished" podID="79ff0629-ddf7-481b-be4f-58d4023b0ae7" containerID="7b59ed82c9b544643082119788f0cfdaa32b6beaa9131b7ed7f4833049015779" exitCode=0 Dec 10 15:47:22 crc kubenswrapper[4755]: I1210 15:47:22.469819 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-cell-mapping-vmr2t" event={"ID":"79ff0629-ddf7-481b-be4f-58d4023b0ae7","Type":"ContainerDied","Data":"7b59ed82c9b544643082119788f0cfdaa32b6beaa9131b7ed7f4833049015779"} Dec 10 15:47:22 crc kubenswrapper[4755]: I1210 15:47:22.505726 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 10 15:47:22 crc kubenswrapper[4755]: I1210 15:47:22.506085 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 10 15:47:23 crc kubenswrapper[4755]: I1210 15:47:23.525657 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d8d3202f-3176-4aa8-93db-7461617d75be" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.228:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 10 15:47:23 crc kubenswrapper[4755]: I1210 15:47:23.525945 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d8d3202f-3176-4aa8-93db-7461617d75be" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.228:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 10 15:47:24 crc kubenswrapper[4755]: I1210 15:47:24.059826 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-vmr2t" Dec 10 15:47:24 crc kubenswrapper[4755]: I1210 15:47:24.159458 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mm9k4\" (UniqueName: \"kubernetes.io/projected/79ff0629-ddf7-481b-be4f-58d4023b0ae7-kube-api-access-mm9k4\") pod \"79ff0629-ddf7-481b-be4f-58d4023b0ae7\" (UID: \"79ff0629-ddf7-481b-be4f-58d4023b0ae7\") " Dec 10 15:47:24 crc kubenswrapper[4755]: I1210 15:47:24.159548 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79ff0629-ddf7-481b-be4f-58d4023b0ae7-scripts\") pod \"79ff0629-ddf7-481b-be4f-58d4023b0ae7\" (UID: \"79ff0629-ddf7-481b-be4f-58d4023b0ae7\") " Dec 10 15:47:24 crc kubenswrapper[4755]: I1210 15:47:24.159616 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79ff0629-ddf7-481b-be4f-58d4023b0ae7-combined-ca-bundle\") pod \"79ff0629-ddf7-481b-be4f-58d4023b0ae7\" (UID: \"79ff0629-ddf7-481b-be4f-58d4023b0ae7\") " Dec 10 15:47:24 crc kubenswrapper[4755]: I1210 15:47:24.159779 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79ff0629-ddf7-481b-be4f-58d4023b0ae7-config-data\") pod \"79ff0629-ddf7-481b-be4f-58d4023b0ae7\" (UID: \"79ff0629-ddf7-481b-be4f-58d4023b0ae7\") " Dec 10 15:47:24 crc kubenswrapper[4755]: I1210 15:47:24.165895 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79ff0629-ddf7-481b-be4f-58d4023b0ae7-kube-api-access-mm9k4" (OuterVolumeSpecName: "kube-api-access-mm9k4") pod "79ff0629-ddf7-481b-be4f-58d4023b0ae7" (UID: "79ff0629-ddf7-481b-be4f-58d4023b0ae7"). InnerVolumeSpecName "kube-api-access-mm9k4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:47:24 crc kubenswrapper[4755]: I1210 15:47:24.171213 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79ff0629-ddf7-481b-be4f-58d4023b0ae7-scripts" (OuterVolumeSpecName: "scripts") pod "79ff0629-ddf7-481b-be4f-58d4023b0ae7" (UID: "79ff0629-ddf7-481b-be4f-58d4023b0ae7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:47:24 crc kubenswrapper[4755]: I1210 15:47:24.247341 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79ff0629-ddf7-481b-be4f-58d4023b0ae7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "79ff0629-ddf7-481b-be4f-58d4023b0ae7" (UID: "79ff0629-ddf7-481b-be4f-58d4023b0ae7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:47:24 crc kubenswrapper[4755]: I1210 15:47:24.259569 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79ff0629-ddf7-481b-be4f-58d4023b0ae7-config-data" (OuterVolumeSpecName: "config-data") pod "79ff0629-ddf7-481b-be4f-58d4023b0ae7" (UID: "79ff0629-ddf7-481b-be4f-58d4023b0ae7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:47:24 crc kubenswrapper[4755]: I1210 15:47:24.262008 4755 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79ff0629-ddf7-481b-be4f-58d4023b0ae7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:24 crc kubenswrapper[4755]: I1210 15:47:24.262043 4755 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79ff0629-ddf7-481b-be4f-58d4023b0ae7-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:24 crc kubenswrapper[4755]: I1210 15:47:24.262063 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mm9k4\" (UniqueName: \"kubernetes.io/projected/79ff0629-ddf7-481b-be4f-58d4023b0ae7-kube-api-access-mm9k4\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:24 crc kubenswrapper[4755]: I1210 15:47:24.262074 4755 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79ff0629-ddf7-481b-be4f-58d4023b0ae7-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:24 crc kubenswrapper[4755]: I1210 15:47:24.495811 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-vmr2t" event={"ID":"79ff0629-ddf7-481b-be4f-58d4023b0ae7","Type":"ContainerDied","Data":"7ad8db1cc56c95593d3d843e0681a1cc32f715bf4c69b6d3b5bc9ac612115c37"} Dec 10 15:47:24 crc kubenswrapper[4755]: I1210 15:47:24.496018 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ad8db1cc56c95593d3d843e0681a1cc32f715bf4c69b6d3b5bc9ac612115c37" Dec 10 15:47:24 crc kubenswrapper[4755]: I1210 15:47:24.496120 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-vmr2t" Dec 10 15:47:24 crc kubenswrapper[4755]: I1210 15:47:24.698811 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 10 15:47:24 crc kubenswrapper[4755]: I1210 15:47:24.699164 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="14ae9d17-ca94-4a4a-8148-f24ad4a8f268" containerName="nova-scheduler-scheduler" containerID="cri-o://13e8855d5ddcd82893c087a54ffbfd319e8a9a2e0328720840004fa890f48894" gracePeriod=30 Dec 10 15:47:24 crc kubenswrapper[4755]: I1210 15:47:24.728944 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 10 15:47:24 crc kubenswrapper[4755]: I1210 15:47:24.729431 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d8d3202f-3176-4aa8-93db-7461617d75be" containerName="nova-api-log" containerID="cri-o://6b1b3f7b780a709d4f26c3d8772d497eb46912c68bf802f4718655620e7b48e5" gracePeriod=30 Dec 10 15:47:24 crc kubenswrapper[4755]: I1210 15:47:24.729891 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d8d3202f-3176-4aa8-93db-7461617d75be" containerName="nova-api-api" containerID="cri-o://ef88933ff432db51ec3fdd8d3ae77a9d0af47d9f412d3818a05d9a9e6200517b" gracePeriod=30 Dec 10 15:47:24 crc kubenswrapper[4755]: I1210 15:47:24.751405 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 10 15:47:24 crc kubenswrapper[4755]: I1210 15:47:24.755004 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="04cde7cd-7f43-4870-a4fc-c8bdb0b4188a" containerName="nova-metadata-log" containerID="cri-o://6086fa8be8b19c45b98b1004a3d289895d70855527be98b4a9fa4c956dbba37f" gracePeriod=30 Dec 10 15:47:24 crc kubenswrapper[4755]: I1210 15:47:24.755666 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="04cde7cd-7f43-4870-a4fc-c8bdb0b4188a" containerName="nova-metadata-metadata" containerID="cri-o://12a08f7fafdc5148848eae58acbfe57f0fdbb16cedeb387bb3f2dfa8499bff19" gracePeriod=30 Dec 10 15:47:25 crc kubenswrapper[4755]: I1210 15:47:25.518515 4755 generic.go:334] "Generic (PLEG): container finished" podID="04cde7cd-7f43-4870-a4fc-c8bdb0b4188a" containerID="6086fa8be8b19c45b98b1004a3d289895d70855527be98b4a9fa4c956dbba37f" exitCode=143 Dec 10 15:47:25 crc kubenswrapper[4755]: I1210 15:47:25.518630 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"04cde7cd-7f43-4870-a4fc-c8bdb0b4188a","Type":"ContainerDied","Data":"6086fa8be8b19c45b98b1004a3d289895d70855527be98b4a9fa4c956dbba37f"} Dec 10 15:47:25 crc kubenswrapper[4755]: I1210 15:47:25.526208 4755 generic.go:334] "Generic (PLEG): container finished" podID="d8d3202f-3176-4aa8-93db-7461617d75be" containerID="6b1b3f7b780a709d4f26c3d8772d497eb46912c68bf802f4718655620e7b48e5" exitCode=143 Dec 10 15:47:25 crc kubenswrapper[4755]: I1210 15:47:25.526255 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d8d3202f-3176-4aa8-93db-7461617d75be","Type":"ContainerDied","Data":"6b1b3f7b780a709d4f26c3d8772d497eb46912c68bf802f4718655620e7b48e5"} Dec 10 15:47:27 crc kubenswrapper[4755]: I1210 15:47:27.927306 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" 
podUID="04cde7cd-7f43-4870-a4fc-c8bdb0b4188a" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.217:8775/\": read tcp 10.217.0.2:34978->10.217.0.217:8775: read: connection reset by peer" Dec 10 15:47:27 crc kubenswrapper[4755]: I1210 15:47:27.927312 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="04cde7cd-7f43-4870-a4fc-c8bdb0b4188a" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.217:8775/\": read tcp 10.217.0.2:34994->10.217.0.217:8775: read: connection reset by peer" Dec 10 15:47:28 crc kubenswrapper[4755]: I1210 15:47:28.568589 4755 generic.go:334] "Generic (PLEG): container finished" podID="04cde7cd-7f43-4870-a4fc-c8bdb0b4188a" containerID="12a08f7fafdc5148848eae58acbfe57f0fdbb16cedeb387bb3f2dfa8499bff19" exitCode=0 Dec 10 15:47:28 crc kubenswrapper[4755]: I1210 15:47:28.568639 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"04cde7cd-7f43-4870-a4fc-c8bdb0b4188a","Type":"ContainerDied","Data":"12a08f7fafdc5148848eae58acbfe57f0fdbb16cedeb387bb3f2dfa8499bff19"} Dec 10 15:47:28 crc kubenswrapper[4755]: E1210 15:47:28.851399 4755 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="13e8855d5ddcd82893c087a54ffbfd319e8a9a2e0328720840004fa890f48894" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 10 15:47:28 crc kubenswrapper[4755]: E1210 15:47:28.852823 4755 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="13e8855d5ddcd82893c087a54ffbfd319e8a9a2e0328720840004fa890f48894" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 10 15:47:28 crc kubenswrapper[4755]: E1210 15:47:28.853847 4755 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="13e8855d5ddcd82893c087a54ffbfd319e8a9a2e0328720840004fa890f48894" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 10 15:47:28 crc kubenswrapper[4755]: E1210 15:47:28.853875 4755 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="14ae9d17-ca94-4a4a-8148-f24ad4a8f268" containerName="nova-scheduler-scheduler" Dec 10 15:47:30 crc kubenswrapper[4755]: I1210 15:47:30.591186 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 10 15:47:30 crc kubenswrapper[4755]: I1210 15:47:30.598718 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"04cde7cd-7f43-4870-a4fc-c8bdb0b4188a","Type":"ContainerDied","Data":"698027df8097caa4f6581591afc1210a9ba63f1ddce2ee55c77905bb52857e00"} Dec 10 15:47:30 crc kubenswrapper[4755]: I1210 15:47:30.598807 4755 scope.go:117] "RemoveContainer" containerID="12a08f7fafdc5148848eae58acbfe57f0fdbb16cedeb387bb3f2dfa8499bff19" Dec 10 15:47:30 crc kubenswrapper[4755]: I1210 15:47:30.599591 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 10 15:47:30 crc kubenswrapper[4755]: I1210 15:47:30.605121 4755 generic.go:334] "Generic (PLEG): container finished" podID="14ae9d17-ca94-4a4a-8148-f24ad4a8f268" containerID="13e8855d5ddcd82893c087a54ffbfd319e8a9a2e0328720840004fa890f48894" exitCode=0 Dec 10 15:47:30 crc kubenswrapper[4755]: I1210 15:47:30.605170 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"14ae9d17-ca94-4a4a-8148-f24ad4a8f268","Type":"ContainerDied","Data":"13e8855d5ddcd82893c087a54ffbfd319e8a9a2e0328720840004fa890f48894"} Dec 10 15:47:30 crc kubenswrapper[4755]: I1210 15:47:30.632184 4755 scope.go:117] "RemoveContainer" containerID="6086fa8be8b19c45b98b1004a3d289895d70855527be98b4a9fa4c956dbba37f" Dec 10 15:47:30 crc kubenswrapper[4755]: I1210 15:47:30.698145 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04cde7cd-7f43-4870-a4fc-c8bdb0b4188a-logs\") pod \"04cde7cd-7f43-4870-a4fc-c8bdb0b4188a\" (UID: \"04cde7cd-7f43-4870-a4fc-c8bdb0b4188a\") " Dec 10 15:47:30 crc kubenswrapper[4755]: I1210 15:47:30.698269 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04cde7cd-7f43-4870-a4fc-c8bdb0b4188a-config-data\") pod \"04cde7cd-7f43-4870-a4fc-c8bdb0b4188a\" (UID: \"04cde7cd-7f43-4870-a4fc-c8bdb0b4188a\") " Dec 10 15:47:30 crc kubenswrapper[4755]: I1210 15:47:30.698366 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04cde7cd-7f43-4870-a4fc-c8bdb0b4188a-combined-ca-bundle\") pod \"04cde7cd-7f43-4870-a4fc-c8bdb0b4188a\" (UID: \"04cde7cd-7f43-4870-a4fc-c8bdb0b4188a\") " Dec 10 15:47:30 crc kubenswrapper[4755]: I1210 15:47:30.698529 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/04cde7cd-7f43-4870-a4fc-c8bdb0b4188a-nova-metadata-tls-certs\") pod \"04cde7cd-7f43-4870-a4fc-c8bdb0b4188a\" (UID: \"04cde7cd-7f43-4870-a4fc-c8bdb0b4188a\") " Dec 10 15:47:30 crc kubenswrapper[4755]: I1210 15:47:30.698580 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zzmg\" (UniqueName: \"kubernetes.io/projected/04cde7cd-7f43-4870-a4fc-c8bdb0b4188a-kube-api-access-6zzmg\") pod \"04cde7cd-7f43-4870-a4fc-c8bdb0b4188a\" (UID: \"04cde7cd-7f43-4870-a4fc-c8bdb0b4188a\") " Dec 10 15:47:30 crc kubenswrapper[4755]: I1210 15:47:30.703569 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04cde7cd-7f43-4870-a4fc-c8bdb0b4188a-logs" (OuterVolumeSpecName: "logs") pod "04cde7cd-7f43-4870-a4fc-c8bdb0b4188a" (UID: "04cde7cd-7f43-4870-a4fc-c8bdb0b4188a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:47:30 crc kubenswrapper[4755]: I1210 15:47:30.707439 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04cde7cd-7f43-4870-a4fc-c8bdb0b4188a-kube-api-access-6zzmg" (OuterVolumeSpecName: "kube-api-access-6zzmg") pod "04cde7cd-7f43-4870-a4fc-c8bdb0b4188a" (UID: "04cde7cd-7f43-4870-a4fc-c8bdb0b4188a"). InnerVolumeSpecName "kube-api-access-6zzmg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:47:30 crc kubenswrapper[4755]: I1210 15:47:30.751675 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04cde7cd-7f43-4870-a4fc-c8bdb0b4188a-config-data" (OuterVolumeSpecName: "config-data") pod "04cde7cd-7f43-4870-a4fc-c8bdb0b4188a" (UID: "04cde7cd-7f43-4870-a4fc-c8bdb0b4188a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:47:30 crc kubenswrapper[4755]: I1210 15:47:30.761673 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04cde7cd-7f43-4870-a4fc-c8bdb0b4188a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "04cde7cd-7f43-4870-a4fc-c8bdb0b4188a" (UID: "04cde7cd-7f43-4870-a4fc-c8bdb0b4188a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:47:30 crc kubenswrapper[4755]: I1210 15:47:30.801268 4755 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04cde7cd-7f43-4870-a4fc-c8bdb0b4188a-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:30 crc kubenswrapper[4755]: I1210 15:47:30.801299 4755 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04cde7cd-7f43-4870-a4fc-c8bdb0b4188a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:30 crc kubenswrapper[4755]: I1210 15:47:30.801310 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zzmg\" (UniqueName: \"kubernetes.io/projected/04cde7cd-7f43-4870-a4fc-c8bdb0b4188a-kube-api-access-6zzmg\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:30 crc kubenswrapper[4755]: I1210 15:47:30.801318 4755 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04cde7cd-7f43-4870-a4fc-c8bdb0b4188a-logs\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:30 crc kubenswrapper[4755]: I1210 15:47:30.859177 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04cde7cd-7f43-4870-a4fc-c8bdb0b4188a-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "04cde7cd-7f43-4870-a4fc-c8bdb0b4188a" (UID: "04cde7cd-7f43-4870-a4fc-c8bdb0b4188a"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:47:30 crc kubenswrapper[4755]: I1210 15:47:30.904217 4755 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/04cde7cd-7f43-4870-a4fc-c8bdb0b4188a-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:31 crc kubenswrapper[4755]: I1210 15:47:31.023998 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 10 15:47:31 crc kubenswrapper[4755]: I1210 15:47:31.050772 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 10 15:47:31 crc kubenswrapper[4755]: I1210 15:47:31.063618 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 10 15:47:31 crc kubenswrapper[4755]: I1210 15:47:31.104231 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 10 15:47:31 crc kubenswrapper[4755]: E1210 15:47:31.104711 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04cde7cd-7f43-4870-a4fc-c8bdb0b4188a" containerName="nova-metadata-log" Dec 10 15:47:31 crc kubenswrapper[4755]: I1210 15:47:31.104733 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="04cde7cd-7f43-4870-a4fc-c8bdb0b4188a" containerName="nova-metadata-log" Dec 10 15:47:31 crc kubenswrapper[4755]: E1210 15:47:31.104758 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79ff0629-ddf7-481b-be4f-58d4023b0ae7" containerName="nova-manage" Dec 10 15:47:31 crc kubenswrapper[4755]: I1210 15:47:31.104764 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="79ff0629-ddf7-481b-be4f-58d4023b0ae7" containerName="nova-manage" Dec 10 15:47:31 crc kubenswrapper[4755]: E1210 15:47:31.104787 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14ae9d17-ca94-4a4a-8148-f24ad4a8f268" containerName="nova-scheduler-scheduler" Dec 10 15:47:31 crc kubenswrapper[4755]: I1210 15:47:31.104793 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="14ae9d17-ca94-4a4a-8148-f24ad4a8f268" containerName="nova-scheduler-scheduler" Dec 10 15:47:31 crc kubenswrapper[4755]: E1210 15:47:31.104801 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04cde7cd-7f43-4870-a4fc-c8bdb0b4188a" containerName="nova-metadata-metadata" Dec 10 15:47:31 crc kubenswrapper[4755]: I1210 15:47:31.104807 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="04cde7cd-7f43-4870-a4fc-c8bdb0b4188a" containerName="nova-metadata-metadata" Dec 10 15:47:31 crc kubenswrapper[4755]: I1210 15:47:31.104998 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="79ff0629-ddf7-481b-be4f-58d4023b0ae7" containerName="nova-manage" Dec 10 15:47:31 crc kubenswrapper[4755]: I1210 15:47:31.105022 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="04cde7cd-7f43-4870-a4fc-c8bdb0b4188a" containerName="nova-metadata-metadata" Dec 10 15:47:31 crc kubenswrapper[4755]: I1210 15:47:31.105041 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="04cde7cd-7f43-4870-a4fc-c8bdb0b4188a" containerName="nova-metadata-log" Dec 10 15:47:31 crc kubenswrapper[4755]: I1210 15:47:31.105060 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="14ae9d17-ca94-4a4a-8148-f24ad4a8f268" containerName="nova-scheduler-scheduler" Dec 10 15:47:31 crc kubenswrapper[4755]: I1210 15:47:31.114291 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjxck\" (UniqueName: \"kubernetes.io/projected/14ae9d17-ca94-4a4a-8148-f24ad4a8f268-kube-api-access-jjxck\") pod \"14ae9d17-ca94-4a4a-8148-f24ad4a8f268\" (UID: \"14ae9d17-ca94-4a4a-8148-f24ad4a8f268\") " Dec 10 15:47:31 crc kubenswrapper[4755]: I1210 15:47:31.114662 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/14ae9d17-ca94-4a4a-8148-f24ad4a8f268-config-data\") pod \"14ae9d17-ca94-4a4a-8148-f24ad4a8f268\" (UID: \"14ae9d17-ca94-4a4a-8148-f24ad4a8f268\") " Dec 10 15:47:31 crc kubenswrapper[4755]: I1210 15:47:31.114737 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14ae9d17-ca94-4a4a-8148-f24ad4a8f268-combined-ca-bundle\") pod \"14ae9d17-ca94-4a4a-8148-f24ad4a8f268\" (UID: \"14ae9d17-ca94-4a4a-8148-f24ad4a8f268\") " Dec 10 15:47:31 crc kubenswrapper[4755]: I1210 15:47:31.116251 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 10 15:47:31 crc kubenswrapper[4755]: I1210 15:47:31.135914 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 10 15:47:31 crc kubenswrapper[4755]: I1210 15:47:31.136537 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 10 15:47:31 crc kubenswrapper[4755]: I1210 15:47:31.139028 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14ae9d17-ca94-4a4a-8148-f24ad4a8f268-kube-api-access-jjxck" (OuterVolumeSpecName: "kube-api-access-jjxck") pod "14ae9d17-ca94-4a4a-8148-f24ad4a8f268" (UID: "14ae9d17-ca94-4a4a-8148-f24ad4a8f268"). InnerVolumeSpecName "kube-api-access-jjxck". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:47:31 crc kubenswrapper[4755]: I1210 15:47:31.164050 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 10 15:47:31 crc kubenswrapper[4755]: I1210 15:47:31.223781 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6457b1e1-42e4-46b2-bc4b-6bbd9451131e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6457b1e1-42e4-46b2-bc4b-6bbd9451131e\") " pod="openstack/nova-metadata-0" Dec 10 15:47:31 crc kubenswrapper[4755]: I1210 15:47:31.223878 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6457b1e1-42e4-46b2-bc4b-6bbd9451131e-logs\") pod \"nova-metadata-0\" (UID: \"6457b1e1-42e4-46b2-bc4b-6bbd9451131e\") " pod="openstack/nova-metadata-0" Dec 10 15:47:31 crc kubenswrapper[4755]: I1210 15:47:31.223956 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6457b1e1-42e4-46b2-bc4b-6bbd9451131e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6457b1e1-42e4-46b2-bc4b-6bbd9451131e\") " pod="openstack/nova-metadata-0" Dec 10 15:47:31 crc kubenswrapper[4755]: I1210 15:47:31.224026 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6457b1e1-42e4-46b2-bc4b-6bbd9451131e-config-data\") pod \"nova-metadata-0\" (UID: \"6457b1e1-42e4-46b2-bc4b-6bbd9451131e\") " pod="openstack/nova-metadata-0" Dec 10 15:47:31 crc kubenswrapper[4755]: I1210 15:47:31.224054 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dwz6\" (UniqueName: \"kubernetes.io/projected/6457b1e1-42e4-46b2-bc4b-6bbd9451131e-kube-api-access-6dwz6\") pod \"nova-metadata-0\" (UID: \"6457b1e1-42e4-46b2-bc4b-6bbd9451131e\") " 
pod="openstack/nova-metadata-0" Dec 10 15:47:31 crc kubenswrapper[4755]: I1210 15:47:31.224116 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jjxck\" (UniqueName: \"kubernetes.io/projected/14ae9d17-ca94-4a4a-8148-f24ad4a8f268-kube-api-access-jjxck\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:31 crc kubenswrapper[4755]: I1210 15:47:31.232692 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14ae9d17-ca94-4a4a-8148-f24ad4a8f268-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "14ae9d17-ca94-4a4a-8148-f24ad4a8f268" (UID: "14ae9d17-ca94-4a4a-8148-f24ad4a8f268"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:47:31 crc kubenswrapper[4755]: I1210 15:47:31.310853 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14ae9d17-ca94-4a4a-8148-f24ad4a8f268-config-data" (OuterVolumeSpecName: "config-data") pod "14ae9d17-ca94-4a4a-8148-f24ad4a8f268" (UID: "14ae9d17-ca94-4a4a-8148-f24ad4a8f268"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:47:31 crc kubenswrapper[4755]: I1210 15:47:31.329866 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6457b1e1-42e4-46b2-bc4b-6bbd9451131e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6457b1e1-42e4-46b2-bc4b-6bbd9451131e\") " pod="openstack/nova-metadata-0" Dec 10 15:47:31 crc kubenswrapper[4755]: I1210 15:47:31.329971 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6457b1e1-42e4-46b2-bc4b-6bbd9451131e-logs\") pod \"nova-metadata-0\" (UID: \"6457b1e1-42e4-46b2-bc4b-6bbd9451131e\") " pod="openstack/nova-metadata-0" Dec 10 15:47:31 crc kubenswrapper[4755]: I1210 15:47:31.330062 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6457b1e1-42e4-46b2-bc4b-6bbd9451131e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6457b1e1-42e4-46b2-bc4b-6bbd9451131e\") " pod="openstack/nova-metadata-0" Dec 10 15:47:31 crc kubenswrapper[4755]: I1210 15:47:31.330113 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6457b1e1-42e4-46b2-bc4b-6bbd9451131e-config-data\") pod \"nova-metadata-0\" (UID: \"6457b1e1-42e4-46b2-bc4b-6bbd9451131e\") " pod="openstack/nova-metadata-0" Dec 10 15:47:31 crc kubenswrapper[4755]: I1210 15:47:31.330141 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dwz6\" (UniqueName: \"kubernetes.io/projected/6457b1e1-42e4-46b2-bc4b-6bbd9451131e-kube-api-access-6dwz6\") pod \"nova-metadata-0\" (UID: \"6457b1e1-42e4-46b2-bc4b-6bbd9451131e\") " pod="openstack/nova-metadata-0" Dec 10 15:47:31 crc kubenswrapper[4755]: I1210 15:47:31.330219 4755 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14ae9d17-ca94-4a4a-8148-f24ad4a8f268-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:31 crc kubenswrapper[4755]: I1210 15:47:31.330229 4755 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14ae9d17-ca94-4a4a-8148-f24ad4a8f268-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 
15:47:31 crc kubenswrapper[4755]: I1210 15:47:31.331315 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6457b1e1-42e4-46b2-bc4b-6bbd9451131e-logs\") pod \"nova-metadata-0\" (UID: \"6457b1e1-42e4-46b2-bc4b-6bbd9451131e\") " pod="openstack/nova-metadata-0" Dec 10 15:47:31 crc kubenswrapper[4755]: I1210 15:47:31.337348 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6457b1e1-42e4-46b2-bc4b-6bbd9451131e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6457b1e1-42e4-46b2-bc4b-6bbd9451131e\") " pod="openstack/nova-metadata-0" Dec 10 15:47:31 crc kubenswrapper[4755]: I1210 15:47:31.338799 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6457b1e1-42e4-46b2-bc4b-6bbd9451131e-config-data\") pod \"nova-metadata-0\" (UID: \"6457b1e1-42e4-46b2-bc4b-6bbd9451131e\") " pod="openstack/nova-metadata-0" Dec 10 15:47:31 crc kubenswrapper[4755]: I1210 15:47:31.350143 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6457b1e1-42e4-46b2-bc4b-6bbd9451131e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6457b1e1-42e4-46b2-bc4b-6bbd9451131e\") " pod="openstack/nova-metadata-0" Dec 10 15:47:31 crc kubenswrapper[4755]: I1210 15:47:31.384005 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dwz6\" (UniqueName: \"kubernetes.io/projected/6457b1e1-42e4-46b2-bc4b-6bbd9451131e-kube-api-access-6dwz6\") pod \"nova-metadata-0\" (UID: \"6457b1e1-42e4-46b2-bc4b-6bbd9451131e\") " pod="openstack/nova-metadata-0" Dec 10 15:47:31 crc kubenswrapper[4755]: I1210 15:47:31.625048 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 10 15:47:31 crc kubenswrapper[4755]: I1210 15:47:31.625500 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"14ae9d17-ca94-4a4a-8148-f24ad4a8f268","Type":"ContainerDied","Data":"c8ac9fc09600e932189eaf102e994da8ead58aaf2389c682648cfbb1afd55d3b"} Dec 10 15:47:31 crc kubenswrapper[4755]: I1210 15:47:31.625590 4755 scope.go:117] "RemoveContainer" containerID="13e8855d5ddcd82893c087a54ffbfd319e8a9a2e0328720840004fa890f48894" Dec 10 15:47:31 crc kubenswrapper[4755]: I1210 15:47:31.631181 4755 generic.go:334] "Generic (PLEG): container finished" podID="d8d3202f-3176-4aa8-93db-7461617d75be" containerID="ef88933ff432db51ec3fdd8d3ae77a9d0af47d9f412d3818a05d9a9e6200517b" exitCode=0 Dec 10 15:47:31 crc kubenswrapper[4755]: I1210 15:47:31.631223 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d8d3202f-3176-4aa8-93db-7461617d75be","Type":"ContainerDied","Data":"ef88933ff432db51ec3fdd8d3ae77a9d0af47d9f412d3818a05d9a9e6200517b"} Dec 10 15:47:31 crc kubenswrapper[4755]: I1210 15:47:31.645823 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 10 15:47:31 crc kubenswrapper[4755]: I1210 15:47:31.691825 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 10 15:47:31 crc kubenswrapper[4755]: I1210 15:47:31.700644 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 10 15:47:31 crc kubenswrapper[4755]: I1210 15:47:31.725666 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 10 15:47:31 crc kubenswrapper[4755]: I1210 15:47:31.727222 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 10 15:47:31 crc kubenswrapper[4755]: I1210 15:47:31.730087 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 10 15:47:31 crc kubenswrapper[4755]: I1210 15:47:31.743189 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 10 15:47:31 crc kubenswrapper[4755]: I1210 15:47:31.790045 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04cde7cd-7f43-4870-a4fc-c8bdb0b4188a" path="/var/lib/kubelet/pods/04cde7cd-7f43-4870-a4fc-c8bdb0b4188a/volumes" Dec 10 15:47:31 crc kubenswrapper[4755]: I1210 15:47:31.794218 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14ae9d17-ca94-4a4a-8148-f24ad4a8f268" path="/var/lib/kubelet/pods/14ae9d17-ca94-4a4a-8148-f24ad4a8f268/volumes" Dec 10 15:47:31 crc kubenswrapper[4755]: I1210 15:47:31.845817 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59jvl\" (UniqueName: \"kubernetes.io/projected/ae382d2b-a255-4ec6-8bf4-d70a8d3a7a4e-kube-api-access-59jvl\") pod \"nova-scheduler-0\" (UID: \"ae382d2b-a255-4ec6-8bf4-d70a8d3a7a4e\") " pod="openstack/nova-scheduler-0" Dec 10 15:47:31 crc kubenswrapper[4755]: I1210 15:47:31.845997 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae382d2b-a255-4ec6-8bf4-d70a8d3a7a4e-config-data\") pod \"nova-scheduler-0\" (UID: \"ae382d2b-a255-4ec6-8bf4-d70a8d3a7a4e\") " pod="openstack/nova-scheduler-0" Dec 10 15:47:31 crc kubenswrapper[4755]: I1210 15:47:31.846178 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae382d2b-a255-4ec6-8bf4-d70a8d3a7a4e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ae382d2b-a255-4ec6-8bf4-d70a8d3a7a4e\") " pod="openstack/nova-scheduler-0" Dec 10 15:47:31 crc kubenswrapper[4755]: I1210 15:47:31.948498 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae382d2b-a255-4ec6-8bf4-d70a8d3a7a4e-config-data\") pod \"nova-scheduler-0\" (UID: \"ae382d2b-a255-4ec6-8bf4-d70a8d3a7a4e\") " pod="openstack/nova-scheduler-0" Dec 10 15:47:31 crc kubenswrapper[4755]: I1210 15:47:31.948617 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae382d2b-a255-4ec6-8bf4-d70a8d3a7a4e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ae382d2b-a255-4ec6-8bf4-d70a8d3a7a4e\") " pod="openstack/nova-scheduler-0" Dec 10 15:47:31 crc kubenswrapper[4755]: I1210 15:47:31.948711 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-59jvl\" (UniqueName: \"kubernetes.io/projected/ae382d2b-a255-4ec6-8bf4-d70a8d3a7a4e-kube-api-access-59jvl\") pod \"nova-scheduler-0\" (UID: \"ae382d2b-a255-4ec6-8bf4-d70a8d3a7a4e\") " pod="openstack/nova-scheduler-0" Dec 10 15:47:31 crc kubenswrapper[4755]: I1210 15:47:31.953878 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae382d2b-a255-4ec6-8bf4-d70a8d3a7a4e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ae382d2b-a255-4ec6-8bf4-d70a8d3a7a4e\") " pod="openstack/nova-scheduler-0" Dec 10 15:47:31 crc kubenswrapper[4755]: I1210 15:47:31.960767 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae382d2b-a255-4ec6-8bf4-d70a8d3a7a4e-config-data\") pod \"nova-scheduler-0\" (UID: \"ae382d2b-a255-4ec6-8bf4-d70a8d3a7a4e\") " pod="openstack/nova-scheduler-0" Dec 10 15:47:31 crc kubenswrapper[4755]: I1210 15:47:31.964843 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59jvl\" (UniqueName: \"kubernetes.io/projected/ae382d2b-a255-4ec6-8bf4-d70a8d3a7a4e-kube-api-access-59jvl\") pod \"nova-scheduler-0\" (UID: \"ae382d2b-a255-4ec6-8bf4-d70a8d3a7a4e\") " pod="openstack/nova-scheduler-0" Dec 10 15:47:32 crc kubenswrapper[4755]: I1210 15:47:32.051748 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 10 15:47:32 crc kubenswrapper[4755]: I1210 15:47:32.215269 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 10 15:47:32 crc kubenswrapper[4755]: I1210 15:47:32.355810 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8d3202f-3176-4aa8-93db-7461617d75be-combined-ca-bundle\") pod \"d8d3202f-3176-4aa8-93db-7461617d75be\" (UID: \"d8d3202f-3176-4aa8-93db-7461617d75be\") " Dec 10 15:47:32 crc kubenswrapper[4755]: I1210 15:47:32.356520 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8d3202f-3176-4aa8-93db-7461617d75be-logs\") pod \"d8d3202f-3176-4aa8-93db-7461617d75be\" (UID: \"d8d3202f-3176-4aa8-93db-7461617d75be\") " Dec 10 15:47:32 crc kubenswrapper[4755]: I1210 15:47:32.356565 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8d3202f-3176-4aa8-93db-7461617d75be-internal-tls-certs\") pod \"d8d3202f-3176-4aa8-93db-7461617d75be\" (UID: \"d8d3202f-3176-4aa8-93db-7461617d75be\") " Dec 10 15:47:32 crc kubenswrapper[4755]: I1210 15:47:32.356593 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvdpq\" (UniqueName: \"kubernetes.io/projected/d8d3202f-3176-4aa8-93db-7461617d75be-kube-api-access-tvdpq\") pod \"d8d3202f-3176-4aa8-93db-7461617d75be\" (UID: \"d8d3202f-3176-4aa8-93db-7461617d75be\") " Dec 10 15:47:32 crc kubenswrapper[4755]: I1210 15:47:32.356650 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8d3202f-3176-4aa8-93db-7461617d75be-public-tls-certs\") pod \"d8d3202f-3176-4aa8-93db-7461617d75be\" (UID: \"d8d3202f-3176-4aa8-93db-7461617d75be\") " Dec 10 15:47:32 crc kubenswrapper[4755]: I1210 15:47:32.356672 4755 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8d3202f-3176-4aa8-93db-7461617d75be-config-data\") pod \"d8d3202f-3176-4aa8-93db-7461617d75be\" (UID: \"d8d3202f-3176-4aa8-93db-7461617d75be\") " Dec 10 15:47:32 crc kubenswrapper[4755]: I1210 15:47:32.356987 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8d3202f-3176-4aa8-93db-7461617d75be-logs" (OuterVolumeSpecName: "logs") pod "d8d3202f-3176-4aa8-93db-7461617d75be" (UID: "d8d3202f-3176-4aa8-93db-7461617d75be"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:47:32 crc kubenswrapper[4755]: I1210 15:47:32.357320 4755 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8d3202f-3176-4aa8-93db-7461617d75be-logs\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:32 crc kubenswrapper[4755]: I1210 15:47:32.361517 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8d3202f-3176-4aa8-93db-7461617d75be-kube-api-access-tvdpq" (OuterVolumeSpecName: "kube-api-access-tvdpq") pod "d8d3202f-3176-4aa8-93db-7461617d75be" (UID: "d8d3202f-3176-4aa8-93db-7461617d75be"). InnerVolumeSpecName "kube-api-access-tvdpq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:47:32 crc kubenswrapper[4755]: I1210 15:47:32.392677 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8d3202f-3176-4aa8-93db-7461617d75be-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d8d3202f-3176-4aa8-93db-7461617d75be" (UID: "d8d3202f-3176-4aa8-93db-7461617d75be"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:47:32 crc kubenswrapper[4755]: I1210 15:47:32.394970 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8d3202f-3176-4aa8-93db-7461617d75be-config-data" (OuterVolumeSpecName: "config-data") pod "d8d3202f-3176-4aa8-93db-7461617d75be" (UID: "d8d3202f-3176-4aa8-93db-7461617d75be"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:47:32 crc kubenswrapper[4755]: I1210 15:47:32.440046 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8d3202f-3176-4aa8-93db-7461617d75be-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d8d3202f-3176-4aa8-93db-7461617d75be" (UID: "d8d3202f-3176-4aa8-93db-7461617d75be"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:47:32 crc kubenswrapper[4755]: I1210 15:47:32.459219 4755 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8d3202f-3176-4aa8-93db-7461617d75be-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:32 crc kubenswrapper[4755]: I1210 15:47:32.459251 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvdpq\" (UniqueName: \"kubernetes.io/projected/d8d3202f-3176-4aa8-93db-7461617d75be-kube-api-access-tvdpq\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:32 crc kubenswrapper[4755]: I1210 15:47:32.459265 4755 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8d3202f-3176-4aa8-93db-7461617d75be-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:32 crc kubenswrapper[4755]: I1210 15:47:32.459273 4755 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8d3202f-3176-4aa8-93db-7461617d75be-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:32 crc kubenswrapper[4755]: I1210 15:47:32.476201 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8d3202f-3176-4aa8-93db-7461617d75be-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "d8d3202f-3176-4aa8-93db-7461617d75be" (UID: "d8d3202f-3176-4aa8-93db-7461617d75be"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:47:32 crc kubenswrapper[4755]: I1210 15:47:32.561803 4755 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8d3202f-3176-4aa8-93db-7461617d75be-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:32 crc kubenswrapper[4755]: I1210 15:47:32.692531 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 10 15:47:32 crc kubenswrapper[4755]: I1210 15:47:32.692543 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d8d3202f-3176-4aa8-93db-7461617d75be","Type":"ContainerDied","Data":"b917bd7484a453c1025206095b2e616ade1926c2ac029106eb813efa6feed82d"} Dec 10 15:47:32 crc kubenswrapper[4755]: I1210 15:47:32.693292 4755 scope.go:117] "RemoveContainer" containerID="ef88933ff432db51ec3fdd8d3ae77a9d0af47d9f412d3818a05d9a9e6200517b" Dec 10 15:47:32 crc kubenswrapper[4755]: I1210 15:47:32.704296 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-85977f6c59-57kcn" event={"ID":"9778155f-9c4a-4cb4-9085-12d776d78435","Type":"ContainerStarted","Data":"c584690dd349726272bedf88c63d1eac00ce1b0cf364a9a13654832cf4c77f90"} Dec 10 15:47:32 crc kubenswrapper[4755]: I1210 15:47:32.704352 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-85977f6c59-57kcn" event={"ID":"9778155f-9c4a-4cb4-9085-12d776d78435","Type":"ContainerStarted","Data":"50ebf44b14ce8a47ca2117678b8c7de8624398e9781b1d16df545f1d2d01af86"} Dec 10 15:47:32 crc kubenswrapper[4755]: I1210 15:47:32.704656 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-85977f6c59-57kcn" Dec 10 15:47:32 crc kubenswrapper[4755]: I1210 15:47:32.738203 4755 scope.go:117] "RemoveContainer" containerID="6b1b3f7b780a709d4f26c3d8772d497eb46912c68bf802f4718655620e7b48e5" Dec 10 15:47:32 crc kubenswrapper[4755]: I1210 15:47:32.753742 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 10 15:47:32 crc kubenswrapper[4755]: I1210 15:47:32.765148 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 10 15:47:32 crc kubenswrapper[4755]: I1210 15:47:32.773765 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-85977f6c59-57kcn" podStartSLOduration=1.934650451 podStartE2EDuration="13.773151848s" podCreationTimestamp="2025-12-10 15:47:19 +0000 UTC" firstStartedPulling="2025-12-10 15:47:20.221665189 +0000 UTC m=+1436.822548821" lastFinishedPulling="2025-12-10 15:47:32.060166586 +0000 UTC m=+1448.661050218" observedRunningTime="2025-12-10 15:47:32.728719577 +0000 UTC m=+1449.329603229" watchObservedRunningTime="2025-12-10 15:47:32.773151848 +0000 UTC m=+1449.374035480" Dec 10 15:47:32 crc kubenswrapper[4755]: I1210 15:47:32.800498 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 10 15:47:32 crc kubenswrapper[4755]: I1210 15:47:32.811706 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 10 15:47:32 crc kubenswrapper[4755]: I1210 15:47:32.820595 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 10 15:47:32 crc kubenswrapper[4755]: E1210 15:47:32.821075 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8d3202f-3176-4aa8-93db-7461617d75be" containerName="nova-api-api" Dec 10 15:47:32 crc kubenswrapper[4755]: I1210 15:47:32.821094 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8d3202f-3176-4aa8-93db-7461617d75be" containerName="nova-api-api" Dec 10 15:47:32 crc kubenswrapper[4755]: E1210 15:47:32.821117 4755 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d8d3202f-3176-4aa8-93db-7461617d75be" containerName="nova-api-log" Dec 10 15:47:32 crc kubenswrapper[4755]: I1210 15:47:32.821125 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8d3202f-3176-4aa8-93db-7461617d75be" containerName="nova-api-log" Dec 10 15:47:32 crc kubenswrapper[4755]: I1210 15:47:32.821317 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8d3202f-3176-4aa8-93db-7461617d75be" containerName="nova-api-log" Dec 10 15:47:32 crc kubenswrapper[4755]: I1210 15:47:32.821337 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8d3202f-3176-4aa8-93db-7461617d75be" containerName="nova-api-api" Dec 10 15:47:32 crc kubenswrapper[4755]: I1210 15:47:32.822549 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 10 15:47:32 crc kubenswrapper[4755]: I1210 15:47:32.825652 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 10 15:47:32 crc kubenswrapper[4755]: I1210 15:47:32.825791 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Dec 10 15:47:32 crc kubenswrapper[4755]: I1210 15:47:32.825915 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Dec 10 15:47:32 crc kubenswrapper[4755]: I1210 15:47:32.834040 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 10 15:47:32 crc kubenswrapper[4755]: I1210 15:47:32.984294 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/203a68f8-a70b-45c0-8fd5-37c56b10fe90-logs\") pod \"nova-api-0\" (UID: \"203a68f8-a70b-45c0-8fd5-37c56b10fe90\") " pod="openstack/nova-api-0" Dec 10 15:47:32 crc kubenswrapper[4755]: I1210 15:47:32.984364 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/203a68f8-a70b-45c0-8fd5-37c56b10fe90-public-tls-certs\") pod \"nova-api-0\" (UID: \"203a68f8-a70b-45c0-8fd5-37c56b10fe90\") " pod="openstack/nova-api-0" Dec 10 15:47:32 crc kubenswrapper[4755]: I1210 15:47:32.984387 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/203a68f8-a70b-45c0-8fd5-37c56b10fe90-internal-tls-certs\") pod \"nova-api-0\" (UID: \"203a68f8-a70b-45c0-8fd5-37c56b10fe90\") " pod="openstack/nova-api-0" Dec 10 15:47:32 crc kubenswrapper[4755]: I1210 15:47:32.984429 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvf2s\" (UniqueName: \"kubernetes.io/projected/203a68f8-a70b-45c0-8fd5-37c56b10fe90-kube-api-access-bvf2s\") pod \"nova-api-0\" (UID: \"203a68f8-a70b-45c0-8fd5-37c56b10fe90\") " pod="openstack/nova-api-0" Dec 10 15:47:32 crc kubenswrapper[4755]: I1210 15:47:32.984449 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/203a68f8-a70b-45c0-8fd5-37c56b10fe90-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"203a68f8-a70b-45c0-8fd5-37c56b10fe90\") " pod="openstack/nova-api-0" Dec 10 15:47:32 crc kubenswrapper[4755]: I1210 15:47:32.984490 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/203a68f8-a70b-45c0-8fd5-37c56b10fe90-config-data\") pod \"nova-api-0\" (UID: \"203a68f8-a70b-45c0-8fd5-37c56b10fe90\") " pod="openstack/nova-api-0" Dec 10 15:47:33 crc kubenswrapper[4755]: I1210 15:47:33.085833 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvf2s\" (UniqueName: \"kubernetes.io/projected/203a68f8-a70b-45c0-8fd5-37c56b10fe90-kube-api-access-bvf2s\") pod \"nova-api-0\" (UID: \"203a68f8-a70b-45c0-8fd5-37c56b10fe90\") " pod="openstack/nova-api-0" Dec 10 15:47:33 crc kubenswrapper[4755]: I1210 15:47:33.085893 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/203a68f8-a70b-45c0-8fd5-37c56b10fe90-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"203a68f8-a70b-45c0-8fd5-37c56b10fe90\") " pod="openstack/nova-api-0" Dec 10 15:47:33 crc kubenswrapper[4755]: I1210 15:47:33.085932 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/203a68f8-a70b-45c0-8fd5-37c56b10fe90-config-data\") pod \"nova-api-0\" (UID: \"203a68f8-a70b-45c0-8fd5-37c56b10fe90\") " pod="openstack/nova-api-0" Dec 10 15:47:33 crc kubenswrapper[4755]: I1210 15:47:33.086081 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/203a68f8-a70b-45c0-8fd5-37c56b10fe90-logs\") pod \"nova-api-0\" (UID: \"203a68f8-a70b-45c0-8fd5-37c56b10fe90\") " pod="openstack/nova-api-0" Dec 10 15:47:33 crc kubenswrapper[4755]: I1210 15:47:33.086142 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/203a68f8-a70b-45c0-8fd5-37c56b10fe90-public-tls-certs\") pod \"nova-api-0\" (UID: \"203a68f8-a70b-45c0-8fd5-37c56b10fe90\") " pod="openstack/nova-api-0" Dec 10 15:47:33 crc kubenswrapper[4755]: I1210 15:47:33.086165 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/203a68f8-a70b-45c0-8fd5-37c56b10fe90-internal-tls-certs\") pod \"nova-api-0\" (UID: \"203a68f8-a70b-45c0-8fd5-37c56b10fe90\") " pod="openstack/nova-api-0" Dec 10 15:47:33 crc kubenswrapper[4755]: I1210 15:47:33.086756 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/203a68f8-a70b-45c0-8fd5-37c56b10fe90-logs\") pod \"nova-api-0\" (UID: \"203a68f8-a70b-45c0-8fd5-37c56b10fe90\") " pod="openstack/nova-api-0" Dec 10 15:47:33 crc kubenswrapper[4755]: I1210 15:47:33.089739 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/203a68f8-a70b-45c0-8fd5-37c56b10fe90-config-data\") pod \"nova-api-0\" (UID: \"203a68f8-a70b-45c0-8fd5-37c56b10fe90\") " pod="openstack/nova-api-0" Dec 10 15:47:33 crc kubenswrapper[4755]: I1210 15:47:33.089864 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/203a68f8-a70b-45c0-8fd5-37c56b10fe90-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"203a68f8-a70b-45c0-8fd5-37c56b10fe90\") " pod="openstack/nova-api-0" Dec 10 15:47:33 crc kubenswrapper[4755]: I1210 15:47:33.090719 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/203a68f8-a70b-45c0-8fd5-37c56b10fe90-public-tls-certs\") pod 
\"nova-api-0\" (UID: \"203a68f8-a70b-45c0-8fd5-37c56b10fe90\") " pod="openstack/nova-api-0" Dec 10 15:47:33 crc kubenswrapper[4755]: I1210 15:47:33.092883 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/203a68f8-a70b-45c0-8fd5-37c56b10fe90-internal-tls-certs\") pod \"nova-api-0\" (UID: \"203a68f8-a70b-45c0-8fd5-37c56b10fe90\") " pod="openstack/nova-api-0" Dec 10 15:47:33 crc kubenswrapper[4755]: I1210 15:47:33.102657 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvf2s\" (UniqueName: \"kubernetes.io/projected/203a68f8-a70b-45c0-8fd5-37c56b10fe90-kube-api-access-bvf2s\") pod \"nova-api-0\" (UID: \"203a68f8-a70b-45c0-8fd5-37c56b10fe90\") " pod="openstack/nova-api-0" Dec 10 15:47:33 crc kubenswrapper[4755]: I1210 15:47:33.212019 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 10 15:47:33 crc kubenswrapper[4755]: I1210 15:47:33.717216 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6457b1e1-42e4-46b2-bc4b-6bbd9451131e","Type":"ContainerStarted","Data":"c724a124350f7999c67225bbe578c2465292736153480751f53a94c0cfb5b196"} Dec 10 15:47:33 crc kubenswrapper[4755]: I1210 15:47:33.718485 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6457b1e1-42e4-46b2-bc4b-6bbd9451131e","Type":"ContainerStarted","Data":"acdf86c7815ec06ebf9d7a3b454f9f8aa15f088197feac6de3f19c1fe5314b12"} Dec 10 15:47:33 crc kubenswrapper[4755]: I1210 15:47:33.718570 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6457b1e1-42e4-46b2-bc4b-6bbd9451131e","Type":"ContainerStarted","Data":"fe5f1f87cc3adb48111bf57c173eea1c2e5b57d5efe1b9f79d65650749589f48"} Dec 10 15:47:33 crc kubenswrapper[4755]: I1210 15:47:33.721253 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ae382d2b-a255-4ec6-8bf4-d70a8d3a7a4e","Type":"ContainerStarted","Data":"ea2387f52c509c42e31ade18071c39a7dd3f915d3e3f6ab4f14c1e89d983a9fd"} Dec 10 15:47:33 crc kubenswrapper[4755]: I1210 15:47:33.721303 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ae382d2b-a255-4ec6-8bf4-d70a8d3a7a4e","Type":"ContainerStarted","Data":"5e196e01415be6da7c6f0e7b6218461e0083105ea90dff7b255b3bd09d65ed33"} Dec 10 15:47:33 crc kubenswrapper[4755]: W1210 15:47:33.732541 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod203a68f8_a70b_45c0_8fd5_37c56b10fe90.slice/crio-8afdca1d9b58805695dc84f2439a4e1a2150fd3b53126c8f123826b16847a2cd WatchSource:0}: Error finding container 8afdca1d9b58805695dc84f2439a4e1a2150fd3b53126c8f123826b16847a2cd: Status 404 returned error can't find the container with id 8afdca1d9b58805695dc84f2439a4e1a2150fd3b53126c8f123826b16847a2cd Dec 10 15:47:33 crc kubenswrapper[4755]: I1210 15:47:33.733170 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 10 15:47:33 crc kubenswrapper[4755]: I1210 15:47:33.758552 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.758529303 podStartE2EDuration="2.758529303s" podCreationTimestamp="2025-12-10 15:47:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2025-12-10 15:47:33.753667661 +0000 UTC m=+1450.354551313" watchObservedRunningTime="2025-12-10 15:47:33.758529303 +0000 UTC m=+1450.359412935" Dec 10 15:47:33 crc kubenswrapper[4755]: I1210 15:47:33.783874 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.7838529530000002 podStartE2EDuration="2.783852953s" podCreationTimestamp="2025-12-10 15:47:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:47:33.769621905 +0000 UTC m=+1450.370505557" watchObservedRunningTime="2025-12-10 15:47:33.783852953 +0000 UTC m=+1450.384736585" Dec 10 15:47:33 crc kubenswrapper[4755]: I1210 15:47:33.784912 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8d3202f-3176-4aa8-93db-7461617d75be" path="/var/lib/kubelet/pods/d8d3202f-3176-4aa8-93db-7461617d75be/volumes" Dec 10 15:47:34 crc kubenswrapper[4755]: I1210 15:47:34.740895 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"203a68f8-a70b-45c0-8fd5-37c56b10fe90","Type":"ContainerStarted","Data":"43aa2cd17e180d13bc241ca339477bbd9c9aba190c50cf14f501284be81959a6"} Dec 10 15:47:34 crc kubenswrapper[4755]: I1210 15:47:34.741178 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"203a68f8-a70b-45c0-8fd5-37c56b10fe90","Type":"ContainerStarted","Data":"1c85d8e7710c3c2c6fae6ad3b0219df12369fe35ed0135d00b5342ffd5ea127a"} Dec 10 15:47:34 crc kubenswrapper[4755]: I1210 15:47:34.741189 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"203a68f8-a70b-45c0-8fd5-37c56b10fe90","Type":"ContainerStarted","Data":"8afdca1d9b58805695dc84f2439a4e1a2150fd3b53126c8f123826b16847a2cd"} Dec 10 15:47:34 crc kubenswrapper[4755]: I1210 15:47:34.770337 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.7703128489999997 podStartE2EDuration="2.770312849s" podCreationTimestamp="2025-12-10 15:47:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:47:34.761169909 +0000 UTC m=+1451.362053551" watchObservedRunningTime="2025-12-10 15:47:34.770312849 +0000 UTC m=+1451.371196501" Dec 10 15:47:36 crc kubenswrapper[4755]: I1210 15:47:36.646452 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 10 15:47:36 crc kubenswrapper[4755]: I1210 15:47:36.646798 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 10 15:47:37 crc kubenswrapper[4755]: I1210 15:47:37.051939 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 10 15:47:39 crc kubenswrapper[4755]: I1210 15:47:39.547100 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-85977f6c59-57kcn" Dec 10 15:47:39 crc kubenswrapper[4755]: I1210 15:47:39.617554 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-55ff878876-td264"] Dec 10 15:47:39 crc kubenswrapper[4755]: I1210 15:47:39.618015 4755 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-operators-redhat/loki-operator-controller-manager-55ff878876-td264" podUID="ce2a9f6f-bc3d-472a-b820-51118827c3b6" containerName="manager" containerID="cri-o://4163142720a374e0fa97f3badc22b4c7558f15335f8c2be94d8f9b64e5404a0e" gracePeriod=10 Dec 10 15:47:39 crc kubenswrapper[4755]: I1210 15:47:39.618103 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operators-redhat/loki-operator-controller-manager-55ff878876-td264" podUID="ce2a9f6f-bc3d-472a-b820-51118827c3b6" containerName="kube-rbac-proxy" containerID="cri-o://8697664e7ecc536e3f3e4681f6d5d6d0c52ddaf3cf7f512805a95f14b1dfe796" gracePeriod=10 Dec 10 15:47:39 crc kubenswrapper[4755]: I1210 15:47:39.843830 4755 generic.go:334] "Generic (PLEG): container finished" podID="ce2a9f6f-bc3d-472a-b820-51118827c3b6" containerID="8697664e7ecc536e3f3e4681f6d5d6d0c52ddaf3cf7f512805a95f14b1dfe796" exitCode=0 Dec 10 15:47:39 crc kubenswrapper[4755]: I1210 15:47:39.844117 4755 generic.go:334] "Generic (PLEG): container finished" podID="ce2a9f6f-bc3d-472a-b820-51118827c3b6" containerID="4163142720a374e0fa97f3badc22b4c7558f15335f8c2be94d8f9b64e5404a0e" exitCode=0 Dec 10 15:47:39 crc kubenswrapper[4755]: I1210 15:47:39.844141 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-55ff878876-td264" event={"ID":"ce2a9f6f-bc3d-472a-b820-51118827c3b6","Type":"ContainerDied","Data":"8697664e7ecc536e3f3e4681f6d5d6d0c52ddaf3cf7f512805a95f14b1dfe796"} Dec 10 15:47:39 crc kubenswrapper[4755]: I1210 15:47:39.844169 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-55ff878876-td264" event={"ID":"ce2a9f6f-bc3d-472a-b820-51118827c3b6","Type":"ContainerDied","Data":"4163142720a374e0fa97f3badc22b4c7558f15335f8c2be94d8f9b64e5404a0e"} Dec 10 15:47:40 crc kubenswrapper[4755]: I1210 15:47:40.223528 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-55ff878876-td264" Dec 10 15:47:40 crc kubenswrapper[4755]: I1210 15:47:40.350198 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/ce2a9f6f-bc3d-472a-b820-51118827c3b6-manager-config\") pod \"ce2a9f6f-bc3d-472a-b820-51118827c3b6\" (UID: \"ce2a9f6f-bc3d-472a-b820-51118827c3b6\") " Dec 10 15:47:40 crc kubenswrapper[4755]: I1210 15:47:40.350427 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ce2a9f6f-bc3d-472a-b820-51118827c3b6-webhook-cert\") pod \"ce2a9f6f-bc3d-472a-b820-51118827c3b6\" (UID: \"ce2a9f6f-bc3d-472a-b820-51118827c3b6\") " Dec 10 15:47:40 crc kubenswrapper[4755]: I1210 15:47:40.350717 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ce2a9f6f-bc3d-472a-b820-51118827c3b6-loki-operator-metrics-cert\") pod \"ce2a9f6f-bc3d-472a-b820-51118827c3b6\" (UID: \"ce2a9f6f-bc3d-472a-b820-51118827c3b6\") " Dec 10 15:47:40 crc kubenswrapper[4755]: I1210 15:47:40.350790 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ce2a9f6f-bc3d-472a-b820-51118827c3b6-apiservice-cert\") pod \"ce2a9f6f-bc3d-472a-b820-51118827c3b6\" (UID: \"ce2a9f6f-bc3d-472a-b820-51118827c3b6\") " Dec 10 15:47:40 crc kubenswrapper[4755]: I1210 15:47:40.350872 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hggs4\" (UniqueName: \"kubernetes.io/projected/ce2a9f6f-bc3d-472a-b820-51118827c3b6-kube-api-access-hggs4\") pod \"ce2a9f6f-bc3d-472a-b820-51118827c3b6\" (UID: \"ce2a9f6f-bc3d-472a-b820-51118827c3b6\") " Dec 10 15:47:40 crc kubenswrapper[4755]: I1210 15:47:40.355552 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce2a9f6f-bc3d-472a-b820-51118827c3b6-kube-api-access-hggs4" (OuterVolumeSpecName: "kube-api-access-hggs4") pod "ce2a9f6f-bc3d-472a-b820-51118827c3b6" (UID: "ce2a9f6f-bc3d-472a-b820-51118827c3b6"). InnerVolumeSpecName "kube-api-access-hggs4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:47:40 crc kubenswrapper[4755]: I1210 15:47:40.355864 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce2a9f6f-bc3d-472a-b820-51118827c3b6-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "ce2a9f6f-bc3d-472a-b820-51118827c3b6" (UID: "ce2a9f6f-bc3d-472a-b820-51118827c3b6"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:47:40 crc kubenswrapper[4755]: I1210 15:47:40.366058 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce2a9f6f-bc3d-472a-b820-51118827c3b6-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "ce2a9f6f-bc3d-472a-b820-51118827c3b6" (UID: "ce2a9f6f-bc3d-472a-b820-51118827c3b6"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:47:40 crc kubenswrapper[4755]: I1210 15:47:40.366164 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce2a9f6f-bc3d-472a-b820-51118827c3b6-loki-operator-metrics-cert" (OuterVolumeSpecName: "loki-operator-metrics-cert") pod "ce2a9f6f-bc3d-472a-b820-51118827c3b6" (UID: "ce2a9f6f-bc3d-472a-b820-51118827c3b6"). InnerVolumeSpecName "loki-operator-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:47:40 crc kubenswrapper[4755]: I1210 15:47:40.382009 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce2a9f6f-bc3d-472a-b820-51118827c3b6-manager-config" (OuterVolumeSpecName: "manager-config") pod "ce2a9f6f-bc3d-472a-b820-51118827c3b6" (UID: "ce2a9f6f-bc3d-472a-b820-51118827c3b6"). InnerVolumeSpecName "manager-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:47:40 crc kubenswrapper[4755]: I1210 15:47:40.453001 4755 reconciler_common.go:293] "Volume detached for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ce2a9f6f-bc3d-472a-b820-51118827c3b6-loki-operator-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:40 crc kubenswrapper[4755]: I1210 15:47:40.453028 4755 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ce2a9f6f-bc3d-472a-b820-51118827c3b6-apiservice-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:40 crc kubenswrapper[4755]: I1210 15:47:40.453040 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hggs4\" (UniqueName: \"kubernetes.io/projected/ce2a9f6f-bc3d-472a-b820-51118827c3b6-kube-api-access-hggs4\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:40 crc kubenswrapper[4755]: I1210 15:47:40.453050 4755 reconciler_common.go:293] "Volume detached for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/ce2a9f6f-bc3d-472a-b820-51118827c3b6-manager-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:40 crc kubenswrapper[4755]: I1210 15:47:40.453059 4755 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ce2a9f6f-bc3d-472a-b820-51118827c3b6-webhook-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:40 crc kubenswrapper[4755]: I1210 15:47:40.856191 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-55ff878876-td264" event={"ID":"ce2a9f6f-bc3d-472a-b820-51118827c3b6","Type":"ContainerDied","Data":"ad74b737401375b4f7f3b7846199699f7b117d3af7691673e4042df7c49e5d13"} Dec 10 15:47:40 crc kubenswrapper[4755]: I1210 15:47:40.856247 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-55ff878876-td264" Dec 10 15:47:40 crc kubenswrapper[4755]: I1210 15:47:40.857108 4755 scope.go:117] "RemoveContainer" containerID="8697664e7ecc536e3f3e4681f6d5d6d0c52ddaf3cf7f512805a95f14b1dfe796" Dec 10 15:47:40 crc kubenswrapper[4755]: I1210 15:47:40.903157 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-55ff878876-td264"] Dec 10 15:47:40 crc kubenswrapper[4755]: I1210 15:47:40.904881 4755 scope.go:117] "RemoveContainer" containerID="4163142720a374e0fa97f3badc22b4c7558f15335f8c2be94d8f9b64e5404a0e" Dec 10 15:47:40 crc kubenswrapper[4755]: I1210 15:47:40.913555 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-55ff878876-td264"] Dec 10 15:47:41 crc kubenswrapper[4755]: I1210 15:47:41.645945 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 10 15:47:41 crc kubenswrapper[4755]: I1210 15:47:41.646275 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 10 15:47:41 crc kubenswrapper[4755]: I1210 15:47:41.770880 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce2a9f6f-bc3d-472a-b820-51118827c3b6" path="/var/lib/kubelet/pods/ce2a9f6f-bc3d-472a-b820-51118827c3b6/volumes" Dec 10 15:47:41 crc kubenswrapper[4755]: I1210 15:47:41.834404 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 10 15:47:42 crc kubenswrapper[4755]: I1210 15:47:42.052059 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 10 15:47:42 crc kubenswrapper[4755]: I1210 15:47:42.098286 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 10 15:47:42 crc kubenswrapper[4755]: I1210 15:47:42.727741 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="6457b1e1-42e4-46b2-bc4b-6bbd9451131e" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.230:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 10 15:47:42 crc kubenswrapper[4755]: I1210 15:47:42.727761 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="6457b1e1-42e4-46b2-bc4b-6bbd9451131e" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.230:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 10 15:47:42 crc kubenswrapper[4755]: I1210 15:47:42.925213 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 10 15:47:43 crc kubenswrapper[4755]: I1210 15:47:43.212691 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 10 15:47:43 crc kubenswrapper[4755]: I1210 15:47:43.212774 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 10 15:47:44 crc kubenswrapper[4755]: I1210 15:47:44.248494 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="203a68f8-a70b-45c0-8fd5-37c56b10fe90" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.232:8774/\": net/http: request canceled (Client.Timeout 
exceeded while awaiting headers)" Dec 10 15:47:44 crc kubenswrapper[4755]: I1210 15:47:44.249215 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="203a68f8-a70b-45c0-8fd5-37c56b10fe90" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.232:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 10 15:47:46 crc kubenswrapper[4755]: I1210 15:47:46.426770 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-558c4df967-tdf8t"] Dec 10 15:47:46 crc kubenswrapper[4755]: E1210 15:47:46.427653 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce2a9f6f-bc3d-472a-b820-51118827c3b6" containerName="kube-rbac-proxy" Dec 10 15:47:46 crc kubenswrapper[4755]: I1210 15:47:46.427670 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce2a9f6f-bc3d-472a-b820-51118827c3b6" containerName="kube-rbac-proxy" Dec 10 15:47:46 crc kubenswrapper[4755]: E1210 15:47:46.427709 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce2a9f6f-bc3d-472a-b820-51118827c3b6" containerName="manager" Dec 10 15:47:46 crc kubenswrapper[4755]: I1210 15:47:46.427717 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce2a9f6f-bc3d-472a-b820-51118827c3b6" containerName="manager" Dec 10 15:47:46 crc kubenswrapper[4755]: I1210 15:47:46.427969 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce2a9f6f-bc3d-472a-b820-51118827c3b6" containerName="manager" Dec 10 15:47:46 crc kubenswrapper[4755]: I1210 15:47:46.427995 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce2a9f6f-bc3d-472a-b820-51118827c3b6" containerName="kube-rbac-proxy" Dec 10 15:47:46 crc kubenswrapper[4755]: I1210 15:47:46.429463 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-558c4df967-tdf8t" Dec 10 15:47:46 crc kubenswrapper[4755]: I1210 15:47:46.455876 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-558c4df967-tdf8t"] Dec 10 15:47:46 crc kubenswrapper[4755]: I1210 15:47:46.600395 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/57ef8333-3c3c-4e02-ad27-24ccac555a55-manager-config\") pod \"loki-operator-controller-manager-558c4df967-tdf8t\" (UID: \"57ef8333-3c3c-4e02-ad27-24ccac555a55\") " pod="openshift-operators-redhat/loki-operator-controller-manager-558c4df967-tdf8t" Dec 10 15:47:46 crc kubenswrapper[4755]: I1210 15:47:46.600878 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/57ef8333-3c3c-4e02-ad27-24ccac555a55-webhook-cert\") pod \"loki-operator-controller-manager-558c4df967-tdf8t\" (UID: \"57ef8333-3c3c-4e02-ad27-24ccac555a55\") " pod="openshift-operators-redhat/loki-operator-controller-manager-558c4df967-tdf8t" Dec 10 15:47:46 crc kubenswrapper[4755]: I1210 15:47:46.600909 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5l42\" (UniqueName: \"kubernetes.io/projected/57ef8333-3c3c-4e02-ad27-24ccac555a55-kube-api-access-w5l42\") pod \"loki-operator-controller-manager-558c4df967-tdf8t\" (UID: \"57ef8333-3c3c-4e02-ad27-24ccac555a55\") " pod="openshift-operators-redhat/loki-operator-controller-manager-558c4df967-tdf8t" Dec 10 15:47:46 crc kubenswrapper[4755]: I1210 15:47:46.600944 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/57ef8333-3c3c-4e02-ad27-24ccac555a55-apiservice-cert\") pod \"loki-operator-controller-manager-558c4df967-tdf8t\" (UID: \"57ef8333-3c3c-4e02-ad27-24ccac555a55\") " pod="openshift-operators-redhat/loki-operator-controller-manager-558c4df967-tdf8t" Dec 10 15:47:46 crc kubenswrapper[4755]: I1210 15:47:46.601170 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/57ef8333-3c3c-4e02-ad27-24ccac555a55-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-558c4df967-tdf8t\" (UID: \"57ef8333-3c3c-4e02-ad27-24ccac555a55\") " pod="openshift-operators-redhat/loki-operator-controller-manager-558c4df967-tdf8t" Dec 10 15:47:46 crc kubenswrapper[4755]: I1210 15:47:46.703073 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/57ef8333-3c3c-4e02-ad27-24ccac555a55-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-558c4df967-tdf8t\" (UID: \"57ef8333-3c3c-4e02-ad27-24ccac555a55\") " pod="openshift-operators-redhat/loki-operator-controller-manager-558c4df967-tdf8t" Dec 10 15:47:46 crc kubenswrapper[4755]: I1210 15:47:46.703159 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/57ef8333-3c3c-4e02-ad27-24ccac555a55-manager-config\") pod \"loki-operator-controller-manager-558c4df967-tdf8t\" (UID: \"57ef8333-3c3c-4e02-ad27-24ccac555a55\") " 
pod="openshift-operators-redhat/loki-operator-controller-manager-558c4df967-tdf8t" Dec 10 15:47:46 crc kubenswrapper[4755]: I1210 15:47:46.703246 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/57ef8333-3c3c-4e02-ad27-24ccac555a55-webhook-cert\") pod \"loki-operator-controller-manager-558c4df967-tdf8t\" (UID: \"57ef8333-3c3c-4e02-ad27-24ccac555a55\") " pod="openshift-operators-redhat/loki-operator-controller-manager-558c4df967-tdf8t" Dec 10 15:47:46 crc kubenswrapper[4755]: I1210 15:47:46.703272 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5l42\" (UniqueName: \"kubernetes.io/projected/57ef8333-3c3c-4e02-ad27-24ccac555a55-kube-api-access-w5l42\") pod \"loki-operator-controller-manager-558c4df967-tdf8t\" (UID: \"57ef8333-3c3c-4e02-ad27-24ccac555a55\") " pod="openshift-operators-redhat/loki-operator-controller-manager-558c4df967-tdf8t" Dec 10 15:47:46 crc kubenswrapper[4755]: I1210 15:47:46.703302 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/57ef8333-3c3c-4e02-ad27-24ccac555a55-apiservice-cert\") pod \"loki-operator-controller-manager-558c4df967-tdf8t\" (UID: \"57ef8333-3c3c-4e02-ad27-24ccac555a55\") " pod="openshift-operators-redhat/loki-operator-controller-manager-558c4df967-tdf8t" Dec 10 15:47:46 crc kubenswrapper[4755]: I1210 15:47:46.704415 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/57ef8333-3c3c-4e02-ad27-24ccac555a55-manager-config\") pod \"loki-operator-controller-manager-558c4df967-tdf8t\" (UID: \"57ef8333-3c3c-4e02-ad27-24ccac555a55\") " pod="openshift-operators-redhat/loki-operator-controller-manager-558c4df967-tdf8t" Dec 10 15:47:46 crc kubenswrapper[4755]: I1210 15:47:46.709987 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/57ef8333-3c3c-4e02-ad27-24ccac555a55-apiservice-cert\") pod \"loki-operator-controller-manager-558c4df967-tdf8t\" (UID: \"57ef8333-3c3c-4e02-ad27-24ccac555a55\") " pod="openshift-operators-redhat/loki-operator-controller-manager-558c4df967-tdf8t" Dec 10 15:47:46 crc kubenswrapper[4755]: I1210 15:47:46.713714 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/57ef8333-3c3c-4e02-ad27-24ccac555a55-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-558c4df967-tdf8t\" (UID: \"57ef8333-3c3c-4e02-ad27-24ccac555a55\") " pod="openshift-operators-redhat/loki-operator-controller-manager-558c4df967-tdf8t" Dec 10 15:47:46 crc kubenswrapper[4755]: I1210 15:47:46.714314 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/57ef8333-3c3c-4e02-ad27-24ccac555a55-webhook-cert\") pod \"loki-operator-controller-manager-558c4df967-tdf8t\" (UID: \"57ef8333-3c3c-4e02-ad27-24ccac555a55\") " pod="openshift-operators-redhat/loki-operator-controller-manager-558c4df967-tdf8t" Dec 10 15:47:46 crc kubenswrapper[4755]: I1210 15:47:46.729041 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5l42\" (UniqueName: \"kubernetes.io/projected/57ef8333-3c3c-4e02-ad27-24ccac555a55-kube-api-access-w5l42\") pod \"loki-operator-controller-manager-558c4df967-tdf8t\" (UID: 
\"57ef8333-3c3c-4e02-ad27-24ccac555a55\") " pod="openshift-operators-redhat/loki-operator-controller-manager-558c4df967-tdf8t" Dec 10 15:47:46 crc kubenswrapper[4755]: I1210 15:47:46.754386 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-558c4df967-tdf8t" Dec 10 15:47:47 crc kubenswrapper[4755]: I1210 15:47:47.340346 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-558c4df967-tdf8t"] Dec 10 15:47:48 crc kubenswrapper[4755]: I1210 15:47:47.999931 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-558c4df967-tdf8t" event={"ID":"57ef8333-3c3c-4e02-ad27-24ccac555a55","Type":"ContainerStarted","Data":"88994072f6978ef45bb3cd7d01eeaf7fb88d8eabb9c7b59d7cbaf263e29d230a"} Dec 10 15:47:48 crc kubenswrapper[4755]: I1210 15:47:48.001192 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-558c4df967-tdf8t" Dec 10 15:47:48 crc kubenswrapper[4755]: I1210 15:47:48.001277 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-558c4df967-tdf8t" event={"ID":"57ef8333-3c3c-4e02-ad27-24ccac555a55","Type":"ContainerStarted","Data":"b76942e94422753c794b52787c6f8f18b92e79e486a29069cb54f15aea561af7"} Dec 10 15:47:48 crc kubenswrapper[4755]: I1210 15:47:48.001341 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-558c4df967-tdf8t" event={"ID":"57ef8333-3c3c-4e02-ad27-24ccac555a55","Type":"ContainerStarted","Data":"a5679efa2d096edb374742e3a009767dba5787eebe6b086b0d34b3b38890a7df"} Dec 10 15:47:48 crc kubenswrapper[4755]: I1210 15:47:48.023933 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-558c4df967-tdf8t" podStartSLOduration=2.023913433 podStartE2EDuration="2.023913433s" podCreationTimestamp="2025-12-10 15:47:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:47:48.020011506 +0000 UTC m=+1464.620895148" watchObservedRunningTime="2025-12-10 15:47:48.023913433 +0000 UTC m=+1464.624797065" Dec 10 15:47:51 crc kubenswrapper[4755]: I1210 15:47:51.652929 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 10 15:47:51 crc kubenswrapper[4755]: I1210 15:47:51.660510 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 10 15:47:51 crc kubenswrapper[4755]: I1210 15:47:51.661171 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 10 15:47:52 crc kubenswrapper[4755]: I1210 15:47:52.055593 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 10 15:47:53 crc kubenswrapper[4755]: I1210 15:47:53.221340 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 10 15:47:53 crc kubenswrapper[4755]: I1210 15:47:53.222275 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 10 15:47:53 crc kubenswrapper[4755]: I1210 15:47:53.222372 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openstack/nova-api-0" Dec 10 15:47:53 crc kubenswrapper[4755]: I1210 15:47:53.232392 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 10 15:47:54 crc kubenswrapper[4755]: I1210 15:47:54.068299 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 10 15:47:54 crc kubenswrapper[4755]: I1210 15:47:54.074276 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.440953 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-distributor-66dfd9bb-5zwd4"] Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.442715 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-5zwd4" Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.453402 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-distributor-66dfd9bb-5zwd4"] Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.536401 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-lokistack-ingester-0"] Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.536869 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="e5a3871d-6b81-4b3d-9044-fcbcf437effb" containerName="loki-ingester" containerID="cri-o://3c409743e4ab358dd29fc43502060ffc2ead257951f6999bad0e97dcba14f061" gracePeriod=30 Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.551401 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crcqs\" (UniqueName: \"kubernetes.io/projected/56a3d20e-f422-40f4-bbe3-fc61da743389-kube-api-access-crcqs\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-5zwd4\" (UID: \"56a3d20e-f422-40f4-bbe3-fc61da743389\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-5zwd4" Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.551569 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-distributor-http\" (UniqueName: \"kubernetes.io/secret/56a3d20e-f422-40f4-bbe3-fc61da743389-cloudkitty-lokistack-distributor-http\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-5zwd4\" (UID: \"56a3d20e-f422-40f4-bbe3-fc61da743389\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-5zwd4" Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.552321 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56a3d20e-f422-40f4-bbe3-fc61da743389-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-5zwd4\" (UID: \"56a3d20e-f422-40f4-bbe3-fc61da743389\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-5zwd4" Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.552479 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/56a3d20e-f422-40f4-bbe3-fc61da743389-cloudkitty-lokistack-distributor-grpc\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-5zwd4\" (UID: \"56a3d20e-f422-40f4-bbe3-fc61da743389\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-5zwd4" Dec 10 
15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.552837 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56a3d20e-f422-40f4-bbe3-fc61da743389-config\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-5zwd4\" (UID: \"56a3d20e-f422-40f4-bbe3-fc61da743389\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-5zwd4" Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.570653 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-querier-795fd8f8cc-qjtx7"] Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.572410 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-qjtx7" Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.591643 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"] Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.592135 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-lokistack-compactor-0" podUID="31bbbf2c-5266-4ea7-8428-ed2607013a35" containerName="loki-compactor" containerID="cri-o://37de70e4c1ac2932d37f73fa7dba2bcd1de89ae938c16517de1bd1feac16cf52" gracePeriod=30 Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.626889 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-querier-795fd8f8cc-qjtx7"] Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.655244 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56a3d20e-f422-40f4-bbe3-fc61da743389-config\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-5zwd4\" (UID: \"56a3d20e-f422-40f4-bbe3-fc61da743389\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-5zwd4" Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.655333 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crcqs\" (UniqueName: \"kubernetes.io/projected/56a3d20e-f422-40f4-bbe3-fc61da743389-kube-api-access-crcqs\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-5zwd4\" (UID: \"56a3d20e-f422-40f4-bbe3-fc61da743389\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-5zwd4" Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.655512 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-distributor-http\" (UniqueName: \"kubernetes.io/secret/56a3d20e-f422-40f4-bbe3-fc61da743389-cloudkitty-lokistack-distributor-http\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-5zwd4\" (UID: \"56a3d20e-f422-40f4-bbe3-fc61da743389\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-5zwd4" Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.655542 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56a3d20e-f422-40f4-bbe3-fc61da743389-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-5zwd4\" (UID: \"56a3d20e-f422-40f4-bbe3-fc61da743389\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-5zwd4" Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.655596 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-distributor-grpc\" (UniqueName: 
\"kubernetes.io/secret/56a3d20e-f422-40f4-bbe3-fc61da743389-cloudkitty-lokistack-distributor-grpc\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-5zwd4\" (UID: \"56a3d20e-f422-40f4-bbe3-fc61da743389\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-5zwd4" Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.659835 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56a3d20e-f422-40f4-bbe3-fc61da743389-config\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-5zwd4\" (UID: \"56a3d20e-f422-40f4-bbe3-fc61da743389\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-5zwd4" Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.660690 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56a3d20e-f422-40f4-bbe3-fc61da743389-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-5zwd4\" (UID: \"56a3d20e-f422-40f4-bbe3-fc61da743389\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-5zwd4" Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.664038 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-query-frontend-5cd44666df-5kvgz"] Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.665167 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-distributor-http\" (UniqueName: \"kubernetes.io/secret/56a3d20e-f422-40f4-bbe3-fc61da743389-cloudkitty-lokistack-distributor-http\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-5zwd4\" (UID: \"56a3d20e-f422-40f4-bbe3-fc61da743389\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-5zwd4" Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.666132 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-5kvgz" Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.675180 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/56a3d20e-f422-40f4-bbe3-fc61da743389-cloudkitty-lokistack-distributor-grpc\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-5zwd4\" (UID: \"56a3d20e-f422-40f4-bbe3-fc61da743389\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-5zwd4" Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.686551 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-query-frontend-5cd44666df-5kvgz"] Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.700922 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-lokistack-index-gateway-0"] Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.701166 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-lokistack-index-gateway-0" podUID="4e702de9-8dda-4370-b806-41083a70ac41" containerName="loki-index-gateway" containerID="cri-o://7be77a14baab6fbe2e6434c6e42e63b8dd0c8bc56ea6074c47b153b2a21ed53e" gracePeriod=30 Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.709488 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crcqs\" (UniqueName: \"kubernetes.io/projected/56a3d20e-f422-40f4-bbe3-fc61da743389-kube-api-access-crcqs\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-5zwd4\" (UID: \"56a3d20e-f422-40f4-bbe3-fc61da743389\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-5zwd4" Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.743400 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7db4f4db8c-c65qj"] Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.745521 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-c65qj" Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.757657 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-querier-http\" (UniqueName: \"kubernetes.io/secret/051535b2-1182-4452-a267-16d22047e3d3-cloudkitty-lokistack-querier-http\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-qjtx7\" (UID: \"051535b2-1182-4452-a267-16d22047e3d3\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-qjtx7" Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.757735 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/051535b2-1182-4452-a267-16d22047e3d3-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-qjtx7\" (UID: \"051535b2-1182-4452-a267-16d22047e3d3\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-qjtx7" Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.757839 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-querier-grpc\" (UniqueName: \"kubernetes.io/secret/051535b2-1182-4452-a267-16d22047e3d3-cloudkitty-lokistack-querier-grpc\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-qjtx7\" (UID: \"051535b2-1182-4452-a267-16d22047e3d3\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-qjtx7" Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.758019 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/051535b2-1182-4452-a267-16d22047e3d3-config\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-qjtx7\" (UID: \"051535b2-1182-4452-a267-16d22047e3d3\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-qjtx7" Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.758049 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87nlc\" (UniqueName: \"kubernetes.io/projected/051535b2-1182-4452-a267-16d22047e3d3-kube-api-access-87nlc\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-qjtx7\" (UID: \"051535b2-1182-4452-a267-16d22047e3d3\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-qjtx7" Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.758081 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/051535b2-1182-4452-a267-16d22047e3d3-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-qjtx7\" (UID: \"051535b2-1182-4452-a267-16d22047e3d3\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-qjtx7" Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.772505 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-5zwd4" Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.773891 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-558c4df967-tdf8t" Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.795704 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7db4f4db8c-c65qj"] Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.859717 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/779f3508-3735-4419-8503-834dc6f5b298-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-5kvgz\" (UID: \"779f3508-3735-4419-8503-834dc6f5b298\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-5kvgz" Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.859789 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/779f3508-3735-4419-8503-834dc6f5b298-config\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-5kvgz\" (UID: \"779f3508-3735-4419-8503-834dc6f5b298\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-5kvgz" Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.859819 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/996e9361-db27-4210-8a4d-92a76a7874aa-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-c65qj\" (UID: \"996e9361-db27-4210-8a4d-92a76a7874aa\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-c65qj" Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.859915 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/051535b2-1182-4452-a267-16d22047e3d3-config\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-qjtx7\" (UID: \"051535b2-1182-4452-a267-16d22047e3d3\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-qjtx7" Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.859936 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcg4s\" (UniqueName: \"kubernetes.io/projected/779f3508-3735-4419-8503-834dc6f5b298-kube-api-access-fcg4s\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-5kvgz\" (UID: \"779f3508-3735-4419-8503-834dc6f5b298\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-5kvgz" Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.859958 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87nlc\" (UniqueName: \"kubernetes.io/projected/051535b2-1182-4452-a267-16d22047e3d3-kube-api-access-87nlc\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-qjtx7\" (UID: \"051535b2-1182-4452-a267-16d22047e3d3\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-qjtx7" Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.859977 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/996e9361-db27-4210-8a4d-92a76a7874aa-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-c65qj\" (UID: 
\"996e9361-db27-4210-8a4d-92a76a7874aa\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-c65qj" Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.859999 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/051535b2-1182-4452-a267-16d22047e3d3-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-qjtx7\" (UID: \"051535b2-1182-4452-a267-16d22047e3d3\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-qjtx7" Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.860047 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/996e9361-db27-4210-8a4d-92a76a7874aa-tls-secret\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-c65qj\" (UID: \"996e9361-db27-4210-8a4d-92a76a7874aa\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-c65qj" Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.860092 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vblp5\" (UniqueName: \"kubernetes.io/projected/996e9361-db27-4210-8a4d-92a76a7874aa-kube-api-access-vblp5\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-c65qj\" (UID: \"996e9361-db27-4210-8a4d-92a76a7874aa\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-c65qj" Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.860123 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-querier-http\" (UniqueName: \"kubernetes.io/secret/051535b2-1182-4452-a267-16d22047e3d3-cloudkitty-lokistack-querier-http\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-qjtx7\" (UID: \"051535b2-1182-4452-a267-16d22047e3d3\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-qjtx7" Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.860144 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/996e9361-db27-4210-8a4d-92a76a7874aa-rbac\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-c65qj\" (UID: \"996e9361-db27-4210-8a4d-92a76a7874aa\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-c65qj" Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.860165 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/996e9361-db27-4210-8a4d-92a76a7874aa-tenants\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-c65qj\" (UID: \"996e9361-db27-4210-8a4d-92a76a7874aa\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-c65qj" Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.860181 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/996e9361-db27-4210-8a4d-92a76a7874aa-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-c65qj\" (UID: \"996e9361-db27-4210-8a4d-92a76a7874aa\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-c65qj" Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.860203 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: 
\"kubernetes.io/secret/996e9361-db27-4210-8a4d-92a76a7874aa-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-c65qj\" (UID: \"996e9361-db27-4210-8a4d-92a76a7874aa\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-c65qj" Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.860220 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/779f3508-3735-4419-8503-834dc6f5b298-cloudkitty-lokistack-query-frontend-http\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-5kvgz\" (UID: \"779f3508-3735-4419-8503-834dc6f5b298\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-5kvgz" Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.860242 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/051535b2-1182-4452-a267-16d22047e3d3-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-qjtx7\" (UID: \"051535b2-1182-4452-a267-16d22047e3d3\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-qjtx7" Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.860266 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/779f3508-3735-4419-8503-834dc6f5b298-cloudkitty-lokistack-query-frontend-grpc\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-5kvgz\" (UID: \"779f3508-3735-4419-8503-834dc6f5b298\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-5kvgz" Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.860311 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-querier-grpc\" (UniqueName: \"kubernetes.io/secret/051535b2-1182-4452-a267-16d22047e3d3-cloudkitty-lokistack-querier-grpc\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-qjtx7\" (UID: \"051535b2-1182-4452-a267-16d22047e3d3\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-qjtx7" Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.860331 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/996e9361-db27-4210-8a4d-92a76a7874aa-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-c65qj\" (UID: \"996e9361-db27-4210-8a4d-92a76a7874aa\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-c65qj" Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.862020 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/051535b2-1182-4452-a267-16d22047e3d3-config\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-qjtx7\" (UID: \"051535b2-1182-4452-a267-16d22047e3d3\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-qjtx7" Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.862911 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/051535b2-1182-4452-a267-16d22047e3d3-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-qjtx7\" (UID: \"051535b2-1182-4452-a267-16d22047e3d3\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-qjtx7" Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.869035 4755 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/051535b2-1182-4452-a267-16d22047e3d3-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-qjtx7\" (UID: \"051535b2-1182-4452-a267-16d22047e3d3\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-qjtx7" Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.873458 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-querier-http\" (UniqueName: \"kubernetes.io/secret/051535b2-1182-4452-a267-16d22047e3d3-cloudkitty-lokistack-querier-http\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-qjtx7\" (UID: \"051535b2-1182-4452-a267-16d22047e3d3\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-qjtx7" Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.873994 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-querier-grpc\" (UniqueName: \"kubernetes.io/secret/051535b2-1182-4452-a267-16d22047e3d3-cloudkitty-lokistack-querier-grpc\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-qjtx7\" (UID: \"051535b2-1182-4452-a267-16d22047e3d3\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-qjtx7" Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.886369 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-85977f6c59-57kcn"] Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.887558 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operators-redhat/loki-operator-controller-manager-85977f6c59-57kcn" podUID="9778155f-9c4a-4cb4-9085-12d776d78435" containerName="kube-rbac-proxy" containerID="cri-o://c584690dd349726272bedf88c63d1eac00ce1b0cf364a9a13654832cf4c77f90" gracePeriod=10 Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.887074 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operators-redhat/loki-operator-controller-manager-85977f6c59-57kcn" podUID="9778155f-9c4a-4cb4-9085-12d776d78435" containerName="manager" containerID="cri-o://50ebf44b14ce8a47ca2117678b8c7de8624398e9781b1d16df545f1d2d01af86" gracePeriod=10 Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.940063 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87nlc\" (UniqueName: \"kubernetes.io/projected/051535b2-1182-4452-a267-16d22047e3d3-kube-api-access-87nlc\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-qjtx7\" (UID: \"051535b2-1182-4452-a267-16d22047e3d3\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-qjtx7" Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.966636 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcg4s\" (UniqueName: \"kubernetes.io/projected/779f3508-3735-4419-8503-834dc6f5b298-kube-api-access-fcg4s\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-5kvgz\" (UID: \"779f3508-3735-4419-8503-834dc6f5b298\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-5kvgz" Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.966688 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/996e9361-db27-4210-8a4d-92a76a7874aa-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-c65qj\" (UID: \"996e9361-db27-4210-8a4d-92a76a7874aa\") " 
pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-c65qj" Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.966752 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/996e9361-db27-4210-8a4d-92a76a7874aa-tls-secret\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-c65qj\" (UID: \"996e9361-db27-4210-8a4d-92a76a7874aa\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-c65qj" Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.966778 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vblp5\" (UniqueName: \"kubernetes.io/projected/996e9361-db27-4210-8a4d-92a76a7874aa-kube-api-access-vblp5\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-c65qj\" (UID: \"996e9361-db27-4210-8a4d-92a76a7874aa\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-c65qj" Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.966804 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/996e9361-db27-4210-8a4d-92a76a7874aa-rbac\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-c65qj\" (UID: \"996e9361-db27-4210-8a4d-92a76a7874aa\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-c65qj" Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.966826 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/996e9361-db27-4210-8a4d-92a76a7874aa-tenants\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-c65qj\" (UID: \"996e9361-db27-4210-8a4d-92a76a7874aa\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-c65qj" Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.966842 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/996e9361-db27-4210-8a4d-92a76a7874aa-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-c65qj\" (UID: \"996e9361-db27-4210-8a4d-92a76a7874aa\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-c65qj" Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.966864 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/996e9361-db27-4210-8a4d-92a76a7874aa-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-c65qj\" (UID: \"996e9361-db27-4210-8a4d-92a76a7874aa\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-c65qj" Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.966882 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/779f3508-3735-4419-8503-834dc6f5b298-cloudkitty-lokistack-query-frontend-http\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-5kvgz\" (UID: \"779f3508-3735-4419-8503-834dc6f5b298\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-5kvgz" Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.966926 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/779f3508-3735-4419-8503-834dc6f5b298-cloudkitty-lokistack-query-frontend-grpc\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-5kvgz\" (UID: 
\"779f3508-3735-4419-8503-834dc6f5b298\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-5kvgz" Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.966991 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/996e9361-db27-4210-8a4d-92a76a7874aa-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-c65qj\" (UID: \"996e9361-db27-4210-8a4d-92a76a7874aa\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-c65qj" Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.967035 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/779f3508-3735-4419-8503-834dc6f5b298-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-5kvgz\" (UID: \"779f3508-3735-4419-8503-834dc6f5b298\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-5kvgz" Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.967066 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/779f3508-3735-4419-8503-834dc6f5b298-config\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-5kvgz\" (UID: \"779f3508-3735-4419-8503-834dc6f5b298\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-5kvgz" Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.967085 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/996e9361-db27-4210-8a4d-92a76a7874aa-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-c65qj\" (UID: \"996e9361-db27-4210-8a4d-92a76a7874aa\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-c65qj" Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.968795 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/996e9361-db27-4210-8a4d-92a76a7874aa-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-c65qj\" (UID: \"996e9361-db27-4210-8a4d-92a76a7874aa\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-c65qj" Dec 10 15:47:56 crc kubenswrapper[4755]: I1210 15:47:56.974068 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/996e9361-db27-4210-8a4d-92a76a7874aa-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-c65qj\" (UID: \"996e9361-db27-4210-8a4d-92a76a7874aa\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-c65qj" Dec 10 15:47:57 crc kubenswrapper[4755]: I1210 15:47:56.976242 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/996e9361-db27-4210-8a4d-92a76a7874aa-rbac\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-c65qj\" (UID: \"996e9361-db27-4210-8a4d-92a76a7874aa\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-c65qj" Dec 10 15:47:57 crc kubenswrapper[4755]: I1210 15:47:56.978588 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/996e9361-db27-4210-8a4d-92a76a7874aa-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-c65qj\" (UID: \"996e9361-db27-4210-8a4d-92a76a7874aa\") " 
pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-c65qj" Dec 10 15:47:57 crc kubenswrapper[4755]: I1210 15:47:56.979316 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/996e9361-db27-4210-8a4d-92a76a7874aa-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-c65qj\" (UID: \"996e9361-db27-4210-8a4d-92a76a7874aa\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-c65qj" Dec 10 15:47:57 crc kubenswrapper[4755]: I1210 15:47:56.981035 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/779f3508-3735-4419-8503-834dc6f5b298-config\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-5kvgz\" (UID: \"779f3508-3735-4419-8503-834dc6f5b298\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-5kvgz" Dec 10 15:47:57 crc kubenswrapper[4755]: I1210 15:47:56.981363 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/779f3508-3735-4419-8503-834dc6f5b298-cloudkitty-lokistack-query-frontend-grpc\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-5kvgz\" (UID: \"779f3508-3735-4419-8503-834dc6f5b298\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-5kvgz" Dec 10 15:47:57 crc kubenswrapper[4755]: I1210 15:47:56.981897 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/996e9361-db27-4210-8a4d-92a76a7874aa-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-c65qj\" (UID: \"996e9361-db27-4210-8a4d-92a76a7874aa\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-c65qj" Dec 10 15:47:57 crc kubenswrapper[4755]: I1210 15:47:56.985784 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/779f3508-3735-4419-8503-834dc6f5b298-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-5kvgz\" (UID: \"779f3508-3735-4419-8503-834dc6f5b298\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-5kvgz" Dec 10 15:47:57 crc kubenswrapper[4755]: I1210 15:47:56.998689 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/996e9361-db27-4210-8a4d-92a76a7874aa-tenants\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-c65qj\" (UID: \"996e9361-db27-4210-8a4d-92a76a7874aa\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-c65qj" Dec 10 15:47:57 crc kubenswrapper[4755]: I1210 15:47:57.000243 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/779f3508-3735-4419-8503-834dc6f5b298-cloudkitty-lokistack-query-frontend-http\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-5kvgz\" (UID: \"779f3508-3735-4419-8503-834dc6f5b298\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-5kvgz" Dec 10 15:47:57 crc kubenswrapper[4755]: I1210 15:47:57.000484 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/996e9361-db27-4210-8a4d-92a76a7874aa-tls-secret\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-c65qj\" (UID: 
\"996e9361-db27-4210-8a4d-92a76a7874aa\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-c65qj" Dec 10 15:47:57 crc kubenswrapper[4755]: I1210 15:47:57.023695 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vblp5\" (UniqueName: \"kubernetes.io/projected/996e9361-db27-4210-8a4d-92a76a7874aa-kube-api-access-vblp5\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-c65qj\" (UID: \"996e9361-db27-4210-8a4d-92a76a7874aa\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-c65qj" Dec 10 15:47:57 crc kubenswrapper[4755]: I1210 15:47:57.024824 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcg4s\" (UniqueName: \"kubernetes.io/projected/779f3508-3735-4419-8503-834dc6f5b298-kube-api-access-fcg4s\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-5kvgz\" (UID: \"779f3508-3735-4419-8503-834dc6f5b298\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-5kvgz" Dec 10 15:47:57 crc kubenswrapper[4755]: I1210 15:47:57.037143 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-c65qj" Dec 10 15:47:57 crc kubenswrapper[4755]: I1210 15:47:57.216442 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-qjtx7" Dec 10 15:47:57 crc kubenswrapper[4755]: I1210 15:47:57.302536 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-5kvgz" Dec 10 15:47:57 crc kubenswrapper[4755]: I1210 15:47:57.427161 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-distributor-66dfd9bb-5zwd4"] Dec 10 15:47:57 crc kubenswrapper[4755]: I1210 15:47:57.704261 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7db4f4db8c-c65qj"] Dec 10 15:47:57 crc kubenswrapper[4755]: I1210 15:47:57.883414 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-querier-795fd8f8cc-qjtx7"] Dec 10 15:47:58 crc kubenswrapper[4755]: I1210 15:47:58.152804 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-c65qj" event={"ID":"996e9361-db27-4210-8a4d-92a76a7874aa","Type":"ContainerStarted","Data":"59753d0c0cef6ca4938205c3fd235a5a8a642a333a3cd37b85dc3345f7b6663f"} Dec 10 15:47:58 crc kubenswrapper[4755]: I1210 15:47:58.154596 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-query-frontend-5cd44666df-5kvgz"] Dec 10 15:47:58 crc kubenswrapper[4755]: I1210 15:47:58.162713 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-5zwd4" event={"ID":"56a3d20e-f422-40f4-bbe3-fc61da743389","Type":"ContainerStarted","Data":"43a83db819c983333020701fba8b638dfe5f705e4d779dcea10a5f6f1b4d8bd8"} Dec 10 15:47:58 crc kubenswrapper[4755]: I1210 15:47:58.182800 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-qjtx7" event={"ID":"051535b2-1182-4452-a267-16d22047e3d3","Type":"ContainerStarted","Data":"1de1d0b3da5a762b59b4cf2f7cb45718f3b66a5f0373366ae3b2cd72ecf3c32d"} Dec 10 15:47:58 crc kubenswrapper[4755]: I1210 15:47:58.205609 4755 generic.go:334] "Generic (PLEG): container finished" podID="9778155f-9c4a-4cb4-9085-12d776d78435" 
containerID="c584690dd349726272bedf88c63d1eac00ce1b0cf364a9a13654832cf4c77f90" exitCode=0 Dec 10 15:47:58 crc kubenswrapper[4755]: I1210 15:47:58.205641 4755 generic.go:334] "Generic (PLEG): container finished" podID="9778155f-9c4a-4cb4-9085-12d776d78435" containerID="50ebf44b14ce8a47ca2117678b8c7de8624398e9781b1d16df545f1d2d01af86" exitCode=0 Dec 10 15:47:58 crc kubenswrapper[4755]: I1210 15:47:58.205664 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-85977f6c59-57kcn" event={"ID":"9778155f-9c4a-4cb4-9085-12d776d78435","Type":"ContainerDied","Data":"c584690dd349726272bedf88c63d1eac00ce1b0cf364a9a13654832cf4c77f90"} Dec 10 15:47:58 crc kubenswrapper[4755]: I1210 15:47:58.205689 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-85977f6c59-57kcn" event={"ID":"9778155f-9c4a-4cb4-9085-12d776d78435","Type":"ContainerDied","Data":"50ebf44b14ce8a47ca2117678b8c7de8624398e9781b1d16df545f1d2d01af86"} Dec 10 15:47:58 crc kubenswrapper[4755]: I1210 15:47:58.407871 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-85977f6c59-57kcn" Dec 10 15:47:58 crc kubenswrapper[4755]: I1210 15:47:58.540519 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9778155f-9c4a-4cb4-9085-12d776d78435-loki-operator-metrics-cert\") pod \"9778155f-9c4a-4cb4-9085-12d776d78435\" (UID: \"9778155f-9c4a-4cb4-9085-12d776d78435\") " Dec 10 15:47:58 crc kubenswrapper[4755]: I1210 15:47:58.540692 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgskn\" (UniqueName: \"kubernetes.io/projected/9778155f-9c4a-4cb4-9085-12d776d78435-kube-api-access-lgskn\") pod \"9778155f-9c4a-4cb4-9085-12d776d78435\" (UID: \"9778155f-9c4a-4cb4-9085-12d776d78435\") " Dec 10 15:47:58 crc kubenswrapper[4755]: I1210 15:47:58.540822 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9778155f-9c4a-4cb4-9085-12d776d78435-webhook-cert\") pod \"9778155f-9c4a-4cb4-9085-12d776d78435\" (UID: \"9778155f-9c4a-4cb4-9085-12d776d78435\") " Dec 10 15:47:58 crc kubenswrapper[4755]: I1210 15:47:58.540934 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/9778155f-9c4a-4cb4-9085-12d776d78435-manager-config\") pod \"9778155f-9c4a-4cb4-9085-12d776d78435\" (UID: \"9778155f-9c4a-4cb4-9085-12d776d78435\") " Dec 10 15:47:58 crc kubenswrapper[4755]: I1210 15:47:58.541020 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9778155f-9c4a-4cb4-9085-12d776d78435-apiservice-cert\") pod \"9778155f-9c4a-4cb4-9085-12d776d78435\" (UID: \"9778155f-9c4a-4cb4-9085-12d776d78435\") " Dec 10 15:47:58 crc kubenswrapper[4755]: I1210 15:47:58.548391 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9778155f-9c4a-4cb4-9085-12d776d78435-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "9778155f-9c4a-4cb4-9085-12d776d78435" (UID: "9778155f-9c4a-4cb4-9085-12d776d78435"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:47:58 crc kubenswrapper[4755]: I1210 15:47:58.548524 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9778155f-9c4a-4cb4-9085-12d776d78435-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "9778155f-9c4a-4cb4-9085-12d776d78435" (UID: "9778155f-9c4a-4cb4-9085-12d776d78435"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:47:58 crc kubenswrapper[4755]: I1210 15:47:58.548766 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9778155f-9c4a-4cb4-9085-12d776d78435-kube-api-access-lgskn" (OuterVolumeSpecName: "kube-api-access-lgskn") pod "9778155f-9c4a-4cb4-9085-12d776d78435" (UID: "9778155f-9c4a-4cb4-9085-12d776d78435"). InnerVolumeSpecName "kube-api-access-lgskn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:47:58 crc kubenswrapper[4755]: I1210 15:47:58.548776 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9778155f-9c4a-4cb4-9085-12d776d78435-loki-operator-metrics-cert" (OuterVolumeSpecName: "loki-operator-metrics-cert") pod "9778155f-9c4a-4cb4-9085-12d776d78435" (UID: "9778155f-9c4a-4cb4-9085-12d776d78435"). InnerVolumeSpecName "loki-operator-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:47:58 crc kubenswrapper[4755]: I1210 15:47:58.570237 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9778155f-9c4a-4cb4-9085-12d776d78435-manager-config" (OuterVolumeSpecName: "manager-config") pod "9778155f-9c4a-4cb4-9085-12d776d78435" (UID: "9778155f-9c4a-4cb4-9085-12d776d78435"). InnerVolumeSpecName "manager-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:47:58 crc kubenswrapper[4755]: I1210 15:47:58.648785 4755 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9778155f-9c4a-4cb4-9085-12d776d78435-webhook-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:58 crc kubenswrapper[4755]: I1210 15:47:58.648835 4755 reconciler_common.go:293] "Volume detached for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/9778155f-9c4a-4cb4-9085-12d776d78435-manager-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:58 crc kubenswrapper[4755]: I1210 15:47:58.648854 4755 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9778155f-9c4a-4cb4-9085-12d776d78435-apiservice-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:58 crc kubenswrapper[4755]: I1210 15:47:58.648866 4755 reconciler_common.go:293] "Volume detached for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9778155f-9c4a-4cb4-9085-12d776d78435-loki-operator-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:58 crc kubenswrapper[4755]: I1210 15:47:58.648879 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lgskn\" (UniqueName: \"kubernetes.io/projected/9778155f-9c4a-4cb4-9085-12d776d78435-kube-api-access-lgskn\") on node \"crc\" DevicePath \"\"" Dec 10 15:47:59 crc kubenswrapper[4755]: I1210 15:47:59.219598 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-85977f6c59-57kcn" Dec 10 15:47:59 crc kubenswrapper[4755]: I1210 15:47:59.219586 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-85977f6c59-57kcn" event={"ID":"9778155f-9c4a-4cb4-9085-12d776d78435","Type":"ContainerDied","Data":"845364af72e04aeb67028bdf5515ca76657ed7d72a840cc4e145060d683bfdca"} Dec 10 15:47:59 crc kubenswrapper[4755]: I1210 15:47:59.219722 4755 scope.go:117] "RemoveContainer" containerID="c584690dd349726272bedf88c63d1eac00ce1b0cf364a9a13654832cf4c77f90" Dec 10 15:47:59 crc kubenswrapper[4755]: I1210 15:47:59.221427 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-5kvgz" event={"ID":"779f3508-3735-4419-8503-834dc6f5b298","Type":"ContainerStarted","Data":"f3a646a8fbc93c4881657cce3dbaa10c8082b165d99696c67e55ee1a94b55a42"} Dec 10 15:47:59 crc kubenswrapper[4755]: I1210 15:47:59.250410 4755 scope.go:117] "RemoveContainer" containerID="50ebf44b14ce8a47ca2117678b8c7de8624398e9781b1d16df545f1d2d01af86" Dec 10 15:47:59 crc kubenswrapper[4755]: I1210 15:47:59.268582 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-85977f6c59-57kcn"] Dec 10 15:47:59 crc kubenswrapper[4755]: I1210 15:47:59.288334 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-85977f6c59-57kcn"] Dec 10 15:47:59 crc kubenswrapper[4755]: I1210 15:47:59.354113 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="e5a3871d-6b81-4b3d-9044-fcbcf437effb" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 10 15:47:59 crc kubenswrapper[4755]: I1210 15:47:59.567844 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-compactor-0" podUID="31bbbf2c-5266-4ea7-8428-ed2607013a35" containerName="loki-compactor" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 10 15:47:59 crc kubenswrapper[4755]: I1210 15:47:59.572753 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-index-gateway-0" podUID="4e702de9-8dda-4370-b806-41083a70ac41" containerName="loki-index-gateway" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 10 15:47:59 crc kubenswrapper[4755]: I1210 15:47:59.776279 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9778155f-9c4a-4cb4-9085-12d776d78435" path="/var/lib/kubelet/pods/9778155f-9c4a-4cb4-9085-12d776d78435/volumes" Dec 10 15:48:04 crc kubenswrapper[4755]: I1210 15:48:04.299576 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-5kvgz" event={"ID":"779f3508-3735-4419-8503-834dc6f5b298","Type":"ContainerStarted","Data":"cf278020b874835e27085a6893b73bd092a56dfac79be832c761d521091da9d9"} Dec 10 15:48:04 crc kubenswrapper[4755]: I1210 15:48:04.299998 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-5kvgz" Dec 10 15:48:04 crc kubenswrapper[4755]: I1210 15:48:04.301626 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-c65qj" 
event={"ID":"996e9361-db27-4210-8a4d-92a76a7874aa","Type":"ContainerStarted","Data":"c6f48f1545a5c0c32de9339828dc8405d56b9408234f9c803d74e592d5440e5a"} Dec 10 15:48:04 crc kubenswrapper[4755]: I1210 15:48:04.301848 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-c65qj" Dec 10 15:48:04 crc kubenswrapper[4755]: I1210 15:48:04.303499 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-5zwd4" event={"ID":"56a3d20e-f422-40f4-bbe3-fc61da743389","Type":"ContainerStarted","Data":"396a2a1deb0a1a114b8e10ba946107629381d3e85e8e2ddeb4d095b9644d7436"} Dec 10 15:48:04 crc kubenswrapper[4755]: I1210 15:48:04.303594 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-5zwd4" Dec 10 15:48:04 crc kubenswrapper[4755]: I1210 15:48:04.304701 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-qjtx7" event={"ID":"051535b2-1182-4452-a267-16d22047e3d3","Type":"ContainerStarted","Data":"511b928245196f18757f5e38603554e2b5947abd810e5f28508451257e8fdc64"} Dec 10 15:48:04 crc kubenswrapper[4755]: I1210 15:48:04.304818 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-qjtx7" Dec 10 15:48:04 crc kubenswrapper[4755]: I1210 15:48:04.315310 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-c65qj" Dec 10 15:48:04 crc kubenswrapper[4755]: I1210 15:48:04.323327 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-5kvgz" podStartSLOduration=2.775386412 podStartE2EDuration="8.323303885s" podCreationTimestamp="2025-12-10 15:47:56 +0000 UTC" firstStartedPulling="2025-12-10 15:47:58.182613527 +0000 UTC m=+1474.783497159" lastFinishedPulling="2025-12-10 15:48:03.730531 +0000 UTC m=+1480.331414632" observedRunningTime="2025-12-10 15:48:04.316420988 +0000 UTC m=+1480.917304630" watchObservedRunningTime="2025-12-10 15:48:04.323303885 +0000 UTC m=+1480.924187517" Dec 10 15:48:04 crc kubenswrapper[4755]: I1210 15:48:04.370073 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-qjtx7" podStartSLOduration=2.526647014 podStartE2EDuration="8.37005206s" podCreationTimestamp="2025-12-10 15:47:56 +0000 UTC" firstStartedPulling="2025-12-10 15:47:57.886395044 +0000 UTC m=+1474.487278676" lastFinishedPulling="2025-12-10 15:48:03.72980009 +0000 UTC m=+1480.330683722" observedRunningTime="2025-12-10 15:48:04.368409665 +0000 UTC m=+1480.969293287" watchObservedRunningTime="2025-12-10 15:48:04.37005206 +0000 UTC m=+1480.970935692" Dec 10 15:48:04 crc kubenswrapper[4755]: I1210 15:48:04.377564 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-5zwd4" podStartSLOduration=2.103440159 podStartE2EDuration="8.377542463s" podCreationTimestamp="2025-12-10 15:47:56 +0000 UTC" firstStartedPulling="2025-12-10 15:47:57.456666543 +0000 UTC m=+1474.057550175" lastFinishedPulling="2025-12-10 15:48:03.730768847 +0000 UTC m=+1480.331652479" observedRunningTime="2025-12-10 15:48:04.352848531 +0000 UTC m=+1480.953732183" watchObservedRunningTime="2025-12-10 15:48:04.377542463 +0000 UTC m=+1480.978426095" Dec 10 15:48:04 crc 
kubenswrapper[4755]: I1210 15:48:04.408520 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-c65qj" podStartSLOduration=2.396969939 podStartE2EDuration="8.408496607s" podCreationTimestamp="2025-12-10 15:47:56 +0000 UTC" firstStartedPulling="2025-12-10 15:47:57.718363354 +0000 UTC m=+1474.319246986" lastFinishedPulling="2025-12-10 15:48:03.729890022 +0000 UTC m=+1480.330773654" observedRunningTime="2025-12-10 15:48:04.392222674 +0000 UTC m=+1480.993106296" watchObservedRunningTime="2025-12-10 15:48:04.408496607 +0000 UTC m=+1481.009380239" Dec 10 15:48:04 crc kubenswrapper[4755]: I1210 15:48:04.454855 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-bc75944f-sxgmq"] Dec 10 15:48:04 crc kubenswrapper[4755]: I1210 15:48:04.455101 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-sxgmq" podUID="614e240f-4195-4915-9e2e-d142c9df25bc" containerName="gateway" containerID="cri-o://fac4c67f3712b705b2d5b858ede9fd435f66dc6c2b919dfb7e17194b94910a9a" gracePeriod=30 Dec 10 15:48:04 crc kubenswrapper[4755]: I1210 15:48:04.487602 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7db4f4db8c-cnjnh"] Dec 10 15:48:04 crc kubenswrapper[4755]: E1210 15:48:04.488144 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9778155f-9c4a-4cb4-9085-12d776d78435" containerName="manager" Dec 10 15:48:04 crc kubenswrapper[4755]: I1210 15:48:04.488157 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="9778155f-9c4a-4cb4-9085-12d776d78435" containerName="manager" Dec 10 15:48:04 crc kubenswrapper[4755]: E1210 15:48:04.488170 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9778155f-9c4a-4cb4-9085-12d776d78435" containerName="kube-rbac-proxy" Dec 10 15:48:04 crc kubenswrapper[4755]: I1210 15:48:04.488176 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="9778155f-9c4a-4cb4-9085-12d776d78435" containerName="kube-rbac-proxy" Dec 10 15:48:04 crc kubenswrapper[4755]: I1210 15:48:04.488392 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="9778155f-9c4a-4cb4-9085-12d776d78435" containerName="manager" Dec 10 15:48:04 crc kubenswrapper[4755]: I1210 15:48:04.488415 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="9778155f-9c4a-4cb4-9085-12d776d78435" containerName="kube-rbac-proxy" Dec 10 15:48:04 crc kubenswrapper[4755]: I1210 15:48:04.489383 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-cnjnh" Dec 10 15:48:04 crc kubenswrapper[4755]: I1210 15:48:04.505384 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7db4f4db8c-cnjnh"] Dec 10 15:48:04 crc kubenswrapper[4755]: I1210 15:48:04.597522 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6414fa30-9e0e-4dc8-99aa-d35799f2cb46-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-cnjnh\" (UID: \"6414fa30-9e0e-4dc8-99aa-d35799f2cb46\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-cnjnh" Dec 10 15:48:04 crc kubenswrapper[4755]: I1210 15:48:04.597607 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/6414fa30-9e0e-4dc8-99aa-d35799f2cb46-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-cnjnh\" (UID: \"6414fa30-9e0e-4dc8-99aa-d35799f2cb46\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-cnjnh" Dec 10 15:48:04 crc kubenswrapper[4755]: I1210 15:48:04.597631 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6414fa30-9e0e-4dc8-99aa-d35799f2cb46-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-cnjnh\" (UID: \"6414fa30-9e0e-4dc8-99aa-d35799f2cb46\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-cnjnh" Dec 10 15:48:04 crc kubenswrapper[4755]: I1210 15:48:04.597696 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/6414fa30-9e0e-4dc8-99aa-d35799f2cb46-rbac\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-cnjnh\" (UID: \"6414fa30-9e0e-4dc8-99aa-d35799f2cb46\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-cnjnh" Dec 10 15:48:04 crc kubenswrapper[4755]: I1210 15:48:04.597720 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/6414fa30-9e0e-4dc8-99aa-d35799f2cb46-tls-secret\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-cnjnh\" (UID: \"6414fa30-9e0e-4dc8-99aa-d35799f2cb46\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-cnjnh" Dec 10 15:48:04 crc kubenswrapper[4755]: I1210 15:48:04.598144 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkqt9\" (UniqueName: \"kubernetes.io/projected/6414fa30-9e0e-4dc8-99aa-d35799f2cb46-kube-api-access-rkqt9\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-cnjnh\" (UID: \"6414fa30-9e0e-4dc8-99aa-d35799f2cb46\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-cnjnh" Dec 10 15:48:04 crc kubenswrapper[4755]: I1210 15:48:04.598203 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/6414fa30-9e0e-4dc8-99aa-d35799f2cb46-tenants\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-cnjnh\" (UID: \"6414fa30-9e0e-4dc8-99aa-d35799f2cb46\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-cnjnh" Dec 10 15:48:04 crc kubenswrapper[4755]: I1210 15:48:04.598312 4755 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/6414fa30-9e0e-4dc8-99aa-d35799f2cb46-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-cnjnh\" (UID: \"6414fa30-9e0e-4dc8-99aa-d35799f2cb46\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-cnjnh" Dec 10 15:48:04 crc kubenswrapper[4755]: I1210 15:48:04.598401 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6414fa30-9e0e-4dc8-99aa-d35799f2cb46-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-cnjnh\" (UID: \"6414fa30-9e0e-4dc8-99aa-d35799f2cb46\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-cnjnh" Dec 10 15:48:04 crc kubenswrapper[4755]: I1210 15:48:04.700051 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/6414fa30-9e0e-4dc8-99aa-d35799f2cb46-rbac\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-cnjnh\" (UID: \"6414fa30-9e0e-4dc8-99aa-d35799f2cb46\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-cnjnh" Dec 10 15:48:04 crc kubenswrapper[4755]: I1210 15:48:04.700143 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/6414fa30-9e0e-4dc8-99aa-d35799f2cb46-tls-secret\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-cnjnh\" (UID: \"6414fa30-9e0e-4dc8-99aa-d35799f2cb46\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-cnjnh" Dec 10 15:48:04 crc kubenswrapper[4755]: I1210 15:48:04.700234 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkqt9\" (UniqueName: \"kubernetes.io/projected/6414fa30-9e0e-4dc8-99aa-d35799f2cb46-kube-api-access-rkqt9\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-cnjnh\" (UID: \"6414fa30-9e0e-4dc8-99aa-d35799f2cb46\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-cnjnh" Dec 10 15:48:04 crc kubenswrapper[4755]: I1210 15:48:04.700264 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/6414fa30-9e0e-4dc8-99aa-d35799f2cb46-tenants\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-cnjnh\" (UID: \"6414fa30-9e0e-4dc8-99aa-d35799f2cb46\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-cnjnh" Dec 10 15:48:04 crc kubenswrapper[4755]: I1210 15:48:04.700318 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/6414fa30-9e0e-4dc8-99aa-d35799f2cb46-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-cnjnh\" (UID: \"6414fa30-9e0e-4dc8-99aa-d35799f2cb46\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-cnjnh" Dec 10 15:48:04 crc kubenswrapper[4755]: I1210 15:48:04.700369 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6414fa30-9e0e-4dc8-99aa-d35799f2cb46-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-cnjnh\" (UID: \"6414fa30-9e0e-4dc8-99aa-d35799f2cb46\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-cnjnh" Dec 10 15:48:04 crc kubenswrapper[4755]: I1210 15:48:04.700459 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6414fa30-9e0e-4dc8-99aa-d35799f2cb46-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-cnjnh\" (UID: \"6414fa30-9e0e-4dc8-99aa-d35799f2cb46\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-cnjnh" Dec 10 15:48:04 crc kubenswrapper[4755]: I1210 15:48:04.700537 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/6414fa30-9e0e-4dc8-99aa-d35799f2cb46-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-cnjnh\" (UID: \"6414fa30-9e0e-4dc8-99aa-d35799f2cb46\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-cnjnh" Dec 10 15:48:04 crc kubenswrapper[4755]: I1210 15:48:04.700565 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6414fa30-9e0e-4dc8-99aa-d35799f2cb46-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-cnjnh\" (UID: \"6414fa30-9e0e-4dc8-99aa-d35799f2cb46\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-cnjnh" Dec 10 15:48:04 crc kubenswrapper[4755]: I1210 15:48:04.701540 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6414fa30-9e0e-4dc8-99aa-d35799f2cb46-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-cnjnh\" (UID: \"6414fa30-9e0e-4dc8-99aa-d35799f2cb46\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-cnjnh" Dec 10 15:48:04 crc kubenswrapper[4755]: I1210 15:48:04.701569 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/6414fa30-9e0e-4dc8-99aa-d35799f2cb46-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-cnjnh\" (UID: \"6414fa30-9e0e-4dc8-99aa-d35799f2cb46\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-cnjnh" Dec 10 15:48:04 crc kubenswrapper[4755]: I1210 15:48:04.701655 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6414fa30-9e0e-4dc8-99aa-d35799f2cb46-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-cnjnh\" (UID: \"6414fa30-9e0e-4dc8-99aa-d35799f2cb46\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-cnjnh" Dec 10 15:48:04 crc kubenswrapper[4755]: I1210 15:48:04.702091 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6414fa30-9e0e-4dc8-99aa-d35799f2cb46-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-cnjnh\" (UID: \"6414fa30-9e0e-4dc8-99aa-d35799f2cb46\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-cnjnh" Dec 10 15:48:04 crc kubenswrapper[4755]: I1210 15:48:04.706217 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/6414fa30-9e0e-4dc8-99aa-d35799f2cb46-rbac\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-cnjnh\" (UID: \"6414fa30-9e0e-4dc8-99aa-d35799f2cb46\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-cnjnh" Dec 10 15:48:04 crc kubenswrapper[4755]: I1210 15:48:04.706291 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-client-http\" 
(UniqueName: \"kubernetes.io/secret/6414fa30-9e0e-4dc8-99aa-d35799f2cb46-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-cnjnh\" (UID: \"6414fa30-9e0e-4dc8-99aa-d35799f2cb46\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-cnjnh" Dec 10 15:48:04 crc kubenswrapper[4755]: I1210 15:48:04.706665 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/6414fa30-9e0e-4dc8-99aa-d35799f2cb46-tls-secret\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-cnjnh\" (UID: \"6414fa30-9e0e-4dc8-99aa-d35799f2cb46\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-cnjnh" Dec 10 15:48:04 crc kubenswrapper[4755]: I1210 15:48:04.709026 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/6414fa30-9e0e-4dc8-99aa-d35799f2cb46-tenants\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-cnjnh\" (UID: \"6414fa30-9e0e-4dc8-99aa-d35799f2cb46\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-cnjnh" Dec 10 15:48:04 crc kubenswrapper[4755]: I1210 15:48:04.718135 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkqt9\" (UniqueName: \"kubernetes.io/projected/6414fa30-9e0e-4dc8-99aa-d35799f2cb46-kube-api-access-rkqt9\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-cnjnh\" (UID: \"6414fa30-9e0e-4dc8-99aa-d35799f2cb46\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-cnjnh" Dec 10 15:48:04 crc kubenswrapper[4755]: I1210 15:48:04.815836 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-cnjnh" Dec 10 15:48:05 crc kubenswrapper[4755]: I1210 15:48:05.289777 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7db4f4db8c-cnjnh"] Dec 10 15:48:05 crc kubenswrapper[4755]: W1210 15:48:05.298656 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6414fa30_9e0e_4dc8_99aa_d35799f2cb46.slice/crio-0c96df3d8279d37c2b912bbe8957799a07ed091489c9ab65624ef91da25570be WatchSource:0}: Error finding container 0c96df3d8279d37c2b912bbe8957799a07ed091489c9ab65624ef91da25570be: Status 404 returned error can't find the container with id 0c96df3d8279d37c2b912bbe8957799a07ed091489c9ab65624ef91da25570be Dec 10 15:48:05 crc kubenswrapper[4755]: I1210 15:48:05.337228 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-cnjnh" event={"ID":"6414fa30-9e0e-4dc8-99aa-d35799f2cb46","Type":"ContainerStarted","Data":"0c96df3d8279d37c2b912bbe8957799a07ed091489c9ab65624ef91da25570be"} Dec 10 15:48:06 crc kubenswrapper[4755]: I1210 15:48:06.081921 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-sxgmq" Dec 10 15:48:06 crc kubenswrapper[4755]: I1210 15:48:06.139176 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/614e240f-4195-4915-9e2e-d142c9df25bc-tenants\") pod \"614e240f-4195-4915-9e2e-d142c9df25bc\" (UID: \"614e240f-4195-4915-9e2e-d142c9df25bc\") " Dec 10 15:48:06 crc kubenswrapper[4755]: I1210 15:48:06.139231 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-495wc\" (UniqueName: \"kubernetes.io/projected/614e240f-4195-4915-9e2e-d142c9df25bc-kube-api-access-495wc\") pod \"614e240f-4195-4915-9e2e-d142c9df25bc\" (UID: \"614e240f-4195-4915-9e2e-d142c9df25bc\") " Dec 10 15:48:06 crc kubenswrapper[4755]: I1210 15:48:06.139257 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/614e240f-4195-4915-9e2e-d142c9df25bc-cloudkitty-lokistack-ca-bundle\") pod \"614e240f-4195-4915-9e2e-d142c9df25bc\" (UID: \"614e240f-4195-4915-9e2e-d142c9df25bc\") " Dec 10 15:48:06 crc kubenswrapper[4755]: I1210 15:48:06.139327 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/614e240f-4195-4915-9e2e-d142c9df25bc-cloudkitty-lokistack-gateway-client-http\") pod \"614e240f-4195-4915-9e2e-d142c9df25bc\" (UID: \"614e240f-4195-4915-9e2e-d142c9df25bc\") " Dec 10 15:48:06 crc kubenswrapper[4755]: I1210 15:48:06.139373 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/614e240f-4195-4915-9e2e-d142c9df25bc-rbac\") pod \"614e240f-4195-4915-9e2e-d142c9df25bc\" (UID: \"614e240f-4195-4915-9e2e-d142c9df25bc\") " Dec 10 15:48:06 crc kubenswrapper[4755]: I1210 15:48:06.139397 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/614e240f-4195-4915-9e2e-d142c9df25bc-tls-secret\") pod \"614e240f-4195-4915-9e2e-d142c9df25bc\" (UID: \"614e240f-4195-4915-9e2e-d142c9df25bc\") " Dec 10 15:48:06 crc kubenswrapper[4755]: I1210 15:48:06.139447 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/614e240f-4195-4915-9e2e-d142c9df25bc-lokistack-gateway\") pod \"614e240f-4195-4915-9e2e-d142c9df25bc\" (UID: \"614e240f-4195-4915-9e2e-d142c9df25bc\") " Dec 10 15:48:06 crc kubenswrapper[4755]: I1210 15:48:06.139644 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/614e240f-4195-4915-9e2e-d142c9df25bc-cloudkitty-ca-bundle\") pod \"614e240f-4195-4915-9e2e-d142c9df25bc\" (UID: \"614e240f-4195-4915-9e2e-d142c9df25bc\") " Dec 10 15:48:06 crc kubenswrapper[4755]: I1210 15:48:06.139699 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/614e240f-4195-4915-9e2e-d142c9df25bc-cloudkitty-lokistack-gateway-ca-bundle\") pod \"614e240f-4195-4915-9e2e-d142c9df25bc\" (UID: \"614e240f-4195-4915-9e2e-d142c9df25bc\") " Dec 10 15:48:06 crc kubenswrapper[4755]: I1210 15:48:06.140942 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/614e240f-4195-4915-9e2e-d142c9df25bc-cloudkitty-ca-bundle" (OuterVolumeSpecName: "cloudkitty-ca-bundle") pod "614e240f-4195-4915-9e2e-d142c9df25bc" (UID: "614e240f-4195-4915-9e2e-d142c9df25bc"). InnerVolumeSpecName "cloudkitty-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:48:06 crc kubenswrapper[4755]: I1210 15:48:06.141241 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/614e240f-4195-4915-9e2e-d142c9df25bc-cloudkitty-lokistack-ca-bundle" (OuterVolumeSpecName: "cloudkitty-lokistack-ca-bundle") pod "614e240f-4195-4915-9e2e-d142c9df25bc" (UID: "614e240f-4195-4915-9e2e-d142c9df25bc"). InnerVolumeSpecName "cloudkitty-lokistack-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:48:06 crc kubenswrapper[4755]: I1210 15:48:06.141857 4755 reconciler_common.go:293] "Volume detached for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/614e240f-4195-4915-9e2e-d142c9df25bc-cloudkitty-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:06 crc kubenswrapper[4755]: I1210 15:48:06.141875 4755 reconciler_common.go:293] "Volume detached for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/614e240f-4195-4915-9e2e-d142c9df25bc-cloudkitty-lokistack-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:06 crc kubenswrapper[4755]: I1210 15:48:06.142011 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/614e240f-4195-4915-9e2e-d142c9df25bc-cloudkitty-lokistack-gateway-ca-bundle" (OuterVolumeSpecName: "cloudkitty-lokistack-gateway-ca-bundle") pod "614e240f-4195-4915-9e2e-d142c9df25bc" (UID: "614e240f-4195-4915-9e2e-d142c9df25bc"). InnerVolumeSpecName "cloudkitty-lokistack-gateway-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:48:06 crc kubenswrapper[4755]: I1210 15:48:06.153420 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/614e240f-4195-4915-9e2e-d142c9df25bc-kube-api-access-495wc" (OuterVolumeSpecName: "kube-api-access-495wc") pod "614e240f-4195-4915-9e2e-d142c9df25bc" (UID: "614e240f-4195-4915-9e2e-d142c9df25bc"). InnerVolumeSpecName "kube-api-access-495wc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:48:06 crc kubenswrapper[4755]: I1210 15:48:06.153641 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/614e240f-4195-4915-9e2e-d142c9df25bc-tls-secret" (OuterVolumeSpecName: "tls-secret") pod "614e240f-4195-4915-9e2e-d142c9df25bc" (UID: "614e240f-4195-4915-9e2e-d142c9df25bc"). InnerVolumeSpecName "tls-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:48:06 crc kubenswrapper[4755]: I1210 15:48:06.153824 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/614e240f-4195-4915-9e2e-d142c9df25bc-cloudkitty-lokistack-gateway-client-http" (OuterVolumeSpecName: "cloudkitty-lokistack-gateway-client-http") pod "614e240f-4195-4915-9e2e-d142c9df25bc" (UID: "614e240f-4195-4915-9e2e-d142c9df25bc"). InnerVolumeSpecName "cloudkitty-lokistack-gateway-client-http". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:48:06 crc kubenswrapper[4755]: I1210 15:48:06.171188 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/614e240f-4195-4915-9e2e-d142c9df25bc-lokistack-gateway" (OuterVolumeSpecName: "lokistack-gateway") pod "614e240f-4195-4915-9e2e-d142c9df25bc" (UID: "614e240f-4195-4915-9e2e-d142c9df25bc"). InnerVolumeSpecName "lokistack-gateway". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:48:06 crc kubenswrapper[4755]: I1210 15:48:06.174307 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/614e240f-4195-4915-9e2e-d142c9df25bc-tenants" (OuterVolumeSpecName: "tenants") pod "614e240f-4195-4915-9e2e-d142c9df25bc" (UID: "614e240f-4195-4915-9e2e-d142c9df25bc"). InnerVolumeSpecName "tenants". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:48:06 crc kubenswrapper[4755]: I1210 15:48:06.184823 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/614e240f-4195-4915-9e2e-d142c9df25bc-rbac" (OuterVolumeSpecName: "rbac") pod "614e240f-4195-4915-9e2e-d142c9df25bc" (UID: "614e240f-4195-4915-9e2e-d142c9df25bc"). InnerVolumeSpecName "rbac". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:48:06 crc kubenswrapper[4755]: I1210 15:48:06.244324 4755 reconciler_common.go:293] "Volume detached for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/614e240f-4195-4915-9e2e-d142c9df25bc-rbac\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:06 crc kubenswrapper[4755]: I1210 15:48:06.244607 4755 reconciler_common.go:293] "Volume detached for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/614e240f-4195-4915-9e2e-d142c9df25bc-tls-secret\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:06 crc kubenswrapper[4755]: I1210 15:48:06.244676 4755 reconciler_common.go:293] "Volume detached for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/614e240f-4195-4915-9e2e-d142c9df25bc-lokistack-gateway\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:06 crc kubenswrapper[4755]: I1210 15:48:06.244760 4755 reconciler_common.go:293] "Volume detached for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/614e240f-4195-4915-9e2e-d142c9df25bc-cloudkitty-lokistack-gateway-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:06 crc kubenswrapper[4755]: I1210 15:48:06.244834 4755 reconciler_common.go:293] "Volume detached for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/614e240f-4195-4915-9e2e-d142c9df25bc-tenants\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:06 crc kubenswrapper[4755]: I1210 15:48:06.244886 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-495wc\" (UniqueName: \"kubernetes.io/projected/614e240f-4195-4915-9e2e-d142c9df25bc-kube-api-access-495wc\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:06 crc kubenswrapper[4755]: I1210 15:48:06.244937 4755 reconciler_common.go:293] "Volume detached for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/614e240f-4195-4915-9e2e-d142c9df25bc-cloudkitty-lokistack-gateway-client-http\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:06 crc kubenswrapper[4755]: I1210 15:48:06.349674 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-cnjnh" 
event={"ID":"6414fa30-9e0e-4dc8-99aa-d35799f2cb46","Type":"ContainerStarted","Data":"44b8dcfaaa5fcfcffa43017aed59c1d6bd2cd19a5eeabe93aa2d1f408907e4aa"} Dec 10 15:48:06 crc kubenswrapper[4755]: I1210 15:48:06.351625 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-cnjnh" Dec 10 15:48:06 crc kubenswrapper[4755]: I1210 15:48:06.355508 4755 generic.go:334] "Generic (PLEG): container finished" podID="614e240f-4195-4915-9e2e-d142c9df25bc" containerID="fac4c67f3712b705b2d5b858ede9fd435f66dc6c2b919dfb7e17194b94910a9a" exitCode=0 Dec 10 15:48:06 crc kubenswrapper[4755]: I1210 15:48:06.355658 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-sxgmq" Dec 10 15:48:06 crc kubenswrapper[4755]: I1210 15:48:06.355665 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-sxgmq" event={"ID":"614e240f-4195-4915-9e2e-d142c9df25bc","Type":"ContainerDied","Data":"fac4c67f3712b705b2d5b858ede9fd435f66dc6c2b919dfb7e17194b94910a9a"} Dec 10 15:48:06 crc kubenswrapper[4755]: I1210 15:48:06.355936 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-sxgmq" event={"ID":"614e240f-4195-4915-9e2e-d142c9df25bc","Type":"ContainerDied","Data":"1f7ac041a8355516479dd9b6caf9b2fe987070eca13c3855e6099fee67198e2d"} Dec 10 15:48:06 crc kubenswrapper[4755]: I1210 15:48:06.355984 4755 scope.go:117] "RemoveContainer" containerID="fac4c67f3712b705b2d5b858ede9fd435f66dc6c2b919dfb7e17194b94910a9a" Dec 10 15:48:06 crc kubenswrapper[4755]: I1210 15:48:06.364158 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-cnjnh" Dec 10 15:48:06 crc kubenswrapper[4755]: I1210 15:48:06.385833 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-cnjnh" podStartSLOduration=2.385814467 podStartE2EDuration="2.385814467s" podCreationTimestamp="2025-12-10 15:48:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:48:06.36976936 +0000 UTC m=+1482.970653002" watchObservedRunningTime="2025-12-10 15:48:06.385814467 +0000 UTC m=+1482.986698099" Dec 10 15:48:06 crc kubenswrapper[4755]: I1210 15:48:06.393884 4755 scope.go:117] "RemoveContainer" containerID="fac4c67f3712b705b2d5b858ede9fd435f66dc6c2b919dfb7e17194b94910a9a" Dec 10 15:48:06 crc kubenswrapper[4755]: E1210 15:48:06.394583 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fac4c67f3712b705b2d5b858ede9fd435f66dc6c2b919dfb7e17194b94910a9a\": container with ID starting with fac4c67f3712b705b2d5b858ede9fd435f66dc6c2b919dfb7e17194b94910a9a not found: ID does not exist" containerID="fac4c67f3712b705b2d5b858ede9fd435f66dc6c2b919dfb7e17194b94910a9a" Dec 10 15:48:06 crc kubenswrapper[4755]: I1210 15:48:06.394621 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fac4c67f3712b705b2d5b858ede9fd435f66dc6c2b919dfb7e17194b94910a9a"} err="failed to get container status \"fac4c67f3712b705b2d5b858ede9fd435f66dc6c2b919dfb7e17194b94910a9a\": rpc error: code = NotFound desc = could not find container \"fac4c67f3712b705b2d5b858ede9fd435f66dc6c2b919dfb7e17194b94910a9a\": container with ID 
starting with fac4c67f3712b705b2d5b858ede9fd435f66dc6c2b919dfb7e17194b94910a9a not found: ID does not exist" Dec 10 15:48:06 crc kubenswrapper[4755]: I1210 15:48:06.475973 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-bc75944f-sxgmq"] Dec 10 15:48:06 crc kubenswrapper[4755]: I1210 15:48:06.488775 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-bc75944f-sxgmq"] Dec 10 15:48:06 crc kubenswrapper[4755]: I1210 15:48:06.504436 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-bc75944f-hm8ng"] Dec 10 15:48:06 crc kubenswrapper[4755]: I1210 15:48:06.504663 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-hm8ng" podUID="145f8d2b-2e95-4227-8c76-1f3ee8eab754" containerName="gateway" containerID="cri-o://6dbc5523bfe7760f1e4d754bba1ea83694ed3f21bd02f816ade743f508382e9a" gracePeriod=30 Dec 10 15:48:07 crc kubenswrapper[4755]: I1210 15:48:07.008529 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-db-sync-jr6l4"] Dec 10 15:48:07 crc kubenswrapper[4755]: I1210 15:48:07.020182 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-db-sync-jr6l4"] Dec 10 15:48:07 crc kubenswrapper[4755]: I1210 15:48:07.108967 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-db-sync-jfc28"] Dec 10 15:48:07 crc kubenswrapper[4755]: E1210 15:48:07.109421 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="614e240f-4195-4915-9e2e-d142c9df25bc" containerName="gateway" Dec 10 15:48:07 crc kubenswrapper[4755]: I1210 15:48:07.109437 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="614e240f-4195-4915-9e2e-d142c9df25bc" containerName="gateway" Dec 10 15:48:07 crc kubenswrapper[4755]: I1210 15:48:07.109651 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="614e240f-4195-4915-9e2e-d142c9df25bc" containerName="gateway" Dec 10 15:48:07 crc kubenswrapper[4755]: I1210 15:48:07.110479 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-db-sync-jfc28" Dec 10 15:48:07 crc kubenswrapper[4755]: I1210 15:48:07.112479 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 10 15:48:07 crc kubenswrapper[4755]: I1210 15:48:07.130542 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-sync-jfc28"] Dec 10 15:48:07 crc kubenswrapper[4755]: I1210 15:48:07.175754 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/998863b6-4f48-4c8b-8011-a40377686b99-certs\") pod \"cloudkitty-db-sync-jfc28\" (UID: \"998863b6-4f48-4c8b-8011-a40377686b99\") " pod="openstack/cloudkitty-db-sync-jfc28" Dec 10 15:48:07 crc kubenswrapper[4755]: I1210 15:48:07.176074 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/998863b6-4f48-4c8b-8011-a40377686b99-scripts\") pod \"cloudkitty-db-sync-jfc28\" (UID: \"998863b6-4f48-4c8b-8011-a40377686b99\") " pod="openstack/cloudkitty-db-sync-jfc28" Dec 10 15:48:07 crc kubenswrapper[4755]: I1210 15:48:07.176263 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/998863b6-4f48-4c8b-8011-a40377686b99-config-data\") pod \"cloudkitty-db-sync-jfc28\" (UID: \"998863b6-4f48-4c8b-8011-a40377686b99\") " pod="openstack/cloudkitty-db-sync-jfc28" Dec 10 15:48:07 crc kubenswrapper[4755]: I1210 15:48:07.176448 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mz4t5\" (UniqueName: \"kubernetes.io/projected/998863b6-4f48-4c8b-8011-a40377686b99-kube-api-access-mz4t5\") pod \"cloudkitty-db-sync-jfc28\" (UID: \"998863b6-4f48-4c8b-8011-a40377686b99\") " pod="openstack/cloudkitty-db-sync-jfc28" Dec 10 15:48:07 crc kubenswrapper[4755]: I1210 15:48:07.176668 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/998863b6-4f48-4c8b-8011-a40377686b99-combined-ca-bundle\") pod \"cloudkitty-db-sync-jfc28\" (UID: \"998863b6-4f48-4c8b-8011-a40377686b99\") " pod="openstack/cloudkitty-db-sync-jfc28" Dec 10 15:48:07 crc kubenswrapper[4755]: I1210 15:48:07.278505 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/998863b6-4f48-4c8b-8011-a40377686b99-combined-ca-bundle\") pod \"cloudkitty-db-sync-jfc28\" (UID: \"998863b6-4f48-4c8b-8011-a40377686b99\") " pod="openstack/cloudkitty-db-sync-jfc28" Dec 10 15:48:07 crc kubenswrapper[4755]: I1210 15:48:07.278689 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/998863b6-4f48-4c8b-8011-a40377686b99-certs\") pod \"cloudkitty-db-sync-jfc28\" (UID: \"998863b6-4f48-4c8b-8011-a40377686b99\") " pod="openstack/cloudkitty-db-sync-jfc28" Dec 10 15:48:07 crc kubenswrapper[4755]: I1210 15:48:07.278764 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/998863b6-4f48-4c8b-8011-a40377686b99-scripts\") pod \"cloudkitty-db-sync-jfc28\" (UID: \"998863b6-4f48-4c8b-8011-a40377686b99\") " pod="openstack/cloudkitty-db-sync-jfc28" Dec 10 15:48:07 crc kubenswrapper[4755]: I1210 15:48:07.278794 4755 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/998863b6-4f48-4c8b-8011-a40377686b99-config-data\") pod \"cloudkitty-db-sync-jfc28\" (UID: \"998863b6-4f48-4c8b-8011-a40377686b99\") " pod="openstack/cloudkitty-db-sync-jfc28" Dec 10 15:48:07 crc kubenswrapper[4755]: I1210 15:48:07.278861 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mz4t5\" (UniqueName: \"kubernetes.io/projected/998863b6-4f48-4c8b-8011-a40377686b99-kube-api-access-mz4t5\") pod \"cloudkitty-db-sync-jfc28\" (UID: \"998863b6-4f48-4c8b-8011-a40377686b99\") " pod="openstack/cloudkitty-db-sync-jfc28" Dec 10 15:48:07 crc kubenswrapper[4755]: I1210 15:48:07.284745 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/998863b6-4f48-4c8b-8011-a40377686b99-certs\") pod \"cloudkitty-db-sync-jfc28\" (UID: \"998863b6-4f48-4c8b-8011-a40377686b99\") " pod="openstack/cloudkitty-db-sync-jfc28" Dec 10 15:48:07 crc kubenswrapper[4755]: I1210 15:48:07.284868 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/998863b6-4f48-4c8b-8011-a40377686b99-scripts\") pod \"cloudkitty-db-sync-jfc28\" (UID: \"998863b6-4f48-4c8b-8011-a40377686b99\") " pod="openstack/cloudkitty-db-sync-jfc28" Dec 10 15:48:07 crc kubenswrapper[4755]: I1210 15:48:07.285203 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/998863b6-4f48-4c8b-8011-a40377686b99-combined-ca-bundle\") pod \"cloudkitty-db-sync-jfc28\" (UID: \"998863b6-4f48-4c8b-8011-a40377686b99\") " pod="openstack/cloudkitty-db-sync-jfc28" Dec 10 15:48:07 crc kubenswrapper[4755]: I1210 15:48:07.298710 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/998863b6-4f48-4c8b-8011-a40377686b99-config-data\") pod \"cloudkitty-db-sync-jfc28\" (UID: \"998863b6-4f48-4c8b-8011-a40377686b99\") " pod="openstack/cloudkitty-db-sync-jfc28" Dec 10 15:48:07 crc kubenswrapper[4755]: I1210 15:48:07.300006 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mz4t5\" (UniqueName: \"kubernetes.io/projected/998863b6-4f48-4c8b-8011-a40377686b99-kube-api-access-mz4t5\") pod \"cloudkitty-db-sync-jfc28\" (UID: \"998863b6-4f48-4c8b-8011-a40377686b99\") " pod="openstack/cloudkitty-db-sync-jfc28" Dec 10 15:48:07 crc kubenswrapper[4755]: I1210 15:48:07.428751 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-db-sync-jfc28" Dec 10 15:48:07 crc kubenswrapper[4755]: I1210 15:48:07.771024 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="614e240f-4195-4915-9e2e-d142c9df25bc" path="/var/lib/kubelet/pods/614e240f-4195-4915-9e2e-d142c9df25bc/volumes" Dec 10 15:48:07 crc kubenswrapper[4755]: I1210 15:48:07.772505 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbc4e627-8238-49b1-a0ac-48d07a29c23a" path="/var/lib/kubelet/pods/cbc4e627-8238-49b1-a0ac-48d07a29c23a/volumes" Dec 10 15:48:07 crc kubenswrapper[4755]: I1210 15:48:07.996367 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-sync-jfc28"] Dec 10 15:48:07 crc kubenswrapper[4755]: W1210 15:48:07.997283 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod998863b6_4f48_4c8b_8011_a40377686b99.slice/crio-268a773575e8e7bc4c955a37a02920f4ec547cd4b0d892b69836cd046d01ff24 WatchSource:0}: Error finding container 268a773575e8e7bc4c955a37a02920f4ec547cd4b0d892b69836cd046d01ff24: Status 404 returned error can't find the container with id 268a773575e8e7bc4c955a37a02920f4ec547cd4b0d892b69836cd046d01ff24 Dec 10 15:48:08 crc kubenswrapper[4755]: E1210 15:48:08.126218 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 10 15:48:08 crc kubenswrapper[4755]: E1210 15:48:08.126292 4755 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 10 15:48:08 crc kubenswrapper[4755]: E1210 15:48:08.126497 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mz4t5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-jfc28_openstack(998863b6-4f48-4c8b-8011-a40377686b99): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 10 15:48:08 crc kubenswrapper[4755]: E1210 15:48:08.127888 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 15:48:08 crc kubenswrapper[4755]: I1210 15:48:08.152024 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-hm8ng" Dec 10 15:48:08 crc kubenswrapper[4755]: I1210 15:48:08.266192 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/145f8d2b-2e95-4227-8c76-1f3ee8eab754-lokistack-gateway\") pod \"145f8d2b-2e95-4227-8c76-1f3ee8eab754\" (UID: \"145f8d2b-2e95-4227-8c76-1f3ee8eab754\") " Dec 10 15:48:08 crc kubenswrapper[4755]: I1210 15:48:08.266274 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jsqc6\" (UniqueName: \"kubernetes.io/projected/145f8d2b-2e95-4227-8c76-1f3ee8eab754-kube-api-access-jsqc6\") pod \"145f8d2b-2e95-4227-8c76-1f3ee8eab754\" (UID: \"145f8d2b-2e95-4227-8c76-1f3ee8eab754\") " Dec 10 15:48:08 crc kubenswrapper[4755]: I1210 15:48:08.266347 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/145f8d2b-2e95-4227-8c76-1f3ee8eab754-cloudkitty-lokistack-gateway-client-http\") pod \"145f8d2b-2e95-4227-8c76-1f3ee8eab754\" (UID: \"145f8d2b-2e95-4227-8c76-1f3ee8eab754\") " Dec 10 15:48:08 crc kubenswrapper[4755]: I1210 15:48:08.266455 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/145f8d2b-2e95-4227-8c76-1f3ee8eab754-tenants\") pod \"145f8d2b-2e95-4227-8c76-1f3ee8eab754\" (UID: \"145f8d2b-2e95-4227-8c76-1f3ee8eab754\") " Dec 10 15:48:08 crc kubenswrapper[4755]: I1210 15:48:08.266514 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/145f8d2b-2e95-4227-8c76-1f3ee8eab754-rbac\") pod \"145f8d2b-2e95-4227-8c76-1f3ee8eab754\" (UID: \"145f8d2b-2e95-4227-8c76-1f3ee8eab754\") " Dec 10 15:48:08 crc kubenswrapper[4755]: I1210 15:48:08.266604 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/145f8d2b-2e95-4227-8c76-1f3ee8eab754-cloudkitty-lokistack-ca-bundle\") pod \"145f8d2b-2e95-4227-8c76-1f3ee8eab754\" (UID: \"145f8d2b-2e95-4227-8c76-1f3ee8eab754\") " Dec 10 15:48:08 crc kubenswrapper[4755]: I1210 15:48:08.266659 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/145f8d2b-2e95-4227-8c76-1f3ee8eab754-tls-secret\") pod \"145f8d2b-2e95-4227-8c76-1f3ee8eab754\" (UID: \"145f8d2b-2e95-4227-8c76-1f3ee8eab754\") " Dec 10 15:48:08 crc kubenswrapper[4755]: I1210 15:48:08.266789 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/145f8d2b-2e95-4227-8c76-1f3ee8eab754-cloudkitty-ca-bundle\") pod \"145f8d2b-2e95-4227-8c76-1f3ee8eab754\" (UID: \"145f8d2b-2e95-4227-8c76-1f3ee8eab754\") " Dec 10 15:48:08 crc kubenswrapper[4755]: I1210 15:48:08.266842 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/145f8d2b-2e95-4227-8c76-1f3ee8eab754-cloudkitty-lokistack-gateway-ca-bundle\") pod 
\"145f8d2b-2e95-4227-8c76-1f3ee8eab754\" (UID: \"145f8d2b-2e95-4227-8c76-1f3ee8eab754\") " Dec 10 15:48:08 crc kubenswrapper[4755]: I1210 15:48:08.268242 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/145f8d2b-2e95-4227-8c76-1f3ee8eab754-cloudkitty-lokistack-ca-bundle" (OuterVolumeSpecName: "cloudkitty-lokistack-ca-bundle") pod "145f8d2b-2e95-4227-8c76-1f3ee8eab754" (UID: "145f8d2b-2e95-4227-8c76-1f3ee8eab754"). InnerVolumeSpecName "cloudkitty-lokistack-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:48:08 crc kubenswrapper[4755]: I1210 15:48:08.268740 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/145f8d2b-2e95-4227-8c76-1f3ee8eab754-cloudkitty-lokistack-gateway-ca-bundle" (OuterVolumeSpecName: "cloudkitty-lokistack-gateway-ca-bundle") pod "145f8d2b-2e95-4227-8c76-1f3ee8eab754" (UID: "145f8d2b-2e95-4227-8c76-1f3ee8eab754"). InnerVolumeSpecName "cloudkitty-lokistack-gateway-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:48:08 crc kubenswrapper[4755]: I1210 15:48:08.276650 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/145f8d2b-2e95-4227-8c76-1f3ee8eab754-kube-api-access-jsqc6" (OuterVolumeSpecName: "kube-api-access-jsqc6") pod "145f8d2b-2e95-4227-8c76-1f3ee8eab754" (UID: "145f8d2b-2e95-4227-8c76-1f3ee8eab754"). InnerVolumeSpecName "kube-api-access-jsqc6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:48:08 crc kubenswrapper[4755]: I1210 15:48:08.282967 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/145f8d2b-2e95-4227-8c76-1f3ee8eab754-cloudkitty-ca-bundle" (OuterVolumeSpecName: "cloudkitty-ca-bundle") pod "145f8d2b-2e95-4227-8c76-1f3ee8eab754" (UID: "145f8d2b-2e95-4227-8c76-1f3ee8eab754"). InnerVolumeSpecName "cloudkitty-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:48:08 crc kubenswrapper[4755]: I1210 15:48:08.289550 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/145f8d2b-2e95-4227-8c76-1f3ee8eab754-tls-secret" (OuterVolumeSpecName: "tls-secret") pod "145f8d2b-2e95-4227-8c76-1f3ee8eab754" (UID: "145f8d2b-2e95-4227-8c76-1f3ee8eab754"). InnerVolumeSpecName "tls-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:48:08 crc kubenswrapper[4755]: I1210 15:48:08.289636 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/145f8d2b-2e95-4227-8c76-1f3ee8eab754-cloudkitty-lokistack-gateway-client-http" (OuterVolumeSpecName: "cloudkitty-lokistack-gateway-client-http") pod "145f8d2b-2e95-4227-8c76-1f3ee8eab754" (UID: "145f8d2b-2e95-4227-8c76-1f3ee8eab754"). InnerVolumeSpecName "cloudkitty-lokistack-gateway-client-http". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:48:08 crc kubenswrapper[4755]: I1210 15:48:08.370805 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/145f8d2b-2e95-4227-8c76-1f3ee8eab754-lokistack-gateway" (OuterVolumeSpecName: "lokistack-gateway") pod "145f8d2b-2e95-4227-8c76-1f3ee8eab754" (UID: "145f8d2b-2e95-4227-8c76-1f3ee8eab754"). InnerVolumeSpecName "lokistack-gateway". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:48:08 crc kubenswrapper[4755]: I1210 15:48:08.371859 4755 reconciler_common.go:293] "Volume detached for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/145f8d2b-2e95-4227-8c76-1f3ee8eab754-cloudkitty-lokistack-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:08 crc kubenswrapper[4755]: I1210 15:48:08.371886 4755 reconciler_common.go:293] "Volume detached for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/145f8d2b-2e95-4227-8c76-1f3ee8eab754-tls-secret\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:08 crc kubenswrapper[4755]: I1210 15:48:08.371896 4755 reconciler_common.go:293] "Volume detached for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/145f8d2b-2e95-4227-8c76-1f3ee8eab754-cloudkitty-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:08 crc kubenswrapper[4755]: I1210 15:48:08.371905 4755 reconciler_common.go:293] "Volume detached for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/145f8d2b-2e95-4227-8c76-1f3ee8eab754-cloudkitty-lokistack-gateway-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:08 crc kubenswrapper[4755]: I1210 15:48:08.371915 4755 reconciler_common.go:293] "Volume detached for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/145f8d2b-2e95-4227-8c76-1f3ee8eab754-lokistack-gateway\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:08 crc kubenswrapper[4755]: I1210 15:48:08.371925 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jsqc6\" (UniqueName: \"kubernetes.io/projected/145f8d2b-2e95-4227-8c76-1f3ee8eab754-kube-api-access-jsqc6\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:08 crc kubenswrapper[4755]: I1210 15:48:08.371934 4755 reconciler_common.go:293] "Volume detached for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/145f8d2b-2e95-4227-8c76-1f3ee8eab754-cloudkitty-lokistack-gateway-client-http\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:08 crc kubenswrapper[4755]: I1210 15:48:08.383947 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/145f8d2b-2e95-4227-8c76-1f3ee8eab754-rbac" (OuterVolumeSpecName: "rbac") pod "145f8d2b-2e95-4227-8c76-1f3ee8eab754" (UID: "145f8d2b-2e95-4227-8c76-1f3ee8eab754"). InnerVolumeSpecName "rbac". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:48:08 crc kubenswrapper[4755]: I1210 15:48:08.388297 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/145f8d2b-2e95-4227-8c76-1f3ee8eab754-tenants" (OuterVolumeSpecName: "tenants") pod "145f8d2b-2e95-4227-8c76-1f3ee8eab754" (UID: "145f8d2b-2e95-4227-8c76-1f3ee8eab754"). InnerVolumeSpecName "tenants". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:48:08 crc kubenswrapper[4755]: I1210 15:48:08.395944 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-jfc28" event={"ID":"998863b6-4f48-4c8b-8011-a40377686b99","Type":"ContainerStarted","Data":"268a773575e8e7bc4c955a37a02920f4ec547cd4b0d892b69836cd046d01ff24"} Dec 10 15:48:08 crc kubenswrapper[4755]: E1210 15:48:08.397835 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 15:48:08 crc kubenswrapper[4755]: I1210 15:48:08.399186 4755 generic.go:334] "Generic (PLEG): container finished" podID="145f8d2b-2e95-4227-8c76-1f3ee8eab754" containerID="6dbc5523bfe7760f1e4d754bba1ea83694ed3f21bd02f816ade743f508382e9a" exitCode=0 Dec 10 15:48:08 crc kubenswrapper[4755]: I1210 15:48:08.399393 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-hm8ng" event={"ID":"145f8d2b-2e95-4227-8c76-1f3ee8eab754","Type":"ContainerDied","Data":"6dbc5523bfe7760f1e4d754bba1ea83694ed3f21bd02f816ade743f508382e9a"} Dec 10 15:48:08 crc kubenswrapper[4755]: I1210 15:48:08.399424 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-hm8ng" Dec 10 15:48:08 crc kubenswrapper[4755]: I1210 15:48:08.399434 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-hm8ng" event={"ID":"145f8d2b-2e95-4227-8c76-1f3ee8eab754","Type":"ContainerDied","Data":"14683c9e4e9c948b3cc4b13dfd85b47e6c8dabb8603f156fe6163d73a492cc15"} Dec 10 15:48:08 crc kubenswrapper[4755]: I1210 15:48:08.399446 4755 scope.go:117] "RemoveContainer" containerID="6dbc5523bfe7760f1e4d754bba1ea83694ed3f21bd02f816ade743f508382e9a" Dec 10 15:48:08 crc kubenswrapper[4755]: I1210 15:48:08.476501 4755 reconciler_common.go:293] "Volume detached for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/145f8d2b-2e95-4227-8c76-1f3ee8eab754-tenants\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:08 crc kubenswrapper[4755]: I1210 15:48:08.479051 4755 reconciler_common.go:293] "Volume detached for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/145f8d2b-2e95-4227-8c76-1f3ee8eab754-rbac\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:08 crc kubenswrapper[4755]: I1210 15:48:08.486899 4755 scope.go:117] "RemoveContainer" containerID="6dbc5523bfe7760f1e4d754bba1ea83694ed3f21bd02f816ade743f508382e9a" Dec 10 15:48:08 crc kubenswrapper[4755]: E1210 15:48:08.487426 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6dbc5523bfe7760f1e4d754bba1ea83694ed3f21bd02f816ade743f508382e9a\": container with ID starting with 6dbc5523bfe7760f1e4d754bba1ea83694ed3f21bd02f816ade743f508382e9a not found: ID does not exist" containerID="6dbc5523bfe7760f1e4d754bba1ea83694ed3f21bd02f816ade743f508382e9a" Dec 10 15:48:08 crc kubenswrapper[4755]: I1210 15:48:08.487492 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6dbc5523bfe7760f1e4d754bba1ea83694ed3f21bd02f816ade743f508382e9a"} err="failed to get container status \"6dbc5523bfe7760f1e4d754bba1ea83694ed3f21bd02f816ade743f508382e9a\": rpc 
error: code = NotFound desc = could not find container \"6dbc5523bfe7760f1e4d754bba1ea83694ed3f21bd02f816ade743f508382e9a\": container with ID starting with 6dbc5523bfe7760f1e4d754bba1ea83694ed3f21bd02f816ade743f508382e9a not found: ID does not exist" Dec 10 15:48:08 crc kubenswrapper[4755]: I1210 15:48:08.489570 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-bc75944f-hm8ng"] Dec 10 15:48:08 crc kubenswrapper[4755]: I1210 15:48:08.506815 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-bc75944f-hm8ng"] Dec 10 15:48:08 crc kubenswrapper[4755]: I1210 15:48:08.966979 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 10 15:48:09 crc kubenswrapper[4755]: I1210 15:48:09.354046 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="e5a3871d-6b81-4b3d-9044-fcbcf437effb" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 10 15:48:09 crc kubenswrapper[4755]: I1210 15:48:09.370819 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 10 15:48:09 crc kubenswrapper[4755]: I1210 15:48:09.371158 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b2fcbda1-d1f1-4983-a9bc-d8575c3872d3" containerName="ceilometer-central-agent" containerID="cri-o://ff733829e2460a8346b0092d4246cb7c60ef161a3c1f2ca80cef8fd8a5c403d6" gracePeriod=30 Dec 10 15:48:09 crc kubenswrapper[4755]: I1210 15:48:09.371298 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b2fcbda1-d1f1-4983-a9bc-d8575c3872d3" containerName="proxy-httpd" containerID="cri-o://632225f0f1b6e3a898b37ef5dd601955d7ce1d91e0cc3d9d660312af9918ff7f" gracePeriod=30 Dec 10 15:48:09 crc kubenswrapper[4755]: I1210 15:48:09.371353 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b2fcbda1-d1f1-4983-a9bc-d8575c3872d3" containerName="sg-core" containerID="cri-o://6bbe1eaf6240c09e1b40c1e0dac6f2375549f75bea162ba608a2126b6d7054ad" gracePeriod=30 Dec 10 15:48:09 crc kubenswrapper[4755]: I1210 15:48:09.371395 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b2fcbda1-d1f1-4983-a9bc-d8575c3872d3" containerName="ceilometer-notification-agent" containerID="cri-o://22b1e7b1fe5466a2a30affe55a39f8b15b7cac87d7e28673310bd867ea2308c5" gracePeriod=30 Dec 10 15:48:09 crc kubenswrapper[4755]: E1210 15:48:09.414805 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 15:48:09 crc kubenswrapper[4755]: I1210 15:48:09.568888 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-compactor-0" podUID="31bbbf2c-5266-4ea7-8428-ed2607013a35" containerName="loki-compactor" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 10 15:48:09 crc kubenswrapper[4755]: I1210 15:48:09.576659 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-index-gateway-0" 
podUID="4e702de9-8dda-4370-b806-41083a70ac41" containerName="loki-index-gateway" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 10 15:48:09 crc kubenswrapper[4755]: I1210 15:48:09.768651 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="145f8d2b-2e95-4227-8c76-1f3ee8eab754" path="/var/lib/kubelet/pods/145f8d2b-2e95-4227-8c76-1f3ee8eab754/volumes" Dec 10 15:48:10 crc kubenswrapper[4755]: I1210 15:48:10.004827 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 10 15:48:10 crc kubenswrapper[4755]: I1210 15:48:10.425797 4755 generic.go:334] "Generic (PLEG): container finished" podID="b2fcbda1-d1f1-4983-a9bc-d8575c3872d3" containerID="632225f0f1b6e3a898b37ef5dd601955d7ce1d91e0cc3d9d660312af9918ff7f" exitCode=0 Dec 10 15:48:10 crc kubenswrapper[4755]: I1210 15:48:10.425836 4755 generic.go:334] "Generic (PLEG): container finished" podID="b2fcbda1-d1f1-4983-a9bc-d8575c3872d3" containerID="6bbe1eaf6240c09e1b40c1e0dac6f2375549f75bea162ba608a2126b6d7054ad" exitCode=2 Dec 10 15:48:10 crc kubenswrapper[4755]: I1210 15:48:10.425846 4755 generic.go:334] "Generic (PLEG): container finished" podID="b2fcbda1-d1f1-4983-a9bc-d8575c3872d3" containerID="ff733829e2460a8346b0092d4246cb7c60ef161a3c1f2ca80cef8fd8a5c403d6" exitCode=0 Dec 10 15:48:10 crc kubenswrapper[4755]: I1210 15:48:10.425847 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2fcbda1-d1f1-4983-a9bc-d8575c3872d3","Type":"ContainerDied","Data":"632225f0f1b6e3a898b37ef5dd601955d7ce1d91e0cc3d9d660312af9918ff7f"} Dec 10 15:48:10 crc kubenswrapper[4755]: I1210 15:48:10.425903 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2fcbda1-d1f1-4983-a9bc-d8575c3872d3","Type":"ContainerDied","Data":"6bbe1eaf6240c09e1b40c1e0dac6f2375549f75bea162ba608a2126b6d7054ad"} Dec 10 15:48:10 crc kubenswrapper[4755]: I1210 15:48:10.425915 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2fcbda1-d1f1-4983-a9bc-d8575c3872d3","Type":"ContainerDied","Data":"ff733829e2460a8346b0092d4246cb7c60ef161a3c1f2ca80cef8fd8a5c403d6"} Dec 10 15:48:11 crc kubenswrapper[4755]: I1210 15:48:11.817969 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="b2fcbda1-d1f1-4983-a9bc-d8575c3872d3" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.227:3000/\": dial tcp 10.217.0.227:3000: connect: connection refused" Dec 10 15:48:14 crc kubenswrapper[4755]: I1210 15:48:14.415260 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="89e8722f-e9fc-4850-bb96-e51f9859805e" containerName="rabbitmq" containerID="cri-o://b4fc1550b67e9a56eb7b14dcfe33b6d44f6c5c3cdeba56241c9f298b59af77a0" gracePeriod=604795 Dec 10 15:48:14 crc kubenswrapper[4755]: I1210 15:48:14.505502 4755 generic.go:334] "Generic (PLEG): container finished" podID="b2fcbda1-d1f1-4983-a9bc-d8575c3872d3" containerID="22b1e7b1fe5466a2a30affe55a39f8b15b7cac87d7e28673310bd867ea2308c5" exitCode=0 Dec 10 15:48:14 crc kubenswrapper[4755]: I1210 15:48:14.505576 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2fcbda1-d1f1-4983-a9bc-d8575c3872d3","Type":"ContainerDied","Data":"22b1e7b1fe5466a2a30affe55a39f8b15b7cac87d7e28673310bd867ea2308c5"} Dec 10 15:48:14 crc kubenswrapper[4755]: I1210 15:48:14.685757 4755 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 10 15:48:14 crc kubenswrapper[4755]: I1210 15:48:14.774673 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2fcbda1-d1f1-4983-a9bc-d8575c3872d3-log-httpd\") pod \"b2fcbda1-d1f1-4983-a9bc-d8575c3872d3\" (UID: \"b2fcbda1-d1f1-4983-a9bc-d8575c3872d3\") " Dec 10 15:48:14 crc kubenswrapper[4755]: I1210 15:48:14.774724 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2fcbda1-d1f1-4983-a9bc-d8575c3872d3-ceilometer-tls-certs\") pod \"b2fcbda1-d1f1-4983-a9bc-d8575c3872d3\" (UID: \"b2fcbda1-d1f1-4983-a9bc-d8575c3872d3\") " Dec 10 15:48:14 crc kubenswrapper[4755]: I1210 15:48:14.774755 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b2fcbda1-d1f1-4983-a9bc-d8575c3872d3-sg-core-conf-yaml\") pod \"b2fcbda1-d1f1-4983-a9bc-d8575c3872d3\" (UID: \"b2fcbda1-d1f1-4983-a9bc-d8575c3872d3\") " Dec 10 15:48:14 crc kubenswrapper[4755]: I1210 15:48:14.774821 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2fcbda1-d1f1-4983-a9bc-d8575c3872d3-combined-ca-bundle\") pod \"b2fcbda1-d1f1-4983-a9bc-d8575c3872d3\" (UID: \"b2fcbda1-d1f1-4983-a9bc-d8575c3872d3\") " Dec 10 15:48:14 crc kubenswrapper[4755]: I1210 15:48:14.774898 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2fcbda1-d1f1-4983-a9bc-d8575c3872d3-run-httpd\") pod \"b2fcbda1-d1f1-4983-a9bc-d8575c3872d3\" (UID: \"b2fcbda1-d1f1-4983-a9bc-d8575c3872d3\") " Dec 10 15:48:14 crc kubenswrapper[4755]: I1210 15:48:14.774978 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvcrp\" (UniqueName: \"kubernetes.io/projected/b2fcbda1-d1f1-4983-a9bc-d8575c3872d3-kube-api-access-lvcrp\") pod \"b2fcbda1-d1f1-4983-a9bc-d8575c3872d3\" (UID: \"b2fcbda1-d1f1-4983-a9bc-d8575c3872d3\") " Dec 10 15:48:14 crc kubenswrapper[4755]: I1210 15:48:14.775076 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2fcbda1-d1f1-4983-a9bc-d8575c3872d3-scripts\") pod \"b2fcbda1-d1f1-4983-a9bc-d8575c3872d3\" (UID: \"b2fcbda1-d1f1-4983-a9bc-d8575c3872d3\") " Dec 10 15:48:14 crc kubenswrapper[4755]: I1210 15:48:14.775158 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2fcbda1-d1f1-4983-a9bc-d8575c3872d3-config-data\") pod \"b2fcbda1-d1f1-4983-a9bc-d8575c3872d3\" (UID: \"b2fcbda1-d1f1-4983-a9bc-d8575c3872d3\") " Dec 10 15:48:14 crc kubenswrapper[4755]: I1210 15:48:14.775287 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2fcbda1-d1f1-4983-a9bc-d8575c3872d3-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b2fcbda1-d1f1-4983-a9bc-d8575c3872d3" (UID: "b2fcbda1-d1f1-4983-a9bc-d8575c3872d3"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:48:14 crc kubenswrapper[4755]: I1210 15:48:14.775420 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2fcbda1-d1f1-4983-a9bc-d8575c3872d3-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b2fcbda1-d1f1-4983-a9bc-d8575c3872d3" (UID: "b2fcbda1-d1f1-4983-a9bc-d8575c3872d3"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:48:14 crc kubenswrapper[4755]: I1210 15:48:14.775871 4755 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2fcbda1-d1f1-4983-a9bc-d8575c3872d3-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:14 crc kubenswrapper[4755]: I1210 15:48:14.775899 4755 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2fcbda1-d1f1-4983-a9bc-d8575c3872d3-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:14 crc kubenswrapper[4755]: I1210 15:48:14.782764 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2fcbda1-d1f1-4983-a9bc-d8575c3872d3-scripts" (OuterVolumeSpecName: "scripts") pod "b2fcbda1-d1f1-4983-a9bc-d8575c3872d3" (UID: "b2fcbda1-d1f1-4983-a9bc-d8575c3872d3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:48:14 crc kubenswrapper[4755]: I1210 15:48:14.782875 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2fcbda1-d1f1-4983-a9bc-d8575c3872d3-kube-api-access-lvcrp" (OuterVolumeSpecName: "kube-api-access-lvcrp") pod "b2fcbda1-d1f1-4983-a9bc-d8575c3872d3" (UID: "b2fcbda1-d1f1-4983-a9bc-d8575c3872d3"). InnerVolumeSpecName "kube-api-access-lvcrp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:48:14 crc kubenswrapper[4755]: I1210 15:48:14.866233 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2fcbda1-d1f1-4983-a9bc-d8575c3872d3-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "b2fcbda1-d1f1-4983-a9bc-d8575c3872d3" (UID: "b2fcbda1-d1f1-4983-a9bc-d8575c3872d3"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:48:14 crc kubenswrapper[4755]: I1210 15:48:14.886411 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lvcrp\" (UniqueName: \"kubernetes.io/projected/b2fcbda1-d1f1-4983-a9bc-d8575c3872d3-kube-api-access-lvcrp\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:14 crc kubenswrapper[4755]: I1210 15:48:14.886454 4755 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2fcbda1-d1f1-4983-a9bc-d8575c3872d3-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:14 crc kubenswrapper[4755]: I1210 15:48:14.886486 4755 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2fcbda1-d1f1-4983-a9bc-d8575c3872d3-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:14 crc kubenswrapper[4755]: I1210 15:48:14.896648 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2fcbda1-d1f1-4983-a9bc-d8575c3872d3-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b2fcbda1-d1f1-4983-a9bc-d8575c3872d3" (UID: "b2fcbda1-d1f1-4983-a9bc-d8575c3872d3"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:48:14 crc kubenswrapper[4755]: I1210 15:48:14.989040 4755 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b2fcbda1-d1f1-4983-a9bc-d8575c3872d3-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:15 crc kubenswrapper[4755]: I1210 15:48:15.006622 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2fcbda1-d1f1-4983-a9bc-d8575c3872d3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b2fcbda1-d1f1-4983-a9bc-d8575c3872d3" (UID: "b2fcbda1-d1f1-4983-a9bc-d8575c3872d3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:48:15 crc kubenswrapper[4755]: I1210 15:48:15.021873 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2fcbda1-d1f1-4983-a9bc-d8575c3872d3-config-data" (OuterVolumeSpecName: "config-data") pod "b2fcbda1-d1f1-4983-a9bc-d8575c3872d3" (UID: "b2fcbda1-d1f1-4983-a9bc-d8575c3872d3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:48:15 crc kubenswrapper[4755]: I1210 15:48:15.090764 4755 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2fcbda1-d1f1-4983-a9bc-d8575c3872d3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:15 crc kubenswrapper[4755]: I1210 15:48:15.090798 4755 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2fcbda1-d1f1-4983-a9bc-d8575c3872d3-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:15 crc kubenswrapper[4755]: I1210 15:48:15.169957 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="fb480bc7-6936-4208-964b-44cffd08f907" containerName="rabbitmq" containerID="cri-o://8736ae2271c56389e0799a850f399ae7691aedda4dca66c57b45b6c795cfb756" gracePeriod=604796 Dec 10 15:48:15 crc kubenswrapper[4755]: I1210 15:48:15.517756 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2fcbda1-d1f1-4983-a9bc-d8575c3872d3","Type":"ContainerDied","Data":"d14c33e6ee2f2c88d10d0e352a5d0ef2656798b21de6c2da150e7842d50c5f5b"} Dec 10 15:48:15 crc kubenswrapper[4755]: I1210 15:48:15.518096 4755 scope.go:117] "RemoveContainer" containerID="632225f0f1b6e3a898b37ef5dd601955d7ce1d91e0cc3d9d660312af9918ff7f" Dec 10 15:48:15 crc kubenswrapper[4755]: I1210 15:48:15.517873 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 10 15:48:15 crc kubenswrapper[4755]: I1210 15:48:15.546987 4755 scope.go:117] "RemoveContainer" containerID="6bbe1eaf6240c09e1b40c1e0dac6f2375549f75bea162ba608a2126b6d7054ad" Dec 10 15:48:15 crc kubenswrapper[4755]: I1210 15:48:15.555663 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 10 15:48:15 crc kubenswrapper[4755]: I1210 15:48:15.572809 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 10 15:48:15 crc kubenswrapper[4755]: I1210 15:48:15.599573 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 10 15:48:15 crc kubenswrapper[4755]: E1210 15:48:15.600138 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="145f8d2b-2e95-4227-8c76-1f3ee8eab754" containerName="gateway" Dec 10 15:48:15 crc kubenswrapper[4755]: I1210 15:48:15.600159 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="145f8d2b-2e95-4227-8c76-1f3ee8eab754" containerName="gateway" Dec 10 15:48:15 crc kubenswrapper[4755]: E1210 15:48:15.600185 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2fcbda1-d1f1-4983-a9bc-d8575c3872d3" containerName="proxy-httpd" Dec 10 15:48:15 crc kubenswrapper[4755]: I1210 15:48:15.600193 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2fcbda1-d1f1-4983-a9bc-d8575c3872d3" containerName="proxy-httpd" Dec 10 15:48:15 crc kubenswrapper[4755]: E1210 15:48:15.600207 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2fcbda1-d1f1-4983-a9bc-d8575c3872d3" containerName="ceilometer-notification-agent" Dec 10 15:48:15 crc kubenswrapper[4755]: I1210 15:48:15.600215 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2fcbda1-d1f1-4983-a9bc-d8575c3872d3" containerName="ceilometer-notification-agent" Dec 10 15:48:15 crc kubenswrapper[4755]: E1210 15:48:15.600235 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2fcbda1-d1f1-4983-a9bc-d8575c3872d3" containerName="sg-core" Dec 10 15:48:15 crc kubenswrapper[4755]: I1210 15:48:15.600243 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2fcbda1-d1f1-4983-a9bc-d8575c3872d3" containerName="sg-core" Dec 10 15:48:15 crc kubenswrapper[4755]: E1210 15:48:15.600277 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2fcbda1-d1f1-4983-a9bc-d8575c3872d3" containerName="ceilometer-central-agent" Dec 10 15:48:15 crc kubenswrapper[4755]: I1210 15:48:15.600284 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2fcbda1-d1f1-4983-a9bc-d8575c3872d3" containerName="ceilometer-central-agent" Dec 10 15:48:15 crc kubenswrapper[4755]: I1210 15:48:15.600521 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2fcbda1-d1f1-4983-a9bc-d8575c3872d3" containerName="ceilometer-central-agent" Dec 10 15:48:15 crc kubenswrapper[4755]: I1210 15:48:15.600555 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2fcbda1-d1f1-4983-a9bc-d8575c3872d3" containerName="sg-core" Dec 10 15:48:15 crc kubenswrapper[4755]: I1210 15:48:15.600568 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2fcbda1-d1f1-4983-a9bc-d8575c3872d3" containerName="ceilometer-notification-agent" Dec 10 15:48:15 crc kubenswrapper[4755]: I1210 15:48:15.600586 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2fcbda1-d1f1-4983-a9bc-d8575c3872d3" containerName="proxy-httpd" Dec 10 15:48:15 crc kubenswrapper[4755]: I1210 15:48:15.600596 
4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="145f8d2b-2e95-4227-8c76-1f3ee8eab754" containerName="gateway" Dec 10 15:48:15 crc kubenswrapper[4755]: I1210 15:48:15.606087 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 10 15:48:15 crc kubenswrapper[4755]: I1210 15:48:15.610182 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 10 15:48:15 crc kubenswrapper[4755]: I1210 15:48:15.610208 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 10 15:48:15 crc kubenswrapper[4755]: I1210 15:48:15.610344 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 10 15:48:15 crc kubenswrapper[4755]: I1210 15:48:15.610506 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 10 15:48:15 crc kubenswrapper[4755]: I1210 15:48:15.655872 4755 scope.go:117] "RemoveContainer" containerID="22b1e7b1fe5466a2a30affe55a39f8b15b7cac87d7e28673310bd867ea2308c5" Dec 10 15:48:15 crc kubenswrapper[4755]: I1210 15:48:15.705654 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d104bea-ecdc-4fe1-9861-fb1a19fce845-log-httpd\") pod \"ceilometer-0\" (UID: \"6d104bea-ecdc-4fe1-9861-fb1a19fce845\") " pod="openstack/ceilometer-0" Dec 10 15:48:15 crc kubenswrapper[4755]: I1210 15:48:15.711692 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6d104bea-ecdc-4fe1-9861-fb1a19fce845-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6d104bea-ecdc-4fe1-9861-fb1a19fce845\") " pod="openstack/ceilometer-0" Dec 10 15:48:15 crc kubenswrapper[4755]: I1210 15:48:15.711843 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d104bea-ecdc-4fe1-9861-fb1a19fce845-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6d104bea-ecdc-4fe1-9861-fb1a19fce845\") " pod="openstack/ceilometer-0" Dec 10 15:48:15 crc kubenswrapper[4755]: I1210 15:48:15.711924 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hw9gj\" (UniqueName: \"kubernetes.io/projected/6d104bea-ecdc-4fe1-9861-fb1a19fce845-kube-api-access-hw9gj\") pod \"ceilometer-0\" (UID: \"6d104bea-ecdc-4fe1-9861-fb1a19fce845\") " pod="openstack/ceilometer-0" Dec 10 15:48:15 crc kubenswrapper[4755]: I1210 15:48:15.712183 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d104bea-ecdc-4fe1-9861-fb1a19fce845-run-httpd\") pod \"ceilometer-0\" (UID: \"6d104bea-ecdc-4fe1-9861-fb1a19fce845\") " pod="openstack/ceilometer-0" Dec 10 15:48:15 crc kubenswrapper[4755]: I1210 15:48:15.712309 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d104bea-ecdc-4fe1-9861-fb1a19fce845-config-data\") pod \"ceilometer-0\" (UID: \"6d104bea-ecdc-4fe1-9861-fb1a19fce845\") " pod="openstack/ceilometer-0" Dec 10 15:48:15 crc kubenswrapper[4755]: I1210 15:48:15.712412 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/6d104bea-ecdc-4fe1-9861-fb1a19fce845-scripts\") pod \"ceilometer-0\" (UID: \"6d104bea-ecdc-4fe1-9861-fb1a19fce845\") " pod="openstack/ceilometer-0" Dec 10 15:48:15 crc kubenswrapper[4755]: I1210 15:48:15.712608 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d104bea-ecdc-4fe1-9861-fb1a19fce845-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6d104bea-ecdc-4fe1-9861-fb1a19fce845\") " pod="openstack/ceilometer-0" Dec 10 15:48:15 crc kubenswrapper[4755]: I1210 15:48:15.715619 4755 scope.go:117] "RemoveContainer" containerID="ff733829e2460a8346b0092d4246cb7c60ef161a3c1f2ca80cef8fd8a5c403d6" Dec 10 15:48:15 crc kubenswrapper[4755]: I1210 15:48:15.795871 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2fcbda1-d1f1-4983-a9bc-d8575c3872d3" path="/var/lib/kubelet/pods/b2fcbda1-d1f1-4983-a9bc-d8575c3872d3/volumes" Dec 10 15:48:15 crc kubenswrapper[4755]: I1210 15:48:15.815669 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d104bea-ecdc-4fe1-9861-fb1a19fce845-config-data\") pod \"ceilometer-0\" (UID: \"6d104bea-ecdc-4fe1-9861-fb1a19fce845\") " pod="openstack/ceilometer-0" Dec 10 15:48:15 crc kubenswrapper[4755]: I1210 15:48:15.815742 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d104bea-ecdc-4fe1-9861-fb1a19fce845-scripts\") pod \"ceilometer-0\" (UID: \"6d104bea-ecdc-4fe1-9861-fb1a19fce845\") " pod="openstack/ceilometer-0" Dec 10 15:48:15 crc kubenswrapper[4755]: I1210 15:48:15.815800 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d104bea-ecdc-4fe1-9861-fb1a19fce845-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6d104bea-ecdc-4fe1-9861-fb1a19fce845\") " pod="openstack/ceilometer-0" Dec 10 15:48:15 crc kubenswrapper[4755]: I1210 15:48:15.815829 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d104bea-ecdc-4fe1-9861-fb1a19fce845-log-httpd\") pod \"ceilometer-0\" (UID: \"6d104bea-ecdc-4fe1-9861-fb1a19fce845\") " pod="openstack/ceilometer-0" Dec 10 15:48:15 crc kubenswrapper[4755]: I1210 15:48:15.815925 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6d104bea-ecdc-4fe1-9861-fb1a19fce845-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6d104bea-ecdc-4fe1-9861-fb1a19fce845\") " pod="openstack/ceilometer-0" Dec 10 15:48:15 crc kubenswrapper[4755]: I1210 15:48:15.815947 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d104bea-ecdc-4fe1-9861-fb1a19fce845-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6d104bea-ecdc-4fe1-9861-fb1a19fce845\") " pod="openstack/ceilometer-0" Dec 10 15:48:15 crc kubenswrapper[4755]: I1210 15:48:15.815983 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hw9gj\" (UniqueName: \"kubernetes.io/projected/6d104bea-ecdc-4fe1-9861-fb1a19fce845-kube-api-access-hw9gj\") pod \"ceilometer-0\" (UID: \"6d104bea-ecdc-4fe1-9861-fb1a19fce845\") " pod="openstack/ceilometer-0" Dec 10 15:48:15 crc kubenswrapper[4755]: I1210 
15:48:15.816088 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d104bea-ecdc-4fe1-9861-fb1a19fce845-run-httpd\") pod \"ceilometer-0\" (UID: \"6d104bea-ecdc-4fe1-9861-fb1a19fce845\") " pod="openstack/ceilometer-0" Dec 10 15:48:15 crc kubenswrapper[4755]: I1210 15:48:15.816592 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d104bea-ecdc-4fe1-9861-fb1a19fce845-run-httpd\") pod \"ceilometer-0\" (UID: \"6d104bea-ecdc-4fe1-9861-fb1a19fce845\") " pod="openstack/ceilometer-0" Dec 10 15:48:15 crc kubenswrapper[4755]: I1210 15:48:15.818390 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d104bea-ecdc-4fe1-9861-fb1a19fce845-log-httpd\") pod \"ceilometer-0\" (UID: \"6d104bea-ecdc-4fe1-9861-fb1a19fce845\") " pod="openstack/ceilometer-0" Dec 10 15:48:15 crc kubenswrapper[4755]: I1210 15:48:15.825789 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6d104bea-ecdc-4fe1-9861-fb1a19fce845-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6d104bea-ecdc-4fe1-9861-fb1a19fce845\") " pod="openstack/ceilometer-0" Dec 10 15:48:15 crc kubenswrapper[4755]: I1210 15:48:15.826105 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d104bea-ecdc-4fe1-9861-fb1a19fce845-config-data\") pod \"ceilometer-0\" (UID: \"6d104bea-ecdc-4fe1-9861-fb1a19fce845\") " pod="openstack/ceilometer-0" Dec 10 15:48:15 crc kubenswrapper[4755]: I1210 15:48:15.828662 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d104bea-ecdc-4fe1-9861-fb1a19fce845-scripts\") pod \"ceilometer-0\" (UID: \"6d104bea-ecdc-4fe1-9861-fb1a19fce845\") " pod="openstack/ceilometer-0" Dec 10 15:48:15 crc kubenswrapper[4755]: I1210 15:48:15.831239 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d104bea-ecdc-4fe1-9861-fb1a19fce845-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6d104bea-ecdc-4fe1-9861-fb1a19fce845\") " pod="openstack/ceilometer-0" Dec 10 15:48:15 crc kubenswrapper[4755]: I1210 15:48:15.850733 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d104bea-ecdc-4fe1-9861-fb1a19fce845-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6d104bea-ecdc-4fe1-9861-fb1a19fce845\") " pod="openstack/ceilometer-0" Dec 10 15:48:15 crc kubenswrapper[4755]: I1210 15:48:15.860399 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hw9gj\" (UniqueName: \"kubernetes.io/projected/6d104bea-ecdc-4fe1-9861-fb1a19fce845-kube-api-access-hw9gj\") pod \"ceilometer-0\" (UID: \"6d104bea-ecdc-4fe1-9861-fb1a19fce845\") " pod="openstack/ceilometer-0" Dec 10 15:48:15 crc kubenswrapper[4755]: I1210 15:48:15.951234 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 10 15:48:16 crc kubenswrapper[4755]: W1210 15:48:16.465501 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d104bea_ecdc_4fe1_9861_fb1a19fce845.slice/crio-5fcf247f051f5dbee4f07a49b149a204c224feca20b063b93fe8214ec6228a7b WatchSource:0}: Error finding container 5fcf247f051f5dbee4f07a49b149a204c224feca20b063b93fe8214ec6228a7b: Status 404 returned error can't find the container with id 5fcf247f051f5dbee4f07a49b149a204c224feca20b063b93fe8214ec6228a7b Dec 10 15:48:16 crc kubenswrapper[4755]: I1210 15:48:16.467264 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 10 15:48:16 crc kubenswrapper[4755]: I1210 15:48:16.530262 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6d104bea-ecdc-4fe1-9861-fb1a19fce845","Type":"ContainerStarted","Data":"5fcf247f051f5dbee4f07a49b149a204c224feca20b063b93fe8214ec6228a7b"} Dec 10 15:48:16 crc kubenswrapper[4755]: E1210 15:48:16.586007 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 10 15:48:16 crc kubenswrapper[4755]: E1210 15:48:16.586066 4755 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 10 15:48:16 crc kubenswrapper[4755]: E1210 15:48:16.586196 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5d4h5b7hfbh5ddh688h9ch55bh7chf6h5ddh68ch94h69h5c5h596h59bh569hfchc4h676hcbh64dhdbh57fh75h5c9h98h59ch679h566h77h9cq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hw9gj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(6d104bea-ecdc-4fe1-9861-fb1a19fce845): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Dec 10 15:48:17 crc kubenswrapper[4755]: I1210 15:48:17.542998 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6d104bea-ecdc-4fe1-9861-fb1a19fce845","Type":"ContainerStarted","Data":"6075dca685f49452b5a00026529213b23e5888f0aa1d0f77f362cb9d140623a9"} Dec 10 15:48:19 crc kubenswrapper[4755]: I1210 15:48:19.356189 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="e5a3871d-6b81-4b3d-9044-fcbcf437effb" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 10 15:48:19 crc kubenswrapper[4755]: I1210 15:48:19.356625 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:48:19 crc kubenswrapper[4755]: I1210 15:48:19.562636 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6d104bea-ecdc-4fe1-9861-fb1a19fce845","Type":"ContainerStarted","Data":"ba1a77711990ded023915e3f71ea1b35673cdefd233646efd56204382116797d"} Dec 10 15:48:19 crc kubenswrapper[4755]: I1210 15:48:19.574720 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-compactor-0" podUID="31bbbf2c-5266-4ea7-8428-ed2607013a35" containerName="loki-compactor" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 10 15:48:19 crc kubenswrapper[4755]: I1210 15:48:19.575283 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:48:19 crc kubenswrapper[4755]: I1210 15:48:19.587901 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-index-gateway-0" podUID="4e702de9-8dda-4370-b806-41083a70ac41" containerName="loki-index-gateway" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 10 15:48:19 crc kubenswrapper[4755]: I1210 15:48:19.588013 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:48:20 crc kubenswrapper[4755]: E1210 15:48:20.456620 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 15:48:20 crc kubenswrapper[4755]: I1210 15:48:20.574000 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6d104bea-ecdc-4fe1-9861-fb1a19fce845","Type":"ContainerStarted","Data":"ba1d0fe3fd8568ebd71ff9ed0445ee2a49c61a6007e7a7cc997ee9d35ec7cfb3"} Dec 10 15:48:20 crc kubenswrapper[4755]: I1210 15:48:20.574142 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 10 15:48:20 crc kubenswrapper[4755]: E1210 15:48:20.575843 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.051543 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.154340 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/89e8722f-e9fc-4850-bb96-e51f9859805e-pod-info\") pod \"89e8722f-e9fc-4850-bb96-e51f9859805e\" (UID: \"89e8722f-e9fc-4850-bb96-e51f9859805e\") " Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.154430 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/89e8722f-e9fc-4850-bb96-e51f9859805e-plugins-conf\") pod \"89e8722f-e9fc-4850-bb96-e51f9859805e\" (UID: \"89e8722f-e9fc-4850-bb96-e51f9859805e\") " Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.154481 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/89e8722f-e9fc-4850-bb96-e51f9859805e-server-conf\") pod \"89e8722f-e9fc-4850-bb96-e51f9859805e\" (UID: \"89e8722f-e9fc-4850-bb96-e51f9859805e\") " Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.154529 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/89e8722f-e9fc-4850-bb96-e51f9859805e-config-data\") pod \"89e8722f-e9fc-4850-bb96-e51f9859805e\" (UID: \"89e8722f-e9fc-4850-bb96-e51f9859805e\") " Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.154568 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/89e8722f-e9fc-4850-bb96-e51f9859805e-erlang-cookie-secret\") pod \"89e8722f-e9fc-4850-bb96-e51f9859805e\" (UID: \"89e8722f-e9fc-4850-bb96-e51f9859805e\") " Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.154591 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/89e8722f-e9fc-4850-bb96-e51f9859805e-rabbitmq-tls\") pod \"89e8722f-e9fc-4850-bb96-e51f9859805e\" (UID: \"89e8722f-e9fc-4850-bb96-e51f9859805e\") " Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.154767 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/89e8722f-e9fc-4850-bb96-e51f9859805e-rabbitmq-plugins\") pod 
\"89e8722f-e9fc-4850-bb96-e51f9859805e\" (UID: \"89e8722f-e9fc-4850-bb96-e51f9859805e\") " Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.154825 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/89e8722f-e9fc-4850-bb96-e51f9859805e-rabbitmq-erlang-cookie\") pod \"89e8722f-e9fc-4850-bb96-e51f9859805e\" (UID: \"89e8722f-e9fc-4850-bb96-e51f9859805e\") " Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.154851 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/89e8722f-e9fc-4850-bb96-e51f9859805e-rabbitmq-confd\") pod \"89e8722f-e9fc-4850-bb96-e51f9859805e\" (UID: \"89e8722f-e9fc-4850-bb96-e51f9859805e\") " Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.155861 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5ng9\" (UniqueName: \"kubernetes.io/projected/89e8722f-e9fc-4850-bb96-e51f9859805e-kube-api-access-h5ng9\") pod \"89e8722f-e9fc-4850-bb96-e51f9859805e\" (UID: \"89e8722f-e9fc-4850-bb96-e51f9859805e\") " Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.159357 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89e8722f-e9fc-4850-bb96-e51f9859805e-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "89e8722f-e9fc-4850-bb96-e51f9859805e" (UID: "89e8722f-e9fc-4850-bb96-e51f9859805e"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.178741 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89e8722f-e9fc-4850-bb96-e51f9859805e-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "89e8722f-e9fc-4850-bb96-e51f9859805e" (UID: "89e8722f-e9fc-4850-bb96-e51f9859805e"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.190956 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89e8722f-e9fc-4850-bb96-e51f9859805e-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "89e8722f-e9fc-4850-bb96-e51f9859805e" (UID: "89e8722f-e9fc-4850-bb96-e51f9859805e"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.191165 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89e8722f-e9fc-4850-bb96-e51f9859805e-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "89e8722f-e9fc-4850-bb96-e51f9859805e" (UID: "89e8722f-e9fc-4850-bb96-e51f9859805e"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.191389 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/89e8722f-e9fc-4850-bb96-e51f9859805e-pod-info" (OuterVolumeSpecName: "pod-info") pod "89e8722f-e9fc-4850-bb96-e51f9859805e" (UID: "89e8722f-e9fc-4850-bb96-e51f9859805e"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.194666 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89e8722f-e9fc-4850-bb96-e51f9859805e-kube-api-access-h5ng9" (OuterVolumeSpecName: "kube-api-access-h5ng9") pod "89e8722f-e9fc-4850-bb96-e51f9859805e" (UID: "89e8722f-e9fc-4850-bb96-e51f9859805e"). InnerVolumeSpecName "kube-api-access-h5ng9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.205041 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-61a3430d-015d-4835-b4fc-5566f9913b53\") pod \"89e8722f-e9fc-4850-bb96-e51f9859805e\" (UID: \"89e8722f-e9fc-4850-bb96-e51f9859805e\") " Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.205539 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89e8722f-e9fc-4850-bb96-e51f9859805e-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "89e8722f-e9fc-4850-bb96-e51f9859805e" (UID: "89e8722f-e9fc-4850-bb96-e51f9859805e"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.206764 4755 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/89e8722f-e9fc-4850-bb96-e51f9859805e-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.206786 4755 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/89e8722f-e9fc-4850-bb96-e51f9859805e-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.206798 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5ng9\" (UniqueName: \"kubernetes.io/projected/89e8722f-e9fc-4850-bb96-e51f9859805e-kube-api-access-h5ng9\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.206808 4755 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/89e8722f-e9fc-4850-bb96-e51f9859805e-pod-info\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.206822 4755 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/89e8722f-e9fc-4850-bb96-e51f9859805e-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.206830 4755 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/89e8722f-e9fc-4850-bb96-e51f9859805e-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.206839 4755 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/89e8722f-e9fc-4850-bb96-e51f9859805e-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.251341 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89e8722f-e9fc-4850-bb96-e51f9859805e-config-data" (OuterVolumeSpecName: "config-data") pod "89e8722f-e9fc-4850-bb96-e51f9859805e" 
(UID: "89e8722f-e9fc-4850-bb96-e51f9859805e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.318047 4755 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/89e8722f-e9fc-4850-bb96-e51f9859805e-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.340447 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89e8722f-e9fc-4850-bb96-e51f9859805e-server-conf" (OuterVolumeSpecName: "server-conf") pod "89e8722f-e9fc-4850-bb96-e51f9859805e" (UID: "89e8722f-e9fc-4850-bb96-e51f9859805e"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.365788 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-61a3430d-015d-4835-b4fc-5566f9913b53" (OuterVolumeSpecName: "persistence") pod "89e8722f-e9fc-4850-bb96-e51f9859805e" (UID: "89e8722f-e9fc-4850-bb96-e51f9859805e"). InnerVolumeSpecName "pvc-61a3430d-015d-4835-b4fc-5566f9913b53". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.422856 4755 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-61a3430d-015d-4835-b4fc-5566f9913b53\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-61a3430d-015d-4835-b4fc-5566f9913b53\") on node \"crc\" " Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.422889 4755 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/89e8722f-e9fc-4850-bb96-e51f9859805e-server-conf\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.446271 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89e8722f-e9fc-4850-bb96-e51f9859805e-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "89e8722f-e9fc-4850-bb96-e51f9859805e" (UID: "89e8722f-e9fc-4850-bb96-e51f9859805e"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.496921 4755 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.497077 4755 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-61a3430d-015d-4835-b4fc-5566f9913b53" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-61a3430d-015d-4835-b4fc-5566f9913b53") on node "crc" Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.525143 4755 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/89e8722f-e9fc-4850-bb96-e51f9859805e-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.525183 4755 reconciler_common.go:293] "Volume detached for volume \"pvc-61a3430d-015d-4835-b4fc-5566f9913b53\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-61a3430d-015d-4835-b4fc-5566f9913b53\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.591941 4755 generic.go:334] "Generic (PLEG): container finished" podID="fb480bc7-6936-4208-964b-44cffd08f907" containerID="8736ae2271c56389e0799a850f399ae7691aedda4dca66c57b45b6c795cfb756" exitCode=0 Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.591997 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"fb480bc7-6936-4208-964b-44cffd08f907","Type":"ContainerDied","Data":"8736ae2271c56389e0799a850f399ae7691aedda4dca66c57b45b6c795cfb756"} Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.594705 4755 generic.go:334] "Generic (PLEG): container finished" podID="89e8722f-e9fc-4850-bb96-e51f9859805e" containerID="b4fc1550b67e9a56eb7b14dcfe33b6d44f6c5c3cdeba56241c9f298b59af77a0" exitCode=0 Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.594982 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.596687 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"89e8722f-e9fc-4850-bb96-e51f9859805e","Type":"ContainerDied","Data":"b4fc1550b67e9a56eb7b14dcfe33b6d44f6c5c3cdeba56241c9f298b59af77a0"} Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.596770 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"89e8722f-e9fc-4850-bb96-e51f9859805e","Type":"ContainerDied","Data":"ac185573f47c8f3f8602e9b28556958174e3d080e2d98f1b571d4f62b583f2fe"} Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.596793 4755 scope.go:117] "RemoveContainer" containerID="b4fc1550b67e9a56eb7b14dcfe33b6d44f6c5c3cdeba56241c9f298b59af77a0" Dec 10 15:48:21 crc kubenswrapper[4755]: E1210 15:48:21.604784 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.639638 4755 scope.go:117] "RemoveContainer" containerID="2737c09a7a60eb8a396709c9839d92a46f9c8e8d9ca2c58a8da58c76ff81fbda" Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.675161 4755 scope.go:117] "RemoveContainer" containerID="b4fc1550b67e9a56eb7b14dcfe33b6d44f6c5c3cdeba56241c9f298b59af77a0" Dec 10 15:48:21 crc kubenswrapper[4755]: E1210 15:48:21.676691 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4fc1550b67e9a56eb7b14dcfe33b6d44f6c5c3cdeba56241c9f298b59af77a0\": container with ID starting with b4fc1550b67e9a56eb7b14dcfe33b6d44f6c5c3cdeba56241c9f298b59af77a0 not found: ID does not exist" containerID="b4fc1550b67e9a56eb7b14dcfe33b6d44f6c5c3cdeba56241c9f298b59af77a0" Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.676807 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4fc1550b67e9a56eb7b14dcfe33b6d44f6c5c3cdeba56241c9f298b59af77a0"} err="failed to get container status \"b4fc1550b67e9a56eb7b14dcfe33b6d44f6c5c3cdeba56241c9f298b59af77a0\": rpc error: code = NotFound desc = could not find container \"b4fc1550b67e9a56eb7b14dcfe33b6d44f6c5c3cdeba56241c9f298b59af77a0\": container with ID starting with b4fc1550b67e9a56eb7b14dcfe33b6d44f6c5c3cdeba56241c9f298b59af77a0 not found: ID does not exist" Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.676909 4755 scope.go:117] "RemoveContainer" containerID="2737c09a7a60eb8a396709c9839d92a46f9c8e8d9ca2c58a8da58c76ff81fbda" Dec 10 15:48:21 crc kubenswrapper[4755]: E1210 15:48:21.679643 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2737c09a7a60eb8a396709c9839d92a46f9c8e8d9ca2c58a8da58c76ff81fbda\": container with ID starting with 2737c09a7a60eb8a396709c9839d92a46f9c8e8d9ca2c58a8da58c76ff81fbda not found: ID does not exist" containerID="2737c09a7a60eb8a396709c9839d92a46f9c8e8d9ca2c58a8da58c76ff81fbda" Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.679841 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2737c09a7a60eb8a396709c9839d92a46f9c8e8d9ca2c58a8da58c76ff81fbda"} err="failed to 
get container status \"2737c09a7a60eb8a396709c9839d92a46f9c8e8d9ca2c58a8da58c76ff81fbda\": rpc error: code = NotFound desc = could not find container \"2737c09a7a60eb8a396709c9839d92a46f9c8e8d9ca2c58a8da58c76ff81fbda\": container with ID starting with 2737c09a7a60eb8a396709c9839d92a46f9c8e8d9ca2c58a8da58c76ff81fbda not found: ID does not exist" Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.692761 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.708198 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.723550 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Dec 10 15:48:21 crc kubenswrapper[4755]: E1210 15:48:21.724682 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89e8722f-e9fc-4850-bb96-e51f9859805e" containerName="rabbitmq" Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.724706 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="89e8722f-e9fc-4850-bb96-e51f9859805e" containerName="rabbitmq" Dec 10 15:48:21 crc kubenswrapper[4755]: E1210 15:48:21.724731 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89e8722f-e9fc-4850-bb96-e51f9859805e" containerName="setup-container" Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.724740 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="89e8722f-e9fc-4850-bb96-e51f9859805e" containerName="setup-container" Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.725043 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="89e8722f-e9fc-4850-bb96-e51f9859805e" containerName="rabbitmq" Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.726993 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.733869 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.733914 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.733918 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.733946 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.733974 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-jh72g" Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.734296 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.737524 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.754551 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.780643 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89e8722f-e9fc-4850-bb96-e51f9859805e" path="/var/lib/kubelet/pods/89e8722f-e9fc-4850-bb96-e51f9859805e/volumes" Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.832444 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b7d822e6-7034-4d4d-b3b4-07ecee1fb7cd-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b7d822e6-7034-4d4d-b3b4-07ecee1fb7cd\") " pod="openstack/rabbitmq-server-0" Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.832550 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-61a3430d-015d-4835-b4fc-5566f9913b53\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-61a3430d-015d-4835-b4fc-5566f9913b53\") pod \"rabbitmq-server-0\" (UID: \"b7d822e6-7034-4d4d-b3b4-07ecee1fb7cd\") " pod="openstack/rabbitmq-server-0" Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.832581 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b7d822e6-7034-4d4d-b3b4-07ecee1fb7cd-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b7d822e6-7034-4d4d-b3b4-07ecee1fb7cd\") " pod="openstack/rabbitmq-server-0" Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.832639 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b7d822e6-7034-4d4d-b3b4-07ecee1fb7cd-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b7d822e6-7034-4d4d-b3b4-07ecee1fb7cd\") " pod="openstack/rabbitmq-server-0" Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.832730 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b7d822e6-7034-4d4d-b3b4-07ecee1fb7cd-config-data\") pod \"rabbitmq-server-0\" (UID: 
\"b7d822e6-7034-4d4d-b3b4-07ecee1fb7cd\") " pod="openstack/rabbitmq-server-0" Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.832747 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b7d822e6-7034-4d4d-b3b4-07ecee1fb7cd-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b7d822e6-7034-4d4d-b3b4-07ecee1fb7cd\") " pod="openstack/rabbitmq-server-0" Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.832768 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b7d822e6-7034-4d4d-b3b4-07ecee1fb7cd-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b7d822e6-7034-4d4d-b3b4-07ecee1fb7cd\") " pod="openstack/rabbitmq-server-0" Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.832801 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b7d822e6-7034-4d4d-b3b4-07ecee1fb7cd-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b7d822e6-7034-4d4d-b3b4-07ecee1fb7cd\") " pod="openstack/rabbitmq-server-0" Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.832841 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhblt\" (UniqueName: \"kubernetes.io/projected/b7d822e6-7034-4d4d-b3b4-07ecee1fb7cd-kube-api-access-lhblt\") pod \"rabbitmq-server-0\" (UID: \"b7d822e6-7034-4d4d-b3b4-07ecee1fb7cd\") " pod="openstack/rabbitmq-server-0" Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.832871 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b7d822e6-7034-4d4d-b3b4-07ecee1fb7cd-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b7d822e6-7034-4d4d-b3b4-07ecee1fb7cd\") " pod="openstack/rabbitmq-server-0" Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.832897 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b7d822e6-7034-4d4d-b3b4-07ecee1fb7cd-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b7d822e6-7034-4d4d-b3b4-07ecee1fb7cd\") " pod="openstack/rabbitmq-server-0" Dec 10 15:48:21 crc kubenswrapper[4755]: E1210 15:48:21.891609 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 10 15:48:21 crc kubenswrapper[4755]: E1210 15:48:21.891693 4755 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 10 15:48:21 crc kubenswrapper[4755]: E1210 15:48:21.891919 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mz4t5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-jfc28_openstack(998863b6-4f48-4c8b-8011-a40377686b99): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 10 15:48:21 crc kubenswrapper[4755]: E1210 15:48:21.893092 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.934442 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b7d822e6-7034-4d4d-b3b4-07ecee1fb7cd-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b7d822e6-7034-4d4d-b3b4-07ecee1fb7cd\") " pod="openstack/rabbitmq-server-0" Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.934585 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b7d822e6-7034-4d4d-b3b4-07ecee1fb7cd-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b7d822e6-7034-4d4d-b3b4-07ecee1fb7cd\") " pod="openstack/rabbitmq-server-0" Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.934641 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-61a3430d-015d-4835-b4fc-5566f9913b53\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-61a3430d-015d-4835-b4fc-5566f9913b53\") pod \"rabbitmq-server-0\" (UID: \"b7d822e6-7034-4d4d-b3b4-07ecee1fb7cd\") " pod="openstack/rabbitmq-server-0" Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.934664 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b7d822e6-7034-4d4d-b3b4-07ecee1fb7cd-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b7d822e6-7034-4d4d-b3b4-07ecee1fb7cd\") " pod="openstack/rabbitmq-server-0" Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.934721 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b7d822e6-7034-4d4d-b3b4-07ecee1fb7cd-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b7d822e6-7034-4d4d-b3b4-07ecee1fb7cd\") " pod="openstack/rabbitmq-server-0" Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.934822 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b7d822e6-7034-4d4d-b3b4-07ecee1fb7cd-config-data\") pod \"rabbitmq-server-0\" (UID: \"b7d822e6-7034-4d4d-b3b4-07ecee1fb7cd\") " pod="openstack/rabbitmq-server-0" Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.934839 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b7d822e6-7034-4d4d-b3b4-07ecee1fb7cd-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b7d822e6-7034-4d4d-b3b4-07ecee1fb7cd\") " pod="openstack/rabbitmq-server-0" Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.934860 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b7d822e6-7034-4d4d-b3b4-07ecee1fb7cd-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b7d822e6-7034-4d4d-b3b4-07ecee1fb7cd\") " pod="openstack/rabbitmq-server-0" Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.934898 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b7d822e6-7034-4d4d-b3b4-07ecee1fb7cd-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b7d822e6-7034-4d4d-b3b4-07ecee1fb7cd\") " pod="openstack/rabbitmq-server-0" Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.934926 4755 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhblt\" (UniqueName: \"kubernetes.io/projected/b7d822e6-7034-4d4d-b3b4-07ecee1fb7cd-kube-api-access-lhblt\") pod \"rabbitmq-server-0\" (UID: \"b7d822e6-7034-4d4d-b3b4-07ecee1fb7cd\") " pod="openstack/rabbitmq-server-0" Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.934946 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b7d822e6-7034-4d4d-b3b4-07ecee1fb7cd-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b7d822e6-7034-4d4d-b3b4-07ecee1fb7cd\") " pod="openstack/rabbitmq-server-0" Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.937373 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b7d822e6-7034-4d4d-b3b4-07ecee1fb7cd-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b7d822e6-7034-4d4d-b3b4-07ecee1fb7cd\") " pod="openstack/rabbitmq-server-0" Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.940609 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b7d822e6-7034-4d4d-b3b4-07ecee1fb7cd-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b7d822e6-7034-4d4d-b3b4-07ecee1fb7cd\") " pod="openstack/rabbitmq-server-0" Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.940884 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b7d822e6-7034-4d4d-b3b4-07ecee1fb7cd-config-data\") pod \"rabbitmq-server-0\" (UID: \"b7d822e6-7034-4d4d-b3b4-07ecee1fb7cd\") " pod="openstack/rabbitmq-server-0" Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.941237 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b7d822e6-7034-4d4d-b3b4-07ecee1fb7cd-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b7d822e6-7034-4d4d-b3b4-07ecee1fb7cd\") " pod="openstack/rabbitmq-server-0" Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.942350 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b7d822e6-7034-4d4d-b3b4-07ecee1fb7cd-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b7d822e6-7034-4d4d-b3b4-07ecee1fb7cd\") " pod="openstack/rabbitmq-server-0" Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.942714 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b7d822e6-7034-4d4d-b3b4-07ecee1fb7cd-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b7d822e6-7034-4d4d-b3b4-07ecee1fb7cd\") " pod="openstack/rabbitmq-server-0" Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.944884 4755 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.944918 4755 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-61a3430d-015d-4835-b4fc-5566f9913b53\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-61a3430d-015d-4835-b4fc-5566f9913b53\") pod \"rabbitmq-server-0\" (UID: \"b7d822e6-7034-4d4d-b3b4-07ecee1fb7cd\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d4252b507526a7e91a94eb844c0ffdc167a616b1bba916a7295ffc4900f2a3e9/globalmount\"" pod="openstack/rabbitmq-server-0" Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.946042 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b7d822e6-7034-4d4d-b3b4-07ecee1fb7cd-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b7d822e6-7034-4d4d-b3b4-07ecee1fb7cd\") " pod="openstack/rabbitmq-server-0" Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.947231 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b7d822e6-7034-4d4d-b3b4-07ecee1fb7cd-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b7d822e6-7034-4d4d-b3b4-07ecee1fb7cd\") " pod="openstack/rabbitmq-server-0" Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.950625 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b7d822e6-7034-4d4d-b3b4-07ecee1fb7cd-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b7d822e6-7034-4d4d-b3b4-07ecee1fb7cd\") " pod="openstack/rabbitmq-server-0" Dec 10 15:48:21 crc kubenswrapper[4755]: I1210 15:48:21.967537 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhblt\" (UniqueName: \"kubernetes.io/projected/b7d822e6-7034-4d4d-b3b4-07ecee1fb7cd-kube-api-access-lhblt\") pod \"rabbitmq-server-0\" (UID: \"b7d822e6-7034-4d4d-b3b4-07ecee1fb7cd\") " pod="openstack/rabbitmq-server-0" Dec 10 15:48:22 crc kubenswrapper[4755]: I1210 15:48:22.019492 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-61a3430d-015d-4835-b4fc-5566f9913b53\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-61a3430d-015d-4835-b4fc-5566f9913b53\") pod \"rabbitmq-server-0\" (UID: \"b7d822e6-7034-4d4d-b3b4-07ecee1fb7cd\") " pod="openstack/rabbitmq-server-0" Dec 10 15:48:22 crc kubenswrapper[4755]: I1210 15:48:22.050641 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 10 15:48:22 crc kubenswrapper[4755]: I1210 15:48:22.297249 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:48:22 crc kubenswrapper[4755]: I1210 15:48:22.450417 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fb480bc7-6936-4208-964b-44cffd08f907-rabbitmq-plugins\") pod \"fb480bc7-6936-4208-964b-44cffd08f907\" (UID: \"fb480bc7-6936-4208-964b-44cffd08f907\") " Dec 10 15:48:22 crc kubenswrapper[4755]: I1210 15:48:22.450523 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fb480bc7-6936-4208-964b-44cffd08f907-rabbitmq-erlang-cookie\") pod \"fb480bc7-6936-4208-964b-44cffd08f907\" (UID: \"fb480bc7-6936-4208-964b-44cffd08f907\") " Dec 10 15:48:22 crc kubenswrapper[4755]: I1210 15:48:22.450623 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fb480bc7-6936-4208-964b-44cffd08f907-pod-info\") pod \"fb480bc7-6936-4208-964b-44cffd08f907\" (UID: \"fb480bc7-6936-4208-964b-44cffd08f907\") " Dec 10 15:48:22 crc kubenswrapper[4755]: I1210 15:48:22.450677 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fb480bc7-6936-4208-964b-44cffd08f907-server-conf\") pod \"fb480bc7-6936-4208-964b-44cffd08f907\" (UID: \"fb480bc7-6936-4208-964b-44cffd08f907\") " Dec 10 15:48:22 crc kubenswrapper[4755]: I1210 15:48:22.450854 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fb480bc7-6936-4208-964b-44cffd08f907-rabbitmq-tls\") pod \"fb480bc7-6936-4208-964b-44cffd08f907\" (UID: \"fb480bc7-6936-4208-964b-44cffd08f907\") " Dec 10 15:48:22 crc kubenswrapper[4755]: I1210 15:48:22.450888 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fb480bc7-6936-4208-964b-44cffd08f907-plugins-conf\") pod \"fb480bc7-6936-4208-964b-44cffd08f907\" (UID: \"fb480bc7-6936-4208-964b-44cffd08f907\") " Dec 10 15:48:22 crc kubenswrapper[4755]: I1210 15:48:22.450917 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tj5zs\" (UniqueName: \"kubernetes.io/projected/fb480bc7-6936-4208-964b-44cffd08f907-kube-api-access-tj5zs\") pod \"fb480bc7-6936-4208-964b-44cffd08f907\" (UID: \"fb480bc7-6936-4208-964b-44cffd08f907\") " Dec 10 15:48:22 crc kubenswrapper[4755]: I1210 15:48:22.450971 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fb480bc7-6936-4208-964b-44cffd08f907-erlang-cookie-secret\") pod \"fb480bc7-6936-4208-964b-44cffd08f907\" (UID: \"fb480bc7-6936-4208-964b-44cffd08f907\") " Dec 10 15:48:22 crc kubenswrapper[4755]: I1210 15:48:22.451001 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fb480bc7-6936-4208-964b-44cffd08f907-config-data\") pod \"fb480bc7-6936-4208-964b-44cffd08f907\" (UID: \"fb480bc7-6936-4208-964b-44cffd08f907\") " Dec 10 15:48:22 crc kubenswrapper[4755]: I1210 15:48:22.451422 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb480bc7-6936-4208-964b-44cffd08f907-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod 
"fb480bc7-6936-4208-964b-44cffd08f907" (UID: "fb480bc7-6936-4208-964b-44cffd08f907"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:48:22 crc kubenswrapper[4755]: I1210 15:48:22.451819 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb480bc7-6936-4208-964b-44cffd08f907-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "fb480bc7-6936-4208-964b-44cffd08f907" (UID: "fb480bc7-6936-4208-964b-44cffd08f907"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:48:22 crc kubenswrapper[4755]: I1210 15:48:22.455944 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb480bc7-6936-4208-964b-44cffd08f907-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "fb480bc7-6936-4208-964b-44cffd08f907" (UID: "fb480bc7-6936-4208-964b-44cffd08f907"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:48:22 crc kubenswrapper[4755]: I1210 15:48:22.458595 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-77593632-dc40-4f21-b52e-726e9f34d0e5\") pod \"fb480bc7-6936-4208-964b-44cffd08f907\" (UID: \"fb480bc7-6936-4208-964b-44cffd08f907\") " Dec 10 15:48:22 crc kubenswrapper[4755]: I1210 15:48:22.458732 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fb480bc7-6936-4208-964b-44cffd08f907-rabbitmq-confd\") pod \"fb480bc7-6936-4208-964b-44cffd08f907\" (UID: \"fb480bc7-6936-4208-964b-44cffd08f907\") " Dec 10 15:48:22 crc kubenswrapper[4755]: I1210 15:48:22.459935 4755 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fb480bc7-6936-4208-964b-44cffd08f907-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:22 crc kubenswrapper[4755]: I1210 15:48:22.459951 4755 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fb480bc7-6936-4208-964b-44cffd08f907-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:22 crc kubenswrapper[4755]: I1210 15:48:22.463150 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb480bc7-6936-4208-964b-44cffd08f907-kube-api-access-tj5zs" (OuterVolumeSpecName: "kube-api-access-tj5zs") pod "fb480bc7-6936-4208-964b-44cffd08f907" (UID: "fb480bc7-6936-4208-964b-44cffd08f907"). InnerVolumeSpecName "kube-api-access-tj5zs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:48:22 crc kubenswrapper[4755]: I1210 15:48:22.472544 4755 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fb480bc7-6936-4208-964b-44cffd08f907-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:22 crc kubenswrapper[4755]: I1210 15:48:22.484120 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb480bc7-6936-4208-964b-44cffd08f907-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "fb480bc7-6936-4208-964b-44cffd08f907" (UID: "fb480bc7-6936-4208-964b-44cffd08f907"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:48:22 crc kubenswrapper[4755]: I1210 15:48:22.518266 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb480bc7-6936-4208-964b-44cffd08f907-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "fb480bc7-6936-4208-964b-44cffd08f907" (UID: "fb480bc7-6936-4208-964b-44cffd08f907"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:48:22 crc kubenswrapper[4755]: I1210 15:48:22.542236 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/fb480bc7-6936-4208-964b-44cffd08f907-pod-info" (OuterVolumeSpecName: "pod-info") pod "fb480bc7-6936-4208-964b-44cffd08f907" (UID: "fb480bc7-6936-4208-964b-44cffd08f907"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 10 15:48:22 crc kubenswrapper[4755]: I1210 15:48:22.560697 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb480bc7-6936-4208-964b-44cffd08f907-config-data" (OuterVolumeSpecName: "config-data") pod "fb480bc7-6936-4208-964b-44cffd08f907" (UID: "fb480bc7-6936-4208-964b-44cffd08f907"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:48:22 crc kubenswrapper[4755]: I1210 15:48:22.590993 4755 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fb480bc7-6936-4208-964b-44cffd08f907-pod-info\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:22 crc kubenswrapper[4755]: I1210 15:48:22.591299 4755 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fb480bc7-6936-4208-964b-44cffd08f907-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:22 crc kubenswrapper[4755]: I1210 15:48:22.591317 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tj5zs\" (UniqueName: \"kubernetes.io/projected/fb480bc7-6936-4208-964b-44cffd08f907-kube-api-access-tj5zs\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:22 crc kubenswrapper[4755]: I1210 15:48:22.591331 4755 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fb480bc7-6936-4208-964b-44cffd08f907-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:22 crc kubenswrapper[4755]: I1210 15:48:22.591346 4755 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fb480bc7-6936-4208-964b-44cffd08f907-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:22 crc kubenswrapper[4755]: I1210 15:48:22.697261 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:48:22 crc kubenswrapper[4755]: I1210 15:48:22.697479 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"fb480bc7-6936-4208-964b-44cffd08f907","Type":"ContainerDied","Data":"8dbf06f3fe3aa83390c14d7b774f44242e2307287001500d0d87e698af07714f"} Dec 10 15:48:22 crc kubenswrapper[4755]: I1210 15:48:22.697527 4755 scope.go:117] "RemoveContainer" containerID="8736ae2271c56389e0799a850f399ae7691aedda4dca66c57b45b6c795cfb756" Dec 10 15:48:22 crc kubenswrapper[4755]: I1210 15:48:22.711301 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb480bc7-6936-4208-964b-44cffd08f907-server-conf" (OuterVolumeSpecName: "server-conf") pod "fb480bc7-6936-4208-964b-44cffd08f907" (UID: "fb480bc7-6936-4208-964b-44cffd08f907"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:48:22 crc kubenswrapper[4755]: I1210 15:48:22.720356 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 10 15:48:22 crc kubenswrapper[4755]: W1210 15:48:22.725758 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb7d822e6_7034_4d4d_b3b4_07ecee1fb7cd.slice/crio-1c4877ce086240b820e7de6e7f025ab32c4f326421b88749a2dd7b63a0002a53 WatchSource:0}: Error finding container 1c4877ce086240b820e7de6e7f025ab32c4f326421b88749a2dd7b63a0002a53: Status 404 returned error can't find the container with id 1c4877ce086240b820e7de6e7f025ab32c4f326421b88749a2dd7b63a0002a53 Dec 10 15:48:22 crc kubenswrapper[4755]: I1210 15:48:22.770419 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb480bc7-6936-4208-964b-44cffd08f907-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "fb480bc7-6936-4208-964b-44cffd08f907" (UID: "fb480bc7-6936-4208-964b-44cffd08f907"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:48:22 crc kubenswrapper[4755]: I1210 15:48:22.775663 4755 scope.go:117] "RemoveContainer" containerID="05050464a8e3abb66cfa4fb28127a52df0fde7cacd0e16b4c4b9c9d38958867c" Dec 10 15:48:22 crc kubenswrapper[4755]: I1210 15:48:22.797436 4755 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fb480bc7-6936-4208-964b-44cffd08f907-server-conf\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:22 crc kubenswrapper[4755]: I1210 15:48:22.799486 4755 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fb480bc7-6936-4208-964b-44cffd08f907-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:22 crc kubenswrapper[4755]: I1210 15:48:22.798584 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-77593632-dc40-4f21-b52e-726e9f34d0e5" (OuterVolumeSpecName: "persistence") pod "fb480bc7-6936-4208-964b-44cffd08f907" (UID: "fb480bc7-6936-4208-964b-44cffd08f907"). InnerVolumeSpecName "pvc-77593632-dc40-4f21-b52e-726e9f34d0e5". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 10 15:48:22 crc kubenswrapper[4755]: I1210 15:48:22.902385 4755 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-77593632-dc40-4f21-b52e-726e9f34d0e5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-77593632-dc40-4f21-b52e-726e9f34d0e5\") on node \"crc\" " Dec 10 15:48:22 crc kubenswrapper[4755]: I1210 15:48:22.937351 4755 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Dec 10 15:48:22 crc kubenswrapper[4755]: I1210 15:48:22.937582 4755 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-77593632-dc40-4f21-b52e-726e9f34d0e5" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-77593632-dc40-4f21-b52e-726e9f34d0e5") on node "crc" Dec 10 15:48:23 crc kubenswrapper[4755]: I1210 15:48:23.004663 4755 reconciler_common.go:293] "Volume detached for volume \"pvc-77593632-dc40-4f21-b52e-726e9f34d0e5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-77593632-dc40-4f21-b52e-726e9f34d0e5\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:23 crc kubenswrapper[4755]: I1210 15:48:23.065573 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 10 15:48:23 crc kubenswrapper[4755]: I1210 15:48:23.078369 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 10 15:48:23 crc kubenswrapper[4755]: I1210 15:48:23.090278 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 10 15:48:23 crc kubenswrapper[4755]: E1210 15:48:23.090915 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb480bc7-6936-4208-964b-44cffd08f907" containerName="rabbitmq" Dec 10 15:48:23 crc kubenswrapper[4755]: I1210 15:48:23.090941 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb480bc7-6936-4208-964b-44cffd08f907" containerName="rabbitmq" Dec 10 15:48:23 crc kubenswrapper[4755]: E1210 15:48:23.090962 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb480bc7-6936-4208-964b-44cffd08f907" containerName="setup-container" Dec 10 15:48:23 crc kubenswrapper[4755]: I1210 15:48:23.090971 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb480bc7-6936-4208-964b-44cffd08f907" containerName="setup-container" Dec 10 15:48:23 crc kubenswrapper[4755]: I1210 15:48:23.091264 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb480bc7-6936-4208-964b-44cffd08f907" containerName="rabbitmq" Dec 10 15:48:23 crc kubenswrapper[4755]: I1210 15:48:23.092906 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:48:23 crc kubenswrapper[4755]: I1210 15:48:23.098195 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Dec 10 15:48:23 crc kubenswrapper[4755]: I1210 15:48:23.098540 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-mjpt2" Dec 10 15:48:23 crc kubenswrapper[4755]: I1210 15:48:23.098592 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Dec 10 15:48:23 crc kubenswrapper[4755]: I1210 15:48:23.098706 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Dec 10 15:48:23 crc kubenswrapper[4755]: I1210 15:48:23.102036 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Dec 10 15:48:23 crc kubenswrapper[4755]: I1210 15:48:23.102326 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Dec 10 15:48:23 crc kubenswrapper[4755]: I1210 15:48:23.102593 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Dec 10 15:48:23 crc kubenswrapper[4755]: I1210 15:48:23.102952 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 10 15:48:23 crc kubenswrapper[4755]: I1210 15:48:23.122913 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c5084508-a21d-4f43-bc50-2f0c7f13edbe-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"c5084508-a21d-4f43-bc50-2f0c7f13edbe\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:48:23 crc kubenswrapper[4755]: I1210 15:48:23.123029 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c5084508-a21d-4f43-bc50-2f0c7f13edbe-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"c5084508-a21d-4f43-bc50-2f0c7f13edbe\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:48:23 crc kubenswrapper[4755]: I1210 15:48:23.123113 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c5084508-a21d-4f43-bc50-2f0c7f13edbe-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"c5084508-a21d-4f43-bc50-2f0c7f13edbe\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:48:23 crc kubenswrapper[4755]: I1210 15:48:23.123208 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-77593632-dc40-4f21-b52e-726e9f34d0e5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-77593632-dc40-4f21-b52e-726e9f34d0e5\") pod \"rabbitmq-cell1-server-0\" (UID: \"c5084508-a21d-4f43-bc50-2f0c7f13edbe\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:48:23 crc kubenswrapper[4755]: I1210 15:48:23.123263 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c5084508-a21d-4f43-bc50-2f0c7f13edbe-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"c5084508-a21d-4f43-bc50-2f0c7f13edbe\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:48:23 crc kubenswrapper[4755]: I1210 
15:48:23.123393 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c5084508-a21d-4f43-bc50-2f0c7f13edbe-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"c5084508-a21d-4f43-bc50-2f0c7f13edbe\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:48:23 crc kubenswrapper[4755]: I1210 15:48:23.123478 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c5084508-a21d-4f43-bc50-2f0c7f13edbe-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"c5084508-a21d-4f43-bc50-2f0c7f13edbe\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:48:23 crc kubenswrapper[4755]: I1210 15:48:23.123502 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c5084508-a21d-4f43-bc50-2f0c7f13edbe-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c5084508-a21d-4f43-bc50-2f0c7f13edbe\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:48:23 crc kubenswrapper[4755]: I1210 15:48:23.123518 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smvpw\" (UniqueName: \"kubernetes.io/projected/c5084508-a21d-4f43-bc50-2f0c7f13edbe-kube-api-access-smvpw\") pod \"rabbitmq-cell1-server-0\" (UID: \"c5084508-a21d-4f43-bc50-2f0c7f13edbe\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:48:23 crc kubenswrapper[4755]: I1210 15:48:23.123623 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c5084508-a21d-4f43-bc50-2f0c7f13edbe-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"c5084508-a21d-4f43-bc50-2f0c7f13edbe\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:48:23 crc kubenswrapper[4755]: I1210 15:48:23.123773 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c5084508-a21d-4f43-bc50-2f0c7f13edbe-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c5084508-a21d-4f43-bc50-2f0c7f13edbe\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:48:23 crc kubenswrapper[4755]: I1210 15:48:23.225434 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c5084508-a21d-4f43-bc50-2f0c7f13edbe-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"c5084508-a21d-4f43-bc50-2f0c7f13edbe\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:48:23 crc kubenswrapper[4755]: I1210 15:48:23.225774 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-77593632-dc40-4f21-b52e-726e9f34d0e5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-77593632-dc40-4f21-b52e-726e9f34d0e5\") pod \"rabbitmq-cell1-server-0\" (UID: \"c5084508-a21d-4f43-bc50-2f0c7f13edbe\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:48:23 crc kubenswrapper[4755]: I1210 15:48:23.225806 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c5084508-a21d-4f43-bc50-2f0c7f13edbe-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"c5084508-a21d-4f43-bc50-2f0c7f13edbe\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:48:23 crc 
kubenswrapper[4755]: I1210 15:48:23.225835 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c5084508-a21d-4f43-bc50-2f0c7f13edbe-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"c5084508-a21d-4f43-bc50-2f0c7f13edbe\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:48:23 crc kubenswrapper[4755]: I1210 15:48:23.225860 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c5084508-a21d-4f43-bc50-2f0c7f13edbe-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"c5084508-a21d-4f43-bc50-2f0c7f13edbe\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:48:23 crc kubenswrapper[4755]: I1210 15:48:23.225878 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c5084508-a21d-4f43-bc50-2f0c7f13edbe-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c5084508-a21d-4f43-bc50-2f0c7f13edbe\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:48:23 crc kubenswrapper[4755]: I1210 15:48:23.225893 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smvpw\" (UniqueName: \"kubernetes.io/projected/c5084508-a21d-4f43-bc50-2f0c7f13edbe-kube-api-access-smvpw\") pod \"rabbitmq-cell1-server-0\" (UID: \"c5084508-a21d-4f43-bc50-2f0c7f13edbe\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:48:23 crc kubenswrapper[4755]: I1210 15:48:23.225920 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c5084508-a21d-4f43-bc50-2f0c7f13edbe-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"c5084508-a21d-4f43-bc50-2f0c7f13edbe\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:48:23 crc kubenswrapper[4755]: I1210 15:48:23.226025 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c5084508-a21d-4f43-bc50-2f0c7f13edbe-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"c5084508-a21d-4f43-bc50-2f0c7f13edbe\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:48:23 crc kubenswrapper[4755]: I1210 15:48:23.226303 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c5084508-a21d-4f43-bc50-2f0c7f13edbe-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"c5084508-a21d-4f43-bc50-2f0c7f13edbe\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:48:23 crc kubenswrapper[4755]: I1210 15:48:23.226575 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c5084508-a21d-4f43-bc50-2f0c7f13edbe-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c5084508-a21d-4f43-bc50-2f0c7f13edbe\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:48:23 crc kubenswrapper[4755]: I1210 15:48:23.226667 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c5084508-a21d-4f43-bc50-2f0c7f13edbe-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"c5084508-a21d-4f43-bc50-2f0c7f13edbe\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:48:23 crc kubenswrapper[4755]: I1210 15:48:23.226674 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/c5084508-a21d-4f43-bc50-2f0c7f13edbe-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"c5084508-a21d-4f43-bc50-2f0c7f13edbe\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:48:23 crc kubenswrapper[4755]: I1210 15:48:23.226755 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c5084508-a21d-4f43-bc50-2f0c7f13edbe-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"c5084508-a21d-4f43-bc50-2f0c7f13edbe\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:48:23 crc kubenswrapper[4755]: I1210 15:48:23.226785 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c5084508-a21d-4f43-bc50-2f0c7f13edbe-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c5084508-a21d-4f43-bc50-2f0c7f13edbe\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:48:23 crc kubenswrapper[4755]: I1210 15:48:23.227602 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c5084508-a21d-4f43-bc50-2f0c7f13edbe-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c5084508-a21d-4f43-bc50-2f0c7f13edbe\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:48:23 crc kubenswrapper[4755]: I1210 15:48:23.228796 4755 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 10 15:48:23 crc kubenswrapper[4755]: I1210 15:48:23.228832 4755 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-77593632-dc40-4f21-b52e-726e9f34d0e5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-77593632-dc40-4f21-b52e-726e9f34d0e5\") pod \"rabbitmq-cell1-server-0\" (UID: \"c5084508-a21d-4f43-bc50-2f0c7f13edbe\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/46fdf78cc94e5747c93bd944a88b2597f9ef25d3ce7984ed1662cc52337b7889/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:48:23 crc kubenswrapper[4755]: I1210 15:48:23.232409 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c5084508-a21d-4f43-bc50-2f0c7f13edbe-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"c5084508-a21d-4f43-bc50-2f0c7f13edbe\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:48:23 crc kubenswrapper[4755]: I1210 15:48:23.233085 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c5084508-a21d-4f43-bc50-2f0c7f13edbe-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"c5084508-a21d-4f43-bc50-2f0c7f13edbe\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:48:23 crc kubenswrapper[4755]: I1210 15:48:23.238454 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c5084508-a21d-4f43-bc50-2f0c7f13edbe-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"c5084508-a21d-4f43-bc50-2f0c7f13edbe\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:48:23 crc kubenswrapper[4755]: I1210 15:48:23.242557 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c5084508-a21d-4f43-bc50-2f0c7f13edbe-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"c5084508-a21d-4f43-bc50-2f0c7f13edbe\") " 
pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:48:23 crc kubenswrapper[4755]: I1210 15:48:23.243262 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smvpw\" (UniqueName: \"kubernetes.io/projected/c5084508-a21d-4f43-bc50-2f0c7f13edbe-kube-api-access-smvpw\") pod \"rabbitmq-cell1-server-0\" (UID: \"c5084508-a21d-4f43-bc50-2f0c7f13edbe\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:48:23 crc kubenswrapper[4755]: I1210 15:48:23.279636 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-77593632-dc40-4f21-b52e-726e9f34d0e5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-77593632-dc40-4f21-b52e-726e9f34d0e5\") pod \"rabbitmq-cell1-server-0\" (UID: \"c5084508-a21d-4f43-bc50-2f0c7f13edbe\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:48:23 crc kubenswrapper[4755]: I1210 15:48:23.428705 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:48:23 crc kubenswrapper[4755]: I1210 15:48:23.715905 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b7d822e6-7034-4d4d-b3b4-07ecee1fb7cd","Type":"ContainerStarted","Data":"1c4877ce086240b820e7de6e7f025ab32c4f326421b88749a2dd7b63a0002a53"} Dec 10 15:48:23 crc kubenswrapper[4755]: I1210 15:48:23.777198 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb480bc7-6936-4208-964b-44cffd08f907" path="/var/lib/kubelet/pods/fb480bc7-6936-4208-964b-44cffd08f907/volumes" Dec 10 15:48:23 crc kubenswrapper[4755]: I1210 15:48:23.935726 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 10 15:48:24 crc kubenswrapper[4755]: I1210 15:48:24.097355 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-dc7c944bf-gs9gg"] Dec 10 15:48:24 crc kubenswrapper[4755]: I1210 15:48:24.100008 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-dc7c944bf-gs9gg" Dec 10 15:48:24 crc kubenswrapper[4755]: I1210 15:48:24.103569 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Dec 10 15:48:24 crc kubenswrapper[4755]: I1210 15:48:24.133802 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-dc7c944bf-gs9gg"] Dec 10 15:48:24 crc kubenswrapper[4755]: I1210 15:48:24.221406 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-dc7c944bf-gs9gg"] Dec 10 15:48:24 crc kubenswrapper[4755]: I1210 15:48:24.251116 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bae5f13f-ced2-4e59-bac7-bac0e2f95ecb-dns-swift-storage-0\") pod \"dnsmasq-dns-dc7c944bf-gs9gg\" (UID: \"bae5f13f-ced2-4e59-bac7-bac0e2f95ecb\") " pod="openstack/dnsmasq-dns-dc7c944bf-gs9gg" Dec 10 15:48:24 crc kubenswrapper[4755]: I1210 15:48:24.251290 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bae5f13f-ced2-4e59-bac7-bac0e2f95ecb-ovsdbserver-sb\") pod \"dnsmasq-dns-dc7c944bf-gs9gg\" (UID: \"bae5f13f-ced2-4e59-bac7-bac0e2f95ecb\") " pod="openstack/dnsmasq-dns-dc7c944bf-gs9gg" Dec 10 15:48:24 crc kubenswrapper[4755]: I1210 15:48:24.251322 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bae5f13f-ced2-4e59-bac7-bac0e2f95ecb-dns-svc\") pod \"dnsmasq-dns-dc7c944bf-gs9gg\" (UID: \"bae5f13f-ced2-4e59-bac7-bac0e2f95ecb\") " pod="openstack/dnsmasq-dns-dc7c944bf-gs9gg" Dec 10 15:48:24 crc kubenswrapper[4755]: I1210 15:48:24.251395 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bae5f13f-ced2-4e59-bac7-bac0e2f95ecb-ovsdbserver-nb\") pod \"dnsmasq-dns-dc7c944bf-gs9gg\" (UID: \"bae5f13f-ced2-4e59-bac7-bac0e2f95ecb\") " pod="openstack/dnsmasq-dns-dc7c944bf-gs9gg" Dec 10 15:48:24 crc kubenswrapper[4755]: I1210 15:48:24.251479 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bae5f13f-ced2-4e59-bac7-bac0e2f95ecb-config\") pod \"dnsmasq-dns-dc7c944bf-gs9gg\" (UID: \"bae5f13f-ced2-4e59-bac7-bac0e2f95ecb\") " pod="openstack/dnsmasq-dns-dc7c944bf-gs9gg" Dec 10 15:48:24 crc kubenswrapper[4755]: I1210 15:48:24.251523 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/bae5f13f-ced2-4e59-bac7-bac0e2f95ecb-openstack-edpm-ipam\") pod \"dnsmasq-dns-dc7c944bf-gs9gg\" (UID: \"bae5f13f-ced2-4e59-bac7-bac0e2f95ecb\") " pod="openstack/dnsmasq-dns-dc7c944bf-gs9gg" Dec 10 15:48:24 crc kubenswrapper[4755]: I1210 15:48:24.251558 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlxg5\" (UniqueName: \"kubernetes.io/projected/bae5f13f-ced2-4e59-bac7-bac0e2f95ecb-kube-api-access-xlxg5\") pod \"dnsmasq-dns-dc7c944bf-gs9gg\" (UID: \"bae5f13f-ced2-4e59-bac7-bac0e2f95ecb\") " pod="openstack/dnsmasq-dns-dc7c944bf-gs9gg" Dec 10 15:48:24 crc kubenswrapper[4755]: I1210 15:48:24.257522 4755 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-c4b758ff5-f4t6l"] Dec 10 15:48:24 crc kubenswrapper[4755]: I1210 15:48:24.259407 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c4b758ff5-f4t6l" Dec 10 15:48:24 crc kubenswrapper[4755]: I1210 15:48:24.288610 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c4b758ff5-f4t6l"] Dec 10 15:48:24 crc kubenswrapper[4755]: E1210 15:48:24.303496 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc dns-swift-storage-0 kube-api-access-xlxg5 openstack-edpm-ipam ovsdbserver-nb ovsdbserver-sb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-dc7c944bf-gs9gg" podUID="bae5f13f-ced2-4e59-bac7-bac0e2f95ecb" Dec 10 15:48:24 crc kubenswrapper[4755]: I1210 15:48:24.353976 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bae5f13f-ced2-4e59-bac7-bac0e2f95ecb-ovsdbserver-sb\") pod \"dnsmasq-dns-dc7c944bf-gs9gg\" (UID: \"bae5f13f-ced2-4e59-bac7-bac0e2f95ecb\") " pod="openstack/dnsmasq-dns-dc7c944bf-gs9gg" Dec 10 15:48:24 crc kubenswrapper[4755]: I1210 15:48:24.354038 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bae5f13f-ced2-4e59-bac7-bac0e2f95ecb-dns-svc\") pod \"dnsmasq-dns-dc7c944bf-gs9gg\" (UID: \"bae5f13f-ced2-4e59-bac7-bac0e2f95ecb\") " pod="openstack/dnsmasq-dns-dc7c944bf-gs9gg" Dec 10 15:48:24 crc kubenswrapper[4755]: I1210 15:48:24.354106 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bae5f13f-ced2-4e59-bac7-bac0e2f95ecb-ovsdbserver-nb\") pod \"dnsmasq-dns-dc7c944bf-gs9gg\" (UID: \"bae5f13f-ced2-4e59-bac7-bac0e2f95ecb\") " pod="openstack/dnsmasq-dns-dc7c944bf-gs9gg" Dec 10 15:48:24 crc kubenswrapper[4755]: I1210 15:48:24.354154 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bae5f13f-ced2-4e59-bac7-bac0e2f95ecb-config\") pod \"dnsmasq-dns-dc7c944bf-gs9gg\" (UID: \"bae5f13f-ced2-4e59-bac7-bac0e2f95ecb\") " pod="openstack/dnsmasq-dns-dc7c944bf-gs9gg" Dec 10 15:48:24 crc kubenswrapper[4755]: I1210 15:48:24.354185 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/bae5f13f-ced2-4e59-bac7-bac0e2f95ecb-openstack-edpm-ipam\") pod \"dnsmasq-dns-dc7c944bf-gs9gg\" (UID: \"bae5f13f-ced2-4e59-bac7-bac0e2f95ecb\") " pod="openstack/dnsmasq-dns-dc7c944bf-gs9gg" Dec 10 15:48:24 crc kubenswrapper[4755]: I1210 15:48:24.354211 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlxg5\" (UniqueName: \"kubernetes.io/projected/bae5f13f-ced2-4e59-bac7-bac0e2f95ecb-kube-api-access-xlxg5\") pod \"dnsmasq-dns-dc7c944bf-gs9gg\" (UID: \"bae5f13f-ced2-4e59-bac7-bac0e2f95ecb\") " pod="openstack/dnsmasq-dns-dc7c944bf-gs9gg" Dec 10 15:48:24 crc kubenswrapper[4755]: I1210 15:48:24.354273 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bae5f13f-ced2-4e59-bac7-bac0e2f95ecb-dns-swift-storage-0\") pod \"dnsmasq-dns-dc7c944bf-gs9gg\" (UID: \"bae5f13f-ced2-4e59-bac7-bac0e2f95ecb\") " pod="openstack/dnsmasq-dns-dc7c944bf-gs9gg" Dec 10 15:48:24 crc 
kubenswrapper[4755]: I1210 15:48:24.355246 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bae5f13f-ced2-4e59-bac7-bac0e2f95ecb-dns-swift-storage-0\") pod \"dnsmasq-dns-dc7c944bf-gs9gg\" (UID: \"bae5f13f-ced2-4e59-bac7-bac0e2f95ecb\") " pod="openstack/dnsmasq-dns-dc7c944bf-gs9gg" Dec 10 15:48:24 crc kubenswrapper[4755]: I1210 15:48:24.355554 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bae5f13f-ced2-4e59-bac7-bac0e2f95ecb-ovsdbserver-nb\") pod \"dnsmasq-dns-dc7c944bf-gs9gg\" (UID: \"bae5f13f-ced2-4e59-bac7-bac0e2f95ecb\") " pod="openstack/dnsmasq-dns-dc7c944bf-gs9gg" Dec 10 15:48:24 crc kubenswrapper[4755]: I1210 15:48:24.355944 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bae5f13f-ced2-4e59-bac7-bac0e2f95ecb-config\") pod \"dnsmasq-dns-dc7c944bf-gs9gg\" (UID: \"bae5f13f-ced2-4e59-bac7-bac0e2f95ecb\") " pod="openstack/dnsmasq-dns-dc7c944bf-gs9gg" Dec 10 15:48:24 crc kubenswrapper[4755]: I1210 15:48:24.356200 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/bae5f13f-ced2-4e59-bac7-bac0e2f95ecb-openstack-edpm-ipam\") pod \"dnsmasq-dns-dc7c944bf-gs9gg\" (UID: \"bae5f13f-ced2-4e59-bac7-bac0e2f95ecb\") " pod="openstack/dnsmasq-dns-dc7c944bf-gs9gg" Dec 10 15:48:24 crc kubenswrapper[4755]: I1210 15:48:24.360021 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bae5f13f-ced2-4e59-bac7-bac0e2f95ecb-ovsdbserver-sb\") pod \"dnsmasq-dns-dc7c944bf-gs9gg\" (UID: \"bae5f13f-ced2-4e59-bac7-bac0e2f95ecb\") " pod="openstack/dnsmasq-dns-dc7c944bf-gs9gg" Dec 10 15:48:24 crc kubenswrapper[4755]: I1210 15:48:24.365946 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bae5f13f-ced2-4e59-bac7-bac0e2f95ecb-dns-svc\") pod \"dnsmasq-dns-dc7c944bf-gs9gg\" (UID: \"bae5f13f-ced2-4e59-bac7-bac0e2f95ecb\") " pod="openstack/dnsmasq-dns-dc7c944bf-gs9gg" Dec 10 15:48:24 crc kubenswrapper[4755]: I1210 15:48:24.397499 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlxg5\" (UniqueName: \"kubernetes.io/projected/bae5f13f-ced2-4e59-bac7-bac0e2f95ecb-kube-api-access-xlxg5\") pod \"dnsmasq-dns-dc7c944bf-gs9gg\" (UID: \"bae5f13f-ced2-4e59-bac7-bac0e2f95ecb\") " pod="openstack/dnsmasq-dns-dc7c944bf-gs9gg" Dec 10 15:48:24 crc kubenswrapper[4755]: I1210 15:48:24.456155 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3215d8ec-c0b3-4fda-a96e-4ed078293493-config\") pod \"dnsmasq-dns-c4b758ff5-f4t6l\" (UID: \"3215d8ec-c0b3-4fda-a96e-4ed078293493\") " pod="openstack/dnsmasq-dns-c4b758ff5-f4t6l" Dec 10 15:48:24 crc kubenswrapper[4755]: I1210 15:48:24.456215 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6vmf\" (UniqueName: \"kubernetes.io/projected/3215d8ec-c0b3-4fda-a96e-4ed078293493-kube-api-access-d6vmf\") pod \"dnsmasq-dns-c4b758ff5-f4t6l\" (UID: \"3215d8ec-c0b3-4fda-a96e-4ed078293493\") " pod="openstack/dnsmasq-dns-c4b758ff5-f4t6l" Dec 10 15:48:24 crc kubenswrapper[4755]: I1210 15:48:24.456607 4755 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3215d8ec-c0b3-4fda-a96e-4ed078293493-dns-swift-storage-0\") pod \"dnsmasq-dns-c4b758ff5-f4t6l\" (UID: \"3215d8ec-c0b3-4fda-a96e-4ed078293493\") " pod="openstack/dnsmasq-dns-c4b758ff5-f4t6l" Dec 10 15:48:24 crc kubenswrapper[4755]: I1210 15:48:24.456697 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3215d8ec-c0b3-4fda-a96e-4ed078293493-openstack-edpm-ipam\") pod \"dnsmasq-dns-c4b758ff5-f4t6l\" (UID: \"3215d8ec-c0b3-4fda-a96e-4ed078293493\") " pod="openstack/dnsmasq-dns-c4b758ff5-f4t6l" Dec 10 15:48:24 crc kubenswrapper[4755]: I1210 15:48:24.456774 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3215d8ec-c0b3-4fda-a96e-4ed078293493-ovsdbserver-sb\") pod \"dnsmasq-dns-c4b758ff5-f4t6l\" (UID: \"3215d8ec-c0b3-4fda-a96e-4ed078293493\") " pod="openstack/dnsmasq-dns-c4b758ff5-f4t6l" Dec 10 15:48:24 crc kubenswrapper[4755]: I1210 15:48:24.456864 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3215d8ec-c0b3-4fda-a96e-4ed078293493-ovsdbserver-nb\") pod \"dnsmasq-dns-c4b758ff5-f4t6l\" (UID: \"3215d8ec-c0b3-4fda-a96e-4ed078293493\") " pod="openstack/dnsmasq-dns-c4b758ff5-f4t6l" Dec 10 15:48:24 crc kubenswrapper[4755]: I1210 15:48:24.457043 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3215d8ec-c0b3-4fda-a96e-4ed078293493-dns-svc\") pod \"dnsmasq-dns-c4b758ff5-f4t6l\" (UID: \"3215d8ec-c0b3-4fda-a96e-4ed078293493\") " pod="openstack/dnsmasq-dns-c4b758ff5-f4t6l" Dec 10 15:48:24 crc kubenswrapper[4755]: I1210 15:48:24.558935 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3215d8ec-c0b3-4fda-a96e-4ed078293493-dns-swift-storage-0\") pod \"dnsmasq-dns-c4b758ff5-f4t6l\" (UID: \"3215d8ec-c0b3-4fda-a96e-4ed078293493\") " pod="openstack/dnsmasq-dns-c4b758ff5-f4t6l" Dec 10 15:48:24 crc kubenswrapper[4755]: I1210 15:48:24.558987 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3215d8ec-c0b3-4fda-a96e-4ed078293493-openstack-edpm-ipam\") pod \"dnsmasq-dns-c4b758ff5-f4t6l\" (UID: \"3215d8ec-c0b3-4fda-a96e-4ed078293493\") " pod="openstack/dnsmasq-dns-c4b758ff5-f4t6l" Dec 10 15:48:24 crc kubenswrapper[4755]: I1210 15:48:24.559022 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3215d8ec-c0b3-4fda-a96e-4ed078293493-ovsdbserver-sb\") pod \"dnsmasq-dns-c4b758ff5-f4t6l\" (UID: \"3215d8ec-c0b3-4fda-a96e-4ed078293493\") " pod="openstack/dnsmasq-dns-c4b758ff5-f4t6l" Dec 10 15:48:24 crc kubenswrapper[4755]: I1210 15:48:24.559081 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3215d8ec-c0b3-4fda-a96e-4ed078293493-ovsdbserver-nb\") pod \"dnsmasq-dns-c4b758ff5-f4t6l\" (UID: \"3215d8ec-c0b3-4fda-a96e-4ed078293493\") " pod="openstack/dnsmasq-dns-c4b758ff5-f4t6l" Dec 10 15:48:24 crc kubenswrapper[4755]: 
I1210 15:48:24.559182 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3215d8ec-c0b3-4fda-a96e-4ed078293493-dns-svc\") pod \"dnsmasq-dns-c4b758ff5-f4t6l\" (UID: \"3215d8ec-c0b3-4fda-a96e-4ed078293493\") " pod="openstack/dnsmasq-dns-c4b758ff5-f4t6l" Dec 10 15:48:24 crc kubenswrapper[4755]: I1210 15:48:24.559259 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3215d8ec-c0b3-4fda-a96e-4ed078293493-config\") pod \"dnsmasq-dns-c4b758ff5-f4t6l\" (UID: \"3215d8ec-c0b3-4fda-a96e-4ed078293493\") " pod="openstack/dnsmasq-dns-c4b758ff5-f4t6l" Dec 10 15:48:24 crc kubenswrapper[4755]: I1210 15:48:24.559288 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6vmf\" (UniqueName: \"kubernetes.io/projected/3215d8ec-c0b3-4fda-a96e-4ed078293493-kube-api-access-d6vmf\") pod \"dnsmasq-dns-c4b758ff5-f4t6l\" (UID: \"3215d8ec-c0b3-4fda-a96e-4ed078293493\") " pod="openstack/dnsmasq-dns-c4b758ff5-f4t6l" Dec 10 15:48:24 crc kubenswrapper[4755]: I1210 15:48:24.560021 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3215d8ec-c0b3-4fda-a96e-4ed078293493-dns-swift-storage-0\") pod \"dnsmasq-dns-c4b758ff5-f4t6l\" (UID: \"3215d8ec-c0b3-4fda-a96e-4ed078293493\") " pod="openstack/dnsmasq-dns-c4b758ff5-f4t6l" Dec 10 15:48:24 crc kubenswrapper[4755]: I1210 15:48:24.560181 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3215d8ec-c0b3-4fda-a96e-4ed078293493-ovsdbserver-sb\") pod \"dnsmasq-dns-c4b758ff5-f4t6l\" (UID: \"3215d8ec-c0b3-4fda-a96e-4ed078293493\") " pod="openstack/dnsmasq-dns-c4b758ff5-f4t6l" Dec 10 15:48:24 crc kubenswrapper[4755]: I1210 15:48:24.560443 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3215d8ec-c0b3-4fda-a96e-4ed078293493-ovsdbserver-nb\") pod \"dnsmasq-dns-c4b758ff5-f4t6l\" (UID: \"3215d8ec-c0b3-4fda-a96e-4ed078293493\") " pod="openstack/dnsmasq-dns-c4b758ff5-f4t6l" Dec 10 15:48:24 crc kubenswrapper[4755]: I1210 15:48:24.560642 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3215d8ec-c0b3-4fda-a96e-4ed078293493-dns-svc\") pod \"dnsmasq-dns-c4b758ff5-f4t6l\" (UID: \"3215d8ec-c0b3-4fda-a96e-4ed078293493\") " pod="openstack/dnsmasq-dns-c4b758ff5-f4t6l" Dec 10 15:48:24 crc kubenswrapper[4755]: I1210 15:48:24.560768 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3215d8ec-c0b3-4fda-a96e-4ed078293493-config\") pod \"dnsmasq-dns-c4b758ff5-f4t6l\" (UID: \"3215d8ec-c0b3-4fda-a96e-4ed078293493\") " pod="openstack/dnsmasq-dns-c4b758ff5-f4t6l" Dec 10 15:48:24 crc kubenswrapper[4755]: I1210 15:48:24.560996 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3215d8ec-c0b3-4fda-a96e-4ed078293493-openstack-edpm-ipam\") pod \"dnsmasq-dns-c4b758ff5-f4t6l\" (UID: \"3215d8ec-c0b3-4fda-a96e-4ed078293493\") " pod="openstack/dnsmasq-dns-c4b758ff5-f4t6l" Dec 10 15:48:24 crc kubenswrapper[4755]: I1210 15:48:24.576010 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6vmf\" (UniqueName: 
\"kubernetes.io/projected/3215d8ec-c0b3-4fda-a96e-4ed078293493-kube-api-access-d6vmf\") pod \"dnsmasq-dns-c4b758ff5-f4t6l\" (UID: \"3215d8ec-c0b3-4fda-a96e-4ed078293493\") " pod="openstack/dnsmasq-dns-c4b758ff5-f4t6l" Dec 10 15:48:24 crc kubenswrapper[4755]: I1210 15:48:24.621165 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c4b758ff5-f4t6l" Dec 10 15:48:24 crc kubenswrapper[4755]: I1210 15:48:24.743656 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c5084508-a21d-4f43-bc50-2f0c7f13edbe","Type":"ContainerStarted","Data":"05de8dd42d031f9588b9f8cf2eb07530aa5081391e4ff61731f79e9e2ede1732"} Dec 10 15:48:24 crc kubenswrapper[4755]: I1210 15:48:24.746349 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dc7c944bf-gs9gg" Dec 10 15:48:24 crc kubenswrapper[4755]: I1210 15:48:24.746347 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b7d822e6-7034-4d4d-b3b4-07ecee1fb7cd","Type":"ContainerStarted","Data":"2c8834e81dcfd5307cc22672e14a1b6d5f82877908142397baa4a5b4e90481bf"} Dec 10 15:48:24 crc kubenswrapper[4755]: I1210 15:48:24.776518 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dc7c944bf-gs9gg" Dec 10 15:48:24 crc kubenswrapper[4755]: I1210 15:48:24.876674 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/bae5f13f-ced2-4e59-bac7-bac0e2f95ecb-openstack-edpm-ipam\") pod \"bae5f13f-ced2-4e59-bac7-bac0e2f95ecb\" (UID: \"bae5f13f-ced2-4e59-bac7-bac0e2f95ecb\") " Dec 10 15:48:24 crc kubenswrapper[4755]: I1210 15:48:24.876751 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bae5f13f-ced2-4e59-bac7-bac0e2f95ecb-dns-svc\") pod \"bae5f13f-ced2-4e59-bac7-bac0e2f95ecb\" (UID: \"bae5f13f-ced2-4e59-bac7-bac0e2f95ecb\") " Dec 10 15:48:24 crc kubenswrapper[4755]: I1210 15:48:24.876797 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xlxg5\" (UniqueName: \"kubernetes.io/projected/bae5f13f-ced2-4e59-bac7-bac0e2f95ecb-kube-api-access-xlxg5\") pod \"bae5f13f-ced2-4e59-bac7-bac0e2f95ecb\" (UID: \"bae5f13f-ced2-4e59-bac7-bac0e2f95ecb\") " Dec 10 15:48:24 crc kubenswrapper[4755]: I1210 15:48:24.876830 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bae5f13f-ced2-4e59-bac7-bac0e2f95ecb-config\") pod \"bae5f13f-ced2-4e59-bac7-bac0e2f95ecb\" (UID: \"bae5f13f-ced2-4e59-bac7-bac0e2f95ecb\") " Dec 10 15:48:24 crc kubenswrapper[4755]: I1210 15:48:24.876973 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bae5f13f-ced2-4e59-bac7-bac0e2f95ecb-ovsdbserver-nb\") pod \"bae5f13f-ced2-4e59-bac7-bac0e2f95ecb\" (UID: \"bae5f13f-ced2-4e59-bac7-bac0e2f95ecb\") " Dec 10 15:48:24 crc kubenswrapper[4755]: I1210 15:48:24.877023 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bae5f13f-ced2-4e59-bac7-bac0e2f95ecb-ovsdbserver-sb\") pod \"bae5f13f-ced2-4e59-bac7-bac0e2f95ecb\" (UID: \"bae5f13f-ced2-4e59-bac7-bac0e2f95ecb\") " Dec 10 15:48:24 crc kubenswrapper[4755]: I1210 
15:48:24.877073 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bae5f13f-ced2-4e59-bac7-bac0e2f95ecb-dns-swift-storage-0\") pod \"bae5f13f-ced2-4e59-bac7-bac0e2f95ecb\" (UID: \"bae5f13f-ced2-4e59-bac7-bac0e2f95ecb\") " Dec 10 15:48:24 crc kubenswrapper[4755]: I1210 15:48:24.878872 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bae5f13f-ced2-4e59-bac7-bac0e2f95ecb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bae5f13f-ced2-4e59-bac7-bac0e2f95ecb" (UID: "bae5f13f-ced2-4e59-bac7-bac0e2f95ecb"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:48:24 crc kubenswrapper[4755]: I1210 15:48:24.879245 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bae5f13f-ced2-4e59-bac7-bac0e2f95ecb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bae5f13f-ced2-4e59-bac7-bac0e2f95ecb" (UID: "bae5f13f-ced2-4e59-bac7-bac0e2f95ecb"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:48:24 crc kubenswrapper[4755]: I1210 15:48:24.879611 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bae5f13f-ced2-4e59-bac7-bac0e2f95ecb-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "bae5f13f-ced2-4e59-bac7-bac0e2f95ecb" (UID: "bae5f13f-ced2-4e59-bac7-bac0e2f95ecb"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:48:24 crc kubenswrapper[4755]: I1210 15:48:24.882369 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bae5f13f-ced2-4e59-bac7-bac0e2f95ecb-config" (OuterVolumeSpecName: "config") pod "bae5f13f-ced2-4e59-bac7-bac0e2f95ecb" (UID: "bae5f13f-ced2-4e59-bac7-bac0e2f95ecb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:48:24 crc kubenswrapper[4755]: I1210 15:48:24.882660 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bae5f13f-ced2-4e59-bac7-bac0e2f95ecb-kube-api-access-xlxg5" (OuterVolumeSpecName: "kube-api-access-xlxg5") pod "bae5f13f-ced2-4e59-bac7-bac0e2f95ecb" (UID: "bae5f13f-ced2-4e59-bac7-bac0e2f95ecb"). InnerVolumeSpecName "kube-api-access-xlxg5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:48:24 crc kubenswrapper[4755]: I1210 15:48:24.882959 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bae5f13f-ced2-4e59-bac7-bac0e2f95ecb-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "bae5f13f-ced2-4e59-bac7-bac0e2f95ecb" (UID: "bae5f13f-ced2-4e59-bac7-bac0e2f95ecb"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:48:24 crc kubenswrapper[4755]: I1210 15:48:24.883603 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bae5f13f-ced2-4e59-bac7-bac0e2f95ecb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bae5f13f-ced2-4e59-bac7-bac0e2f95ecb" (UID: "bae5f13f-ced2-4e59-bac7-bac0e2f95ecb"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:48:24 crc kubenswrapper[4755]: I1210 15:48:24.979526 4755 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bae5f13f-ced2-4e59-bac7-bac0e2f95ecb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:24 crc kubenswrapper[4755]: I1210 15:48:24.979862 4755 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bae5f13f-ced2-4e59-bac7-bac0e2f95ecb-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:24 crc kubenswrapper[4755]: I1210 15:48:24.979879 4755 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bae5f13f-ced2-4e59-bac7-bac0e2f95ecb-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:24 crc kubenswrapper[4755]: I1210 15:48:24.979892 4755 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/bae5f13f-ced2-4e59-bac7-bac0e2f95ecb-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:24 crc kubenswrapper[4755]: I1210 15:48:24.979905 4755 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bae5f13f-ced2-4e59-bac7-bac0e2f95ecb-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:24 crc kubenswrapper[4755]: I1210 15:48:24.979916 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xlxg5\" (UniqueName: \"kubernetes.io/projected/bae5f13f-ced2-4e59-bac7-bac0e2f95ecb-kube-api-access-xlxg5\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:24 crc kubenswrapper[4755]: I1210 15:48:24.979930 4755 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bae5f13f-ced2-4e59-bac7-bac0e2f95ecb-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:25 crc kubenswrapper[4755]: I1210 15:48:25.119759 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c4b758ff5-f4t6l"] Dec 10 15:48:25 crc kubenswrapper[4755]: I1210 15:48:25.760592 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-dc7c944bf-gs9gg" Dec 10 15:48:25 crc kubenswrapper[4755]: I1210 15:48:25.773944 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c4b758ff5-f4t6l" event={"ID":"3215d8ec-c0b3-4fda-a96e-4ed078293493","Type":"ContainerStarted","Data":"74f8e15e512be7f05e748f653878b96658ca1a38748337e5b0381affd5f2f12b"} Dec 10 15:48:25 crc kubenswrapper[4755]: I1210 15:48:25.827246 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-dc7c944bf-gs9gg"] Dec 10 15:48:25 crc kubenswrapper[4755]: I1210 15:48:25.836025 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-dc7c944bf-gs9gg"] Dec 10 15:48:26 crc kubenswrapper[4755]: I1210 15:48:26.782802 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-5zwd4" Dec 10 15:48:26 crc kubenswrapper[4755]: I1210 15:48:26.783674 4755 generic.go:334] "Generic (PLEG): container finished" podID="31bbbf2c-5266-4ea7-8428-ed2607013a35" containerID="37de70e4c1ac2932d37f73fa7dba2bcd1de89ae938c16517de1bd1feac16cf52" exitCode=137 Dec 10 15:48:26 crc kubenswrapper[4755]: I1210 15:48:26.783797 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-compactor-0" event={"ID":"31bbbf2c-5266-4ea7-8428-ed2607013a35","Type":"ContainerDied","Data":"37de70e4c1ac2932d37f73fa7dba2bcd1de89ae938c16517de1bd1feac16cf52"} Dec 10 15:48:26 crc kubenswrapper[4755]: I1210 15:48:26.788640 4755 generic.go:334] "Generic (PLEG): container finished" podID="e5a3871d-6b81-4b3d-9044-fcbcf437effb" containerID="3c409743e4ab358dd29fc43502060ffc2ead257951f6999bad0e97dcba14f061" exitCode=137 Dec 10 15:48:26 crc kubenswrapper[4755]: I1210 15:48:26.788694 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-ingester-0" event={"ID":"e5a3871d-6b81-4b3d-9044-fcbcf437effb","Type":"ContainerDied","Data":"3c409743e4ab358dd29fc43502060ffc2ead257951f6999bad0e97dcba14f061"} Dec 10 15:48:26 crc kubenswrapper[4755]: I1210 15:48:26.790611 4755 generic.go:334] "Generic (PLEG): container finished" podID="3215d8ec-c0b3-4fda-a96e-4ed078293493" containerID="3a002a0f52f68290adef44252dd317fc3b3a5f1343bd73f16eb0cf4e800ff26e" exitCode=0 Dec 10 15:48:26 crc kubenswrapper[4755]: I1210 15:48:26.790652 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c4b758ff5-f4t6l" event={"ID":"3215d8ec-c0b3-4fda-a96e-4ed078293493","Type":"ContainerDied","Data":"3a002a0f52f68290adef44252dd317fc3b3a5f1343bd73f16eb0cf4e800ff26e"} Dec 10 15:48:26 crc kubenswrapper[4755]: I1210 15:48:26.797183 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c5084508-a21d-4f43-bc50-2f0c7f13edbe","Type":"ContainerStarted","Data":"bb0752734f286c2a44b5604653e613e9de46509f1688c094364b4e4280b0706b"} Dec 10 15:48:26 crc kubenswrapper[4755]: I1210 15:48:26.822628 4755 generic.go:334] "Generic (PLEG): container finished" podID="4e702de9-8dda-4370-b806-41083a70ac41" containerID="7be77a14baab6fbe2e6434c6e42e63b8dd0c8bc56ea6074c47b153b2a21ed53e" exitCode=137 Dec 10 15:48:26 crc kubenswrapper[4755]: I1210 15:48:26.822677 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-index-gateway-0" event={"ID":"4e702de9-8dda-4370-b806-41083a70ac41","Type":"ContainerDied","Data":"7be77a14baab6fbe2e6434c6e42e63b8dd0c8bc56ea6074c47b153b2a21ed53e"} Dec 10 15:48:26 crc kubenswrapper[4755]: 
I1210 15:48:26.860012 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-lokistack-distributor-664b687b54-qvsww"] Dec 10 15:48:26 crc kubenswrapper[4755]: I1210 15:48:26.860412 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-lokistack-distributor-664b687b54-qvsww" podUID="ad77f530-dc0b-44ec-b4e2-c580cfe568fe" containerName="loki-distributor" containerID="cri-o://e118e9654dcdb3ccbeee519bdf5ed84674da1321f88271483d3036cac60db4fe" gracePeriod=30 Dec 10 15:48:27 crc kubenswrapper[4755]: I1210 15:48:27.091689 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:48:27 crc kubenswrapper[4755]: I1210 15:48:27.226570 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-qjtx7" Dec 10 15:48:27 crc kubenswrapper[4755]: I1210 15:48:27.243134 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloudkitty-lokistack-compactor-http\" (UniqueName: \"kubernetes.io/secret/31bbbf2c-5266-4ea7-8428-ed2607013a35-cloudkitty-lokistack-compactor-http\") pod \"31bbbf2c-5266-4ea7-8428-ed2607013a35\" (UID: \"31bbbf2c-5266-4ea7-8428-ed2607013a35\") " Dec 10 15:48:27 crc kubenswrapper[4755]: I1210 15:48:27.243279 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"storage\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"31bbbf2c-5266-4ea7-8428-ed2607013a35\" (UID: \"31bbbf2c-5266-4ea7-8428-ed2607013a35\") " Dec 10 15:48:27 crc kubenswrapper[4755]: I1210 15:48:27.243331 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5l645\" (UniqueName: \"kubernetes.io/projected/31bbbf2c-5266-4ea7-8428-ed2607013a35-kube-api-access-5l645\") pod \"31bbbf2c-5266-4ea7-8428-ed2607013a35\" (UID: \"31bbbf2c-5266-4ea7-8428-ed2607013a35\") " Dec 10 15:48:27 crc kubenswrapper[4755]: I1210 15:48:27.243373 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloudkitty-lokistack-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/31bbbf2c-5266-4ea7-8428-ed2607013a35-cloudkitty-lokistack-compactor-grpc\") pod \"31bbbf2c-5266-4ea7-8428-ed2607013a35\" (UID: \"31bbbf2c-5266-4ea7-8428-ed2607013a35\") " Dec 10 15:48:27 crc kubenswrapper[4755]: I1210 15:48:27.243515 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31bbbf2c-5266-4ea7-8428-ed2607013a35-config\") pod \"31bbbf2c-5266-4ea7-8428-ed2607013a35\" (UID: \"31bbbf2c-5266-4ea7-8428-ed2607013a35\") " Dec 10 15:48:27 crc kubenswrapper[4755]: I1210 15:48:27.243545 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31bbbf2c-5266-4ea7-8428-ed2607013a35-cloudkitty-lokistack-ca-bundle\") pod \"31bbbf2c-5266-4ea7-8428-ed2607013a35\" (UID: \"31bbbf2c-5266-4ea7-8428-ed2607013a35\") " Dec 10 15:48:27 crc kubenswrapper[4755]: I1210 15:48:27.243567 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/31bbbf2c-5266-4ea7-8428-ed2607013a35-cloudkitty-loki-s3\") pod \"31bbbf2c-5266-4ea7-8428-ed2607013a35\" (UID: \"31bbbf2c-5266-4ea7-8428-ed2607013a35\") " Dec 10 15:48:27 crc kubenswrapper[4755]: I1210 15:48:27.249193 4755 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31bbbf2c-5266-4ea7-8428-ed2607013a35-cloudkitty-lokistack-ca-bundle" (OuterVolumeSpecName: "cloudkitty-lokistack-ca-bundle") pod "31bbbf2c-5266-4ea7-8428-ed2607013a35" (UID: "31bbbf2c-5266-4ea7-8428-ed2607013a35"). InnerVolumeSpecName "cloudkitty-lokistack-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:48:27 crc kubenswrapper[4755]: I1210 15:48:27.250204 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31bbbf2c-5266-4ea7-8428-ed2607013a35-config" (OuterVolumeSpecName: "config") pod "31bbbf2c-5266-4ea7-8428-ed2607013a35" (UID: "31bbbf2c-5266-4ea7-8428-ed2607013a35"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:48:27 crc kubenswrapper[4755]: I1210 15:48:27.257999 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31bbbf2c-5266-4ea7-8428-ed2607013a35-kube-api-access-5l645" (OuterVolumeSpecName: "kube-api-access-5l645") pod "31bbbf2c-5266-4ea7-8428-ed2607013a35" (UID: "31bbbf2c-5266-4ea7-8428-ed2607013a35"). InnerVolumeSpecName "kube-api-access-5l645". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:48:27 crc kubenswrapper[4755]: I1210 15:48:27.258681 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31bbbf2c-5266-4ea7-8428-ed2607013a35-cloudkitty-loki-s3" (OuterVolumeSpecName: "cloudkitty-loki-s3") pod "31bbbf2c-5266-4ea7-8428-ed2607013a35" (UID: "31bbbf2c-5266-4ea7-8428-ed2607013a35"). InnerVolumeSpecName "cloudkitty-loki-s3". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:48:27 crc kubenswrapper[4755]: I1210 15:48:27.258757 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31bbbf2c-5266-4ea7-8428-ed2607013a35-cloudkitty-lokistack-compactor-grpc" (OuterVolumeSpecName: "cloudkitty-lokistack-compactor-grpc") pod "31bbbf2c-5266-4ea7-8428-ed2607013a35" (UID: "31bbbf2c-5266-4ea7-8428-ed2607013a35"). InnerVolumeSpecName "cloudkitty-lokistack-compactor-grpc". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:48:27 crc kubenswrapper[4755]: I1210 15:48:27.258833 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31bbbf2c-5266-4ea7-8428-ed2607013a35-cloudkitty-lokistack-compactor-http" (OuterVolumeSpecName: "cloudkitty-lokistack-compactor-http") pod "31bbbf2c-5266-4ea7-8428-ed2607013a35" (UID: "31bbbf2c-5266-4ea7-8428-ed2607013a35"). InnerVolumeSpecName "cloudkitty-lokistack-compactor-http". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:48:27 crc kubenswrapper[4755]: I1210 15:48:27.261759 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "storage") pod "31bbbf2c-5266-4ea7-8428-ed2607013a35" (UID: "31bbbf2c-5266-4ea7-8428-ed2607013a35"). InnerVolumeSpecName "local-storage08-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 10 15:48:27 crc kubenswrapper[4755]: I1210 15:48:27.295780 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-lokistack-querier-5467947bf7-qpg72"] Dec 10 15:48:27 crc kubenswrapper[4755]: I1210 15:48:27.295996 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-lokistack-querier-5467947bf7-qpg72" podUID="f9c583d4-e5d0-4c13-9989-dea15920e9e6" containerName="loki-querier" containerID="cri-o://4164eb28ac9f29baeb1e602db84f2fe27c26cb04d570c22fd72e60c0a69e8dc2" gracePeriod=30 Dec 10 15:48:27 crc kubenswrapper[4755]: I1210 15:48:27.335458 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-5kvgz" Dec 10 15:48:27 crc kubenswrapper[4755]: I1210 15:48:27.351261 4755 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31bbbf2c-5266-4ea7-8428-ed2607013a35-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:27 crc kubenswrapper[4755]: I1210 15:48:27.351301 4755 reconciler_common.go:293] "Volume detached for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31bbbf2c-5266-4ea7-8428-ed2607013a35-cloudkitty-lokistack-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:27 crc kubenswrapper[4755]: I1210 15:48:27.351313 4755 reconciler_common.go:293] "Volume detached for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/31bbbf2c-5266-4ea7-8428-ed2607013a35-cloudkitty-loki-s3\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:27 crc kubenswrapper[4755]: I1210 15:48:27.351323 4755 reconciler_common.go:293] "Volume detached for volume \"cloudkitty-lokistack-compactor-http\" (UniqueName: \"kubernetes.io/secret/31bbbf2c-5266-4ea7-8428-ed2607013a35-cloudkitty-lokistack-compactor-http\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:27 crc kubenswrapper[4755]: I1210 15:48:27.351346 4755 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Dec 10 15:48:27 crc kubenswrapper[4755]: I1210 15:48:27.351355 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5l645\" (UniqueName: \"kubernetes.io/projected/31bbbf2c-5266-4ea7-8428-ed2607013a35-kube-api-access-5l645\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:27 crc kubenswrapper[4755]: I1210 15:48:27.351365 4755 reconciler_common.go:293] "Volume detached for volume \"cloudkitty-lokistack-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/31bbbf2c-5266-4ea7-8428-ed2607013a35-cloudkitty-lokistack-compactor-grpc\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:27 crc kubenswrapper[4755]: I1210 15:48:27.401732 4755 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Dec 10 15:48:27 crc kubenswrapper[4755]: I1210 15:48:27.430697 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-dr466"] Dec 10 15:48:27 crc kubenswrapper[4755]: I1210 15:48:27.430903 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-dr466" podUID="8fbdd63a-fd88-4a37-85fb-08e7d21af574" containerName="loki-query-frontend" 
containerID="cri-o://4e315c5d02bd4b65abbb32f80d628db448bc67df9957d09b0d80d70ea9b98178" gracePeriod=30 Dec 10 15:48:27 crc kubenswrapper[4755]: I1210 15:48:27.458536 4755 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:27 crc kubenswrapper[4755]: I1210 15:48:27.716776 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:48:27 crc kubenswrapper[4755]: I1210 15:48:27.772641 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bae5f13f-ced2-4e59-bac7-bac0e2f95ecb" path="/var/lib/kubelet/pods/bae5f13f-ced2-4e59-bac7-bac0e2f95ecb/volumes" Dec 10 15:48:27 crc kubenswrapper[4755]: I1210 15:48:27.835240 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:48:27 crc kubenswrapper[4755]: I1210 15:48:27.836312 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-compactor-0" event={"ID":"31bbbf2c-5266-4ea7-8428-ed2607013a35","Type":"ContainerDied","Data":"48cda1d5cfdee2309eec18d83883dabc9750c36459e282383ae0c2ea33194482"} Dec 10 15:48:27 crc kubenswrapper[4755]: I1210 15:48:27.836354 4755 scope.go:117] "RemoveContainer" containerID="37de70e4c1ac2932d37f73fa7dba2bcd1de89ae938c16517de1bd1feac16cf52" Dec 10 15:48:27 crc kubenswrapper[4755]: I1210 15:48:27.838928 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:48:27 crc kubenswrapper[4755]: I1210 15:48:27.838929 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-ingester-0" event={"ID":"e5a3871d-6b81-4b3d-9044-fcbcf437effb","Type":"ContainerDied","Data":"984d8c9245c63dc4cbff19124f644f9e4fa34f4b3b6fb15b603ed23debeb4c7f"} Dec 10 15:48:27 crc kubenswrapper[4755]: I1210 15:48:27.843794 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c4b758ff5-f4t6l" event={"ID":"3215d8ec-c0b3-4fda-a96e-4ed078293493","Type":"ContainerStarted","Data":"bf4860227c7f41d8872c97be77085b0593dd0ef5ec3f208a6cf4ee6907107c75"} Dec 10 15:48:27 crc kubenswrapper[4755]: I1210 15:48:27.844148 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-c4b758ff5-f4t6l" Dec 10 15:48:27 crc kubenswrapper[4755]: I1210 15:48:27.869350 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"] Dec 10 15:48:27 crc kubenswrapper[4755]: I1210 15:48:27.870177 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rj9jd\" (UniqueName: \"kubernetes.io/projected/e5a3871d-6b81-4b3d-9044-fcbcf437effb-kube-api-access-rj9jd\") pod \"e5a3871d-6b81-4b3d-9044-fcbcf437effb\" (UID: \"e5a3871d-6b81-4b3d-9044-fcbcf437effb\") " Dec 10 15:48:27 crc kubenswrapper[4755]: I1210 15:48:27.870223 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"storage\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"e5a3871d-6b81-4b3d-9044-fcbcf437effb\" (UID: \"e5a3871d-6b81-4b3d-9044-fcbcf437effb\") " Dec 10 15:48:27 crc kubenswrapper[4755]: I1210 15:48:27.870315 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"wal\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod 
\"e5a3871d-6b81-4b3d-9044-fcbcf437effb\" (UID: \"e5a3871d-6b81-4b3d-9044-fcbcf437effb\") " Dec 10 15:48:27 crc kubenswrapper[4755]: I1210 15:48:27.870355 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/e5a3871d-6b81-4b3d-9044-fcbcf437effb-cloudkitty-loki-s3\") pod \"e5a3871d-6b81-4b3d-9044-fcbcf437effb\" (UID: \"e5a3871d-6b81-4b3d-9044-fcbcf437effb\") " Dec 10 15:48:27 crc kubenswrapper[4755]: I1210 15:48:27.870406 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5a3871d-6b81-4b3d-9044-fcbcf437effb-config\") pod \"e5a3871d-6b81-4b3d-9044-fcbcf437effb\" (UID: \"e5a3871d-6b81-4b3d-9044-fcbcf437effb\") " Dec 10 15:48:27 crc kubenswrapper[4755]: I1210 15:48:27.870436 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5a3871d-6b81-4b3d-9044-fcbcf437effb-cloudkitty-lokistack-ca-bundle\") pod \"e5a3871d-6b81-4b3d-9044-fcbcf437effb\" (UID: \"e5a3871d-6b81-4b3d-9044-fcbcf437effb\") " Dec 10 15:48:27 crc kubenswrapper[4755]: I1210 15:48:27.870510 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloudkitty-lokistack-ingester-http\" (UniqueName: \"kubernetes.io/secret/e5a3871d-6b81-4b3d-9044-fcbcf437effb-cloudkitty-lokistack-ingester-http\") pod \"e5a3871d-6b81-4b3d-9044-fcbcf437effb\" (UID: \"e5a3871d-6b81-4b3d-9044-fcbcf437effb\") " Dec 10 15:48:27 crc kubenswrapper[4755]: I1210 15:48:27.870589 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloudkitty-lokistack-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/e5a3871d-6b81-4b3d-9044-fcbcf437effb-cloudkitty-lokistack-ingester-grpc\") pod \"e5a3871d-6b81-4b3d-9044-fcbcf437effb\" (UID: \"e5a3871d-6b81-4b3d-9044-fcbcf437effb\") " Dec 10 15:48:27 crc kubenswrapper[4755]: I1210 15:48:27.874318 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5a3871d-6b81-4b3d-9044-fcbcf437effb-config" (OuterVolumeSpecName: "config") pod "e5a3871d-6b81-4b3d-9044-fcbcf437effb" (UID: "e5a3871d-6b81-4b3d-9044-fcbcf437effb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:48:27 crc kubenswrapper[4755]: I1210 15:48:27.880996 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5a3871d-6b81-4b3d-9044-fcbcf437effb-cloudkitty-lokistack-ca-bundle" (OuterVolumeSpecName: "cloudkitty-lokistack-ca-bundle") pod "e5a3871d-6b81-4b3d-9044-fcbcf437effb" (UID: "e5a3871d-6b81-4b3d-9044-fcbcf437effb"). InnerVolumeSpecName "cloudkitty-lokistack-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:48:27 crc kubenswrapper[4755]: I1210 15:48:27.889996 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "wal") pod "e5a3871d-6b81-4b3d-9044-fcbcf437effb" (UID: "e5a3871d-6b81-4b3d-9044-fcbcf437effb"). InnerVolumeSpecName "local-storage05-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 10 15:48:27 crc kubenswrapper[4755]: I1210 15:48:27.903610 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5a3871d-6b81-4b3d-9044-fcbcf437effb-kube-api-access-rj9jd" (OuterVolumeSpecName: "kube-api-access-rj9jd") pod "e5a3871d-6b81-4b3d-9044-fcbcf437effb" (UID: "e5a3871d-6b81-4b3d-9044-fcbcf437effb"). InnerVolumeSpecName "kube-api-access-rj9jd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:48:27 crc kubenswrapper[4755]: I1210 15:48:27.905602 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5a3871d-6b81-4b3d-9044-fcbcf437effb-cloudkitty-lokistack-ingester-grpc" (OuterVolumeSpecName: "cloudkitty-lokistack-ingester-grpc") pod "e5a3871d-6b81-4b3d-9044-fcbcf437effb" (UID: "e5a3871d-6b81-4b3d-9044-fcbcf437effb"). InnerVolumeSpecName "cloudkitty-lokistack-ingester-grpc". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:48:27 crc kubenswrapper[4755]: I1210 15:48:27.905713 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5a3871d-6b81-4b3d-9044-fcbcf437effb-cloudkitty-lokistack-ingester-http" (OuterVolumeSpecName: "cloudkitty-lokistack-ingester-http") pod "e5a3871d-6b81-4b3d-9044-fcbcf437effb" (UID: "e5a3871d-6b81-4b3d-9044-fcbcf437effb"). InnerVolumeSpecName "cloudkitty-lokistack-ingester-http". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:48:27 crc kubenswrapper[4755]: I1210 15:48:27.906074 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "storage") pod "e5a3871d-6b81-4b3d-9044-fcbcf437effb" (UID: "e5a3871d-6b81-4b3d-9044-fcbcf437effb"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 10 15:48:27 crc kubenswrapper[4755]: I1210 15:48:27.909449 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5a3871d-6b81-4b3d-9044-fcbcf437effb-cloudkitty-loki-s3" (OuterVolumeSpecName: "cloudkitty-loki-s3") pod "e5a3871d-6b81-4b3d-9044-fcbcf437effb" (UID: "e5a3871d-6b81-4b3d-9044-fcbcf437effb"). InnerVolumeSpecName "cloudkitty-loki-s3". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:48:27 crc kubenswrapper[4755]: I1210 15:48:27.923162 4755 scope.go:117] "RemoveContainer" containerID="3c409743e4ab358dd29fc43502060ffc2ead257951f6999bad0e97dcba14f061" Dec 10 15:48:27 crc kubenswrapper[4755]: I1210 15:48:27.951546 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"] Dec 10 15:48:27 crc kubenswrapper[4755]: I1210 15:48:27.956288 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-c4b758ff5-f4t6l" podStartSLOduration=3.956266298 podStartE2EDuration="3.956266298s" podCreationTimestamp="2025-12-10 15:48:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:48:27.898195575 +0000 UTC m=+1504.499079207" watchObservedRunningTime="2025-12-10 15:48:27.956266298 +0000 UTC m=+1504.557149930" Dec 10 15:48:27 crc kubenswrapper[4755]: I1210 15:48:27.956900 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"] Dec 10 15:48:27 crc kubenswrapper[4755]: E1210 15:48:27.957458 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31bbbf2c-5266-4ea7-8428-ed2607013a35" containerName="loki-compactor" Dec 10 15:48:27 crc kubenswrapper[4755]: I1210 15:48:27.957496 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="31bbbf2c-5266-4ea7-8428-ed2607013a35" containerName="loki-compactor" Dec 10 15:48:27 crc kubenswrapper[4755]: E1210 15:48:27.957528 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5a3871d-6b81-4b3d-9044-fcbcf437effb" containerName="loki-ingester" Dec 10 15:48:27 crc kubenswrapper[4755]: I1210 15:48:27.957538 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5a3871d-6b81-4b3d-9044-fcbcf437effb" containerName="loki-ingester" Dec 10 15:48:27 crc kubenswrapper[4755]: I1210 15:48:27.957767 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="31bbbf2c-5266-4ea7-8428-ed2607013a35" containerName="loki-compactor" Dec 10 15:48:27 crc kubenswrapper[4755]: I1210 15:48:27.957797 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5a3871d-6b81-4b3d-9044-fcbcf437effb" containerName="loki-ingester" Dec 10 15:48:27 crc kubenswrapper[4755]: I1210 15:48:27.958769 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:48:27 crc kubenswrapper[4755]: I1210 15:48:27.961919 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-compactor-http" Dec 10 15:48:27 crc kubenswrapper[4755]: I1210 15:48:27.963374 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-compactor-grpc" Dec 10 15:48:27 crc kubenswrapper[4755]: I1210 15:48:27.982993 4755 reconciler_common.go:293] "Volume detached for volume \"cloudkitty-lokistack-ingester-http\" (UniqueName: \"kubernetes.io/secret/e5a3871d-6b81-4b3d-9044-fcbcf437effb-cloudkitty-lokistack-ingester-http\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:27 crc kubenswrapper[4755]: I1210 15:48:27.983047 4755 reconciler_common.go:293] "Volume detached for volume \"cloudkitty-lokistack-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/e5a3871d-6b81-4b3d-9044-fcbcf437effb-cloudkitty-lokistack-ingester-grpc\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:27 crc kubenswrapper[4755]: I1210 15:48:27.983057 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rj9jd\" (UniqueName: \"kubernetes.io/projected/e5a3871d-6b81-4b3d-9044-fcbcf437effb-kube-api-access-rj9jd\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:27 crc kubenswrapper[4755]: I1210 15:48:27.983077 4755 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Dec 10 15:48:27 crc kubenswrapper[4755]: I1210 15:48:27.983091 4755 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Dec 10 15:48:27 crc kubenswrapper[4755]: I1210 15:48:27.983119 4755 reconciler_common.go:293] "Volume detached for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/e5a3871d-6b81-4b3d-9044-fcbcf437effb-cloudkitty-loki-s3\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:27 crc kubenswrapper[4755]: I1210 15:48:27.983131 4755 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5a3871d-6b81-4b3d-9044-fcbcf437effb-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:27 crc kubenswrapper[4755]: I1210 15:48:27.983140 4755 reconciler_common.go:293] "Volume detached for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5a3871d-6b81-4b3d-9044-fcbcf437effb-cloudkitty-lokistack-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.009901 4755 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.013315 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"] Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.033445 4755 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.085490 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrzd7\" (UniqueName: 
\"kubernetes.io/projected/d69d2cc3-cf06-420b-a629-1a1a924eee12-kube-api-access-xrzd7\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"d69d2cc3-cf06-420b-a629-1a1a924eee12\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.085533 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"d69d2cc3-cf06-420b-a629-1a1a924eee12\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.085632 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d69d2cc3-cf06-420b-a629-1a1a924eee12-config\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"d69d2cc3-cf06-420b-a629-1a1a924eee12\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.085661 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/d69d2cc3-cf06-420b-a629-1a1a924eee12-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"d69d2cc3-cf06-420b-a629-1a1a924eee12\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.085688 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-compactor-http\" (UniqueName: \"kubernetes.io/secret/d69d2cc3-cf06-420b-a629-1a1a924eee12-cloudkitty-lokistack-compactor-http\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"d69d2cc3-cf06-420b-a629-1a1a924eee12\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.085904 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d69d2cc3-cf06-420b-a629-1a1a924eee12-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"d69d2cc3-cf06-420b-a629-1a1a924eee12\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.086169 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/d69d2cc3-cf06-420b-a629-1a1a924eee12-cloudkitty-lokistack-compactor-grpc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"d69d2cc3-cf06-420b-a629-1a1a924eee12\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.086328 4755 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.086344 4755 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.118870 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.188032 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"storage\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"4e702de9-8dda-4370-b806-41083a70ac41\" (UID: \"4e702de9-8dda-4370-b806-41083a70ac41\") " Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.188085 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e702de9-8dda-4370-b806-41083a70ac41-config\") pod \"4e702de9-8dda-4370-b806-41083a70ac41\" (UID: \"4e702de9-8dda-4370-b806-41083a70ac41\") " Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.188221 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloudkitty-lokistack-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/4e702de9-8dda-4370-b806-41083a70ac41-cloudkitty-lokistack-index-gateway-grpc\") pod \"4e702de9-8dda-4370-b806-41083a70ac41\" (UID: \"4e702de9-8dda-4370-b806-41083a70ac41\") " Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.188286 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6nxq6\" (UniqueName: \"kubernetes.io/projected/4e702de9-8dda-4370-b806-41083a70ac41-kube-api-access-6nxq6\") pod \"4e702de9-8dda-4370-b806-41083a70ac41\" (UID: \"4e702de9-8dda-4370-b806-41083a70ac41\") " Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.188438 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloudkitty-lokistack-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/4e702de9-8dda-4370-b806-41083a70ac41-cloudkitty-lokistack-index-gateway-http\") pod \"4e702de9-8dda-4370-b806-41083a70ac41\" (UID: \"4e702de9-8dda-4370-b806-41083a70ac41\") " Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.188513 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/4e702de9-8dda-4370-b806-41083a70ac41-cloudkitty-loki-s3\") pod \"4e702de9-8dda-4370-b806-41083a70ac41\" (UID: \"4e702de9-8dda-4370-b806-41083a70ac41\") " Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.188581 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e702de9-8dda-4370-b806-41083a70ac41-cloudkitty-lokistack-ca-bundle\") pod \"4e702de9-8dda-4370-b806-41083a70ac41\" (UID: \"4e702de9-8dda-4370-b806-41083a70ac41\") " Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.189529 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e702de9-8dda-4370-b806-41083a70ac41-config" (OuterVolumeSpecName: "config") pod "4e702de9-8dda-4370-b806-41083a70ac41" (UID: "4e702de9-8dda-4370-b806-41083a70ac41"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.191565 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/d69d2cc3-cf06-420b-a629-1a1a924eee12-cloudkitty-lokistack-compactor-grpc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"d69d2cc3-cf06-420b-a629-1a1a924eee12\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.193830 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"d69d2cc3-cf06-420b-a629-1a1a924eee12\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.194446 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrzd7\" (UniqueName: \"kubernetes.io/projected/d69d2cc3-cf06-420b-a629-1a1a924eee12-kube-api-access-xrzd7\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"d69d2cc3-cf06-420b-a629-1a1a924eee12\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.194734 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d69d2cc3-cf06-420b-a629-1a1a924eee12-config\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"d69d2cc3-cf06-420b-a629-1a1a924eee12\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.194788 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/d69d2cc3-cf06-420b-a629-1a1a924eee12-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"d69d2cc3-cf06-420b-a629-1a1a924eee12\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.194857 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-compactor-http\" (UniqueName: \"kubernetes.io/secret/d69d2cc3-cf06-420b-a629-1a1a924eee12-cloudkitty-lokistack-compactor-http\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"d69d2cc3-cf06-420b-a629-1a1a924eee12\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.195143 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d69d2cc3-cf06-420b-a629-1a1a924eee12-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"d69d2cc3-cf06-420b-a629-1a1a924eee12\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.195980 4755 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e702de9-8dda-4370-b806-41083a70ac41-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.198765 4755 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"d69d2cc3-cf06-420b-a629-1a1a924eee12\") device mount path \"/mnt/openstack/pv08\"" 
pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.200490 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d69d2cc3-cf06-420b-a629-1a1a924eee12-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"d69d2cc3-cf06-420b-a629-1a1a924eee12\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.203114 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d69d2cc3-cf06-420b-a629-1a1a924eee12-config\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"d69d2cc3-cf06-420b-a629-1a1a924eee12\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.203576 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e702de9-8dda-4370-b806-41083a70ac41-cloudkitty-lokistack-ca-bundle" (OuterVolumeSpecName: "cloudkitty-lokistack-ca-bundle") pod "4e702de9-8dda-4370-b806-41083a70ac41" (UID: "4e702de9-8dda-4370-b806-41083a70ac41"). InnerVolumeSpecName "cloudkitty-lokistack-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.207804 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-distributor-664b687b54-qvsww" podUID="ad77f530-dc0b-44ec-b4e2-c580cfe568fe" containerName="loki-distributor" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.212119 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/d69d2cc3-cf06-420b-a629-1a1a924eee12-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"d69d2cc3-cf06-420b-a629-1a1a924eee12\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.213768 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "storage") pod "4e702de9-8dda-4370-b806-41083a70ac41" (UID: "4e702de9-8dda-4370-b806-41083a70ac41"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.224312 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-compactor-http\" (UniqueName: \"kubernetes.io/secret/d69d2cc3-cf06-420b-a629-1a1a924eee12-cloudkitty-lokistack-compactor-http\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"d69d2cc3-cf06-420b-a629-1a1a924eee12\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.224665 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e702de9-8dda-4370-b806-41083a70ac41-cloudkitty-loki-s3" (OuterVolumeSpecName: "cloudkitty-loki-s3") pod "4e702de9-8dda-4370-b806-41083a70ac41" (UID: "4e702de9-8dda-4370-b806-41083a70ac41"). InnerVolumeSpecName "cloudkitty-loki-s3". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.224972 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/d69d2cc3-cf06-420b-a629-1a1a924eee12-cloudkitty-lokistack-compactor-grpc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"d69d2cc3-cf06-420b-a629-1a1a924eee12\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.224979 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e702de9-8dda-4370-b806-41083a70ac41-cloudkitty-lokistack-index-gateway-grpc" (OuterVolumeSpecName: "cloudkitty-lokistack-index-gateway-grpc") pod "4e702de9-8dda-4370-b806-41083a70ac41" (UID: "4e702de9-8dda-4370-b806-41083a70ac41"). InnerVolumeSpecName "cloudkitty-lokistack-index-gateway-grpc". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.229701 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e702de9-8dda-4370-b806-41083a70ac41-cloudkitty-lokistack-index-gateway-http" (OuterVolumeSpecName: "cloudkitty-lokistack-index-gateway-http") pod "4e702de9-8dda-4370-b806-41083a70ac41" (UID: "4e702de9-8dda-4370-b806-41083a70ac41"). InnerVolumeSpecName "cloudkitty-lokistack-index-gateway-http". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.232957 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e702de9-8dda-4370-b806-41083a70ac41-kube-api-access-6nxq6" (OuterVolumeSpecName: "kube-api-access-6nxq6") pod "4e702de9-8dda-4370-b806-41083a70ac41" (UID: "4e702de9-8dda-4370-b806-41083a70ac41"). InnerVolumeSpecName "kube-api-access-6nxq6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.268423 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrzd7\" (UniqueName: \"kubernetes.io/projected/d69d2cc3-cf06-420b-a629-1a1a924eee12-kube-api-access-xrzd7\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"d69d2cc3-cf06-420b-a629-1a1a924eee12\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.280601 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-lokistack-ingester-0"] Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.300373 4755 reconciler_common.go:293] "Volume detached for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/4e702de9-8dda-4370-b806-41083a70ac41-cloudkitty-loki-s3\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.300625 4755 reconciler_common.go:293] "Volume detached for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e702de9-8dda-4370-b806-41083a70ac41-cloudkitty-lokistack-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.300724 4755 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.300834 4755 reconciler_common.go:293] "Volume detached for volume \"cloudkitty-lokistack-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/4e702de9-8dda-4370-b806-41083a70ac41-cloudkitty-lokistack-index-gateway-grpc\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.300920 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6nxq6\" (UniqueName: \"kubernetes.io/projected/4e702de9-8dda-4370-b806-41083a70ac41-kube-api-access-6nxq6\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.301003 4755 reconciler_common.go:293] "Volume detached for volume \"cloudkitty-lokistack-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/4e702de9-8dda-4370-b806-41083a70ac41-cloudkitty-lokistack-index-gateway-http\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.352817 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-lokistack-ingester-0"] Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.382457 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"d69d2cc3-cf06-420b-a629-1a1a924eee12\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.400544 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-ingester-0"] Dec 10 15:48:28 crc kubenswrapper[4755]: E1210 15:48:28.401081 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e702de9-8dda-4370-b806-41083a70ac41" containerName="loki-index-gateway" Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.401095 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e702de9-8dda-4370-b806-41083a70ac41" containerName="loki-index-gateway" Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.401333 4755 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="4e702de9-8dda-4370-b806-41083a70ac41" containerName="loki-index-gateway" Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.402095 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.404426 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.405773 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-ingester-http" Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.406111 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-ingester-grpc" Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.416320 4755 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.435026 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-querier-5467947bf7-qpg72" podUID="f9c583d4-e5d0-4c13-9989-dea15920e9e6" containerName="loki-querier" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.440646 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-ingester-0"] Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.506261 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/ceb83259-f1d9-4219-a0c3-b42d35e2dc02-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"ceb83259-f1d9-4219-a0c3-b42d35e2dc02\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.506343 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"ceb83259-f1d9-4219-a0c3-b42d35e2dc02\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.506383 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/ceb83259-f1d9-4219-a0c3-b42d35e2dc02-cloudkitty-lokistack-ingester-grpc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"ceb83259-f1d9-4219-a0c3-b42d35e2dc02\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.506553 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ceb83259-f1d9-4219-a0c3-b42d35e2dc02-config\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"ceb83259-f1d9-4219-a0c3-b42d35e2dc02\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.506618 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ingester-http\" (UniqueName: \"kubernetes.io/secret/ceb83259-f1d9-4219-a0c3-b42d35e2dc02-cloudkitty-lokistack-ingester-http\") pod 
\"cloudkitty-lokistack-ingester-0\" (UID: \"ceb83259-f1d9-4219-a0c3-b42d35e2dc02\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.506659 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cg9c4\" (UniqueName: \"kubernetes.io/projected/ceb83259-f1d9-4219-a0c3-b42d35e2dc02-kube-api-access-cg9c4\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"ceb83259-f1d9-4219-a0c3-b42d35e2dc02\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.506715 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"ceb83259-f1d9-4219-a0c3-b42d35e2dc02\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.506750 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ceb83259-f1d9-4219-a0c3-b42d35e2dc02-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"ceb83259-f1d9-4219-a0c3-b42d35e2dc02\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.506849 4755 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.610287 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ceb83259-f1d9-4219-a0c3-b42d35e2dc02-config\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"ceb83259-f1d9-4219-a0c3-b42d35e2dc02\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.610914 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ingester-http\" (UniqueName: \"kubernetes.io/secret/ceb83259-f1d9-4219-a0c3-b42d35e2dc02-cloudkitty-lokistack-ingester-http\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"ceb83259-f1d9-4219-a0c3-b42d35e2dc02\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.610963 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cg9c4\" (UniqueName: \"kubernetes.io/projected/ceb83259-f1d9-4219-a0c3-b42d35e2dc02-kube-api-access-cg9c4\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"ceb83259-f1d9-4219-a0c3-b42d35e2dc02\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.611004 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"ceb83259-f1d9-4219-a0c3-b42d35e2dc02\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.611044 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ceb83259-f1d9-4219-a0c3-b42d35e2dc02-cloudkitty-lokistack-ca-bundle\") pod 
\"cloudkitty-lokistack-ingester-0\" (UID: \"ceb83259-f1d9-4219-a0c3-b42d35e2dc02\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.611108 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/ceb83259-f1d9-4219-a0c3-b42d35e2dc02-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"ceb83259-f1d9-4219-a0c3-b42d35e2dc02\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.611151 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"ceb83259-f1d9-4219-a0c3-b42d35e2dc02\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.611175 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/ceb83259-f1d9-4219-a0c3-b42d35e2dc02-cloudkitty-lokistack-ingester-grpc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"ceb83259-f1d9-4219-a0c3-b42d35e2dc02\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.611657 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ceb83259-f1d9-4219-a0c3-b42d35e2dc02-config\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"ceb83259-f1d9-4219-a0c3-b42d35e2dc02\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.611809 4755 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"ceb83259-f1d9-4219-a0c3-b42d35e2dc02\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.612636 4755 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"ceb83259-f1d9-4219-a0c3-b42d35e2dc02\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.612978 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ceb83259-f1d9-4219-a0c3-b42d35e2dc02-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"ceb83259-f1d9-4219-a0c3-b42d35e2dc02\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.619949 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/ceb83259-f1d9-4219-a0c3-b42d35e2dc02-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"ceb83259-f1d9-4219-a0c3-b42d35e2dc02\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.624724 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ingester-grpc\" (UniqueName: 
\"kubernetes.io/secret/ceb83259-f1d9-4219-a0c3-b42d35e2dc02-cloudkitty-lokistack-ingester-grpc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"ceb83259-f1d9-4219-a0c3-b42d35e2dc02\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.627362 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ingester-http\" (UniqueName: \"kubernetes.io/secret/ceb83259-f1d9-4219-a0c3-b42d35e2dc02-cloudkitty-lokistack-ingester-http\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"ceb83259-f1d9-4219-a0c3-b42d35e2dc02\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.638147 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cg9c4\" (UniqueName: \"kubernetes.io/projected/ceb83259-f1d9-4219-a0c3-b42d35e2dc02-kube-api-access-cg9c4\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"ceb83259-f1d9-4219-a0c3-b42d35e2dc02\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.657849 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"ceb83259-f1d9-4219-a0c3-b42d35e2dc02\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.660297 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"ceb83259-f1d9-4219-a0c3-b42d35e2dc02\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.746251 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.898807 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-index-gateway-0" event={"ID":"4e702de9-8dda-4370-b806-41083a70ac41","Type":"ContainerDied","Data":"9ba179190b0b67cfb0cef24758b39736a4a2c9e4cefa71bda319413c31f2a208"} Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.898866 4755 scope.go:117] "RemoveContainer" containerID="7be77a14baab6fbe2e6434c6e42e63b8dd0c8bc56ea6074c47b153b2a21ed53e" Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.899034 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:48:28 crc kubenswrapper[4755]: I1210 15:48:28.984109 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-lokistack-index-gateway-0"] Dec 10 15:48:29 crc kubenswrapper[4755]: I1210 15:48:29.002709 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-lokistack-index-gateway-0"] Dec 10 15:48:29 crc kubenswrapper[4755]: I1210 15:48:29.013568 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-index-gateway-0"] Dec 10 15:48:29 crc kubenswrapper[4755]: I1210 15:48:29.015234 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:48:29 crc kubenswrapper[4755]: I1210 15:48:29.019046 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-index-gateway-http" Dec 10 15:48:29 crc kubenswrapper[4755]: I1210 15:48:29.019971 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-index-gateway-grpc" Dec 10 15:48:29 crc kubenswrapper[4755]: I1210 15:48:29.037084 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-index-gateway-0"] Dec 10 15:48:29 crc kubenswrapper[4755]: I1210 15:48:29.056249 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"] Dec 10 15:48:29 crc kubenswrapper[4755]: I1210 15:48:29.126542 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e72469cb-a78b-45a9-8fea-afb38a2c78dc-config\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"e72469cb-a78b-45a9-8fea-afb38a2c78dc\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:48:29 crc kubenswrapper[4755]: I1210 15:48:29.126630 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/e72469cb-a78b-45a9-8fea-afb38a2c78dc-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"e72469cb-a78b-45a9-8fea-afb38a2c78dc\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:48:29 crc kubenswrapper[4755]: I1210 15:48:29.126674 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"e72469cb-a78b-45a9-8fea-afb38a2c78dc\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:48:29 crc kubenswrapper[4755]: I1210 15:48:29.126752 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/e72469cb-a78b-45a9-8fea-afb38a2c78dc-cloudkitty-lokistack-index-gateway-grpc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"e72469cb-a78b-45a9-8fea-afb38a2c78dc\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:48:29 crc kubenswrapper[4755]: I1210 15:48:29.126813 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e72469cb-a78b-45a9-8fea-afb38a2c78dc-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"e72469cb-a78b-45a9-8fea-afb38a2c78dc\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:48:29 crc kubenswrapper[4755]: I1210 15:48:29.126880 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/e72469cb-a78b-45a9-8fea-afb38a2c78dc-cloudkitty-lokistack-index-gateway-http\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"e72469cb-a78b-45a9-8fea-afb38a2c78dc\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:48:29 crc kubenswrapper[4755]: I1210 15:48:29.126909 4755 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qc5pl\" (UniqueName: \"kubernetes.io/projected/e72469cb-a78b-45a9-8fea-afb38a2c78dc-kube-api-access-qc5pl\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"e72469cb-a78b-45a9-8fea-afb38a2c78dc\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:48:29 crc kubenswrapper[4755]: I1210 15:48:29.229392 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/e72469cb-a78b-45a9-8fea-afb38a2c78dc-cloudkitty-lokistack-index-gateway-grpc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"e72469cb-a78b-45a9-8fea-afb38a2c78dc\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:48:29 crc kubenswrapper[4755]: I1210 15:48:29.229765 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e72469cb-a78b-45a9-8fea-afb38a2c78dc-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"e72469cb-a78b-45a9-8fea-afb38a2c78dc\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:48:29 crc kubenswrapper[4755]: I1210 15:48:29.229942 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/e72469cb-a78b-45a9-8fea-afb38a2c78dc-cloudkitty-lokistack-index-gateway-http\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"e72469cb-a78b-45a9-8fea-afb38a2c78dc\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:48:29 crc kubenswrapper[4755]: I1210 15:48:29.230057 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qc5pl\" (UniqueName: \"kubernetes.io/projected/e72469cb-a78b-45a9-8fea-afb38a2c78dc-kube-api-access-qc5pl\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"e72469cb-a78b-45a9-8fea-afb38a2c78dc\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:48:29 crc kubenswrapper[4755]: I1210 15:48:29.230310 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e72469cb-a78b-45a9-8fea-afb38a2c78dc-config\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"e72469cb-a78b-45a9-8fea-afb38a2c78dc\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:48:29 crc kubenswrapper[4755]: I1210 15:48:29.230505 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/e72469cb-a78b-45a9-8fea-afb38a2c78dc-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"e72469cb-a78b-45a9-8fea-afb38a2c78dc\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:48:29 crc kubenswrapper[4755]: I1210 15:48:29.230600 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e72469cb-a78b-45a9-8fea-afb38a2c78dc-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"e72469cb-a78b-45a9-8fea-afb38a2c78dc\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:48:29 crc kubenswrapper[4755]: I1210 15:48:29.230702 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage12-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"e72469cb-a78b-45a9-8fea-afb38a2c78dc\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:48:29 crc kubenswrapper[4755]: I1210 15:48:29.230813 4755 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"e72469cb-a78b-45a9-8fea-afb38a2c78dc\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:48:29 crc kubenswrapper[4755]: I1210 15:48:29.231743 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e72469cb-a78b-45a9-8fea-afb38a2c78dc-config\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"e72469cb-a78b-45a9-8fea-afb38a2c78dc\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:48:29 crc kubenswrapper[4755]: I1210 15:48:29.234976 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/e72469cb-a78b-45a9-8fea-afb38a2c78dc-cloudkitty-lokistack-index-gateway-grpc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"e72469cb-a78b-45a9-8fea-afb38a2c78dc\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:48:29 crc kubenswrapper[4755]: I1210 15:48:29.235142 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/e72469cb-a78b-45a9-8fea-afb38a2c78dc-cloudkitty-lokistack-index-gateway-http\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"e72469cb-a78b-45a9-8fea-afb38a2c78dc\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:48:29 crc kubenswrapper[4755]: I1210 15:48:29.235948 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/e72469cb-a78b-45a9-8fea-afb38a2c78dc-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"e72469cb-a78b-45a9-8fea-afb38a2c78dc\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:48:29 crc kubenswrapper[4755]: I1210 15:48:29.249452 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qc5pl\" (UniqueName: \"kubernetes.io/projected/e72469cb-a78b-45a9-8fea-afb38a2c78dc-kube-api-access-qc5pl\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"e72469cb-a78b-45a9-8fea-afb38a2c78dc\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:48:29 crc kubenswrapper[4755]: I1210 15:48:29.277030 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"e72469cb-a78b-45a9-8fea-afb38a2c78dc\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:48:29 crc kubenswrapper[4755]: I1210 15:48:29.331358 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-ingester-0"] Dec 10 15:48:29 crc kubenswrapper[4755]: I1210 15:48:29.342744 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:48:29 crc kubenswrapper[4755]: I1210 15:48:29.771909 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31bbbf2c-5266-4ea7-8428-ed2607013a35" path="/var/lib/kubelet/pods/31bbbf2c-5266-4ea7-8428-ed2607013a35/volumes" Dec 10 15:48:29 crc kubenswrapper[4755]: I1210 15:48:29.774848 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e702de9-8dda-4370-b806-41083a70ac41" path="/var/lib/kubelet/pods/4e702de9-8dda-4370-b806-41083a70ac41/volumes" Dec 10 15:48:29 crc kubenswrapper[4755]: I1210 15:48:29.775564 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5a3871d-6b81-4b3d-9044-fcbcf437effb" path="/var/lib/kubelet/pods/e5a3871d-6b81-4b3d-9044-fcbcf437effb/volumes" Dec 10 15:48:29 crc kubenswrapper[4755]: I1210 15:48:29.827106 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-index-gateway-0"] Dec 10 15:48:29 crc kubenswrapper[4755]: I1210 15:48:29.931724 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-compactor-0" event={"ID":"d69d2cc3-cf06-420b-a629-1a1a924eee12","Type":"ContainerStarted","Data":"6b034dc7be186b48c249d7f86a13fd0688cb9cb274a86d7e9a79c073588bd804"} Dec 10 15:48:29 crc kubenswrapper[4755]: I1210 15:48:29.931773 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-compactor-0" event={"ID":"d69d2cc3-cf06-420b-a629-1a1a924eee12","Type":"ContainerStarted","Data":"47ca7d017525c4268e98b553065db8dbc6a56c00ccd693b3c4d5b5f15e20a7ad"} Dec 10 15:48:29 crc kubenswrapper[4755]: I1210 15:48:29.932318 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:48:29 crc kubenswrapper[4755]: I1210 15:48:29.937050 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-ingester-0" event={"ID":"ceb83259-f1d9-4219-a0c3-b42d35e2dc02","Type":"ContainerStarted","Data":"28f33c62003aab2b194b9340eb31473e3bcf7f783c1c95ed2bf6c1289039779a"} Dec 10 15:48:29 crc kubenswrapper[4755]: I1210 15:48:29.937105 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-ingester-0" event={"ID":"ceb83259-f1d9-4219-a0c3-b42d35e2dc02","Type":"ContainerStarted","Data":"d8c286172c47affbc99555a01c433ce783756a951166f543e758dbe4ebd08309"} Dec 10 15:48:29 crc kubenswrapper[4755]: I1210 15:48:29.937734 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:48:29 crc kubenswrapper[4755]: I1210 15:48:29.938947 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-index-gateway-0" event={"ID":"e72469cb-a78b-45a9-8fea-afb38a2c78dc","Type":"ContainerStarted","Data":"2999a094474970301dd2104017a961e94c63b0bab7f9ff239bab2a93925b4755"} Dec 10 15:48:29 crc kubenswrapper[4755]: I1210 15:48:29.950821 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-compactor-0" podStartSLOduration=2.950805877 podStartE2EDuration="2.950805877s" podCreationTimestamp="2025-12-10 15:48:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:48:29.949528163 +0000 UTC m=+1506.550411795" watchObservedRunningTime="2025-12-10 15:48:29.950805877 +0000 UTC m=+1506.551689499" Dec 10 15:48:29 crc 
kubenswrapper[4755]: I1210 15:48:29.986090 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-ingester-0" podStartSLOduration=1.986065039 podStartE2EDuration="1.986065039s" podCreationTimestamp="2025-12-10 15:48:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:48:29.969911778 +0000 UTC m=+1506.570795410" watchObservedRunningTime="2025-12-10 15:48:29.986065039 +0000 UTC m=+1506.586948671" Dec 10 15:48:30 crc kubenswrapper[4755]: I1210 15:48:30.949278 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-index-gateway-0" event={"ID":"e72469cb-a78b-45a9-8fea-afb38a2c78dc","Type":"ContainerStarted","Data":"51fc21e857bd902450a76ae9113f2c64145124dc33d514922e1ba8709508fbd5"} Dec 10 15:48:30 crc kubenswrapper[4755]: I1210 15:48:30.950098 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:48:30 crc kubenswrapper[4755]: I1210 15:48:30.978537 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-index-gateway-0" podStartSLOduration=2.978510747 podStartE2EDuration="2.978510747s" podCreationTimestamp="2025-12-10 15:48:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:48:30.966979993 +0000 UTC m=+1507.567863625" watchObservedRunningTime="2025-12-10 15:48:30.978510747 +0000 UTC m=+1507.579394379" Dec 10 15:48:34 crc kubenswrapper[4755]: I1210 15:48:34.622554 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-c4b758ff5-f4t6l" Dec 10 15:48:34 crc kubenswrapper[4755]: I1210 15:48:34.685897 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54dd998c-lc5nf"] Dec 10 15:48:34 crc kubenswrapper[4755]: I1210 15:48:34.686577 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-54dd998c-lc5nf" podUID="ab3e3d49-8054-4653-a857-45138337b2f7" containerName="dnsmasq-dns" containerID="cri-o://21d6d9aa4274df63136332bafc9e03f1e2f4f0c5020e6d6990630d89f0ab7a1a" gracePeriod=10 Dec 10 15:48:34 crc kubenswrapper[4755]: E1210 15:48:34.762355 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 15:48:34 crc kubenswrapper[4755]: I1210 15:48:34.769807 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 10 15:48:34 crc kubenswrapper[4755]: E1210 15:48:34.873882 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 10 15:48:34 crc kubenswrapper[4755]: E1210 15:48:34.873940 4755 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 10 15:48:34 crc kubenswrapper[4755]: E1210 15:48:34.874067 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5d4h5b7hfbh5ddh688h9ch55bh7chf6h5ddh68ch94h69h5c5h596h59bh569hfchc4h676hcbh64dhdbh57fh75h5c9h98h59ch679h566h77h9cq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hw9gj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(6d104bea-ecdc-4fe1-9861-fb1a19fce845): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Dec 10 15:48:34 crc kubenswrapper[4755]: E1210 15:48:34.875217 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 15:48:35 crc kubenswrapper[4755]: I1210 15:48:35.054759 4755 generic.go:334] "Generic (PLEG): container finished" podID="ab3e3d49-8054-4653-a857-45138337b2f7" containerID="21d6d9aa4274df63136332bafc9e03f1e2f4f0c5020e6d6990630d89f0ab7a1a" exitCode=0 Dec 10 15:48:35 crc kubenswrapper[4755]: I1210 15:48:35.057190 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54dd998c-lc5nf" event={"ID":"ab3e3d49-8054-4653-a857-45138337b2f7","Type":"ContainerDied","Data":"21d6d9aa4274df63136332bafc9e03f1e2f4f0c5020e6d6990630d89f0ab7a1a"} Dec 10 15:48:35 crc kubenswrapper[4755]: E1210 15:48:35.058525 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 15:48:35 crc kubenswrapper[4755]: I1210 15:48:35.298486 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54dd998c-lc5nf" Dec 10 15:48:35 crc kubenswrapper[4755]: I1210 15:48:35.363200 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ab3e3d49-8054-4653-a857-45138337b2f7-dns-swift-storage-0\") pod \"ab3e3d49-8054-4653-a857-45138337b2f7\" (UID: \"ab3e3d49-8054-4653-a857-45138337b2f7\") " Dec 10 15:48:35 crc kubenswrapper[4755]: I1210 15:48:35.363599 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab3e3d49-8054-4653-a857-45138337b2f7-dns-svc\") pod \"ab3e3d49-8054-4653-a857-45138337b2f7\" (UID: \"ab3e3d49-8054-4653-a857-45138337b2f7\") " Dec 10 15:48:35 crc kubenswrapper[4755]: I1210 15:48:35.363643 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ab3e3d49-8054-4653-a857-45138337b2f7-ovsdbserver-nb\") pod \"ab3e3d49-8054-4653-a857-45138337b2f7\" (UID: \"ab3e3d49-8054-4653-a857-45138337b2f7\") " Dec 10 15:48:35 crc kubenswrapper[4755]: I1210 15:48:35.363712 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-749rm\" (UniqueName: \"kubernetes.io/projected/ab3e3d49-8054-4653-a857-45138337b2f7-kube-api-access-749rm\") pod \"ab3e3d49-8054-4653-a857-45138337b2f7\" (UID: \"ab3e3d49-8054-4653-a857-45138337b2f7\") " Dec 10 15:48:35 crc kubenswrapper[4755]: I1210 15:48:35.363882 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ab3e3d49-8054-4653-a857-45138337b2f7-ovsdbserver-sb\") pod \"ab3e3d49-8054-4653-a857-45138337b2f7\" 
(UID: \"ab3e3d49-8054-4653-a857-45138337b2f7\") " Dec 10 15:48:35 crc kubenswrapper[4755]: I1210 15:48:35.363924 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab3e3d49-8054-4653-a857-45138337b2f7-config\") pod \"ab3e3d49-8054-4653-a857-45138337b2f7\" (UID: \"ab3e3d49-8054-4653-a857-45138337b2f7\") " Dec 10 15:48:35 crc kubenswrapper[4755]: I1210 15:48:35.408576 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab3e3d49-8054-4653-a857-45138337b2f7-kube-api-access-749rm" (OuterVolumeSpecName: "kube-api-access-749rm") pod "ab3e3d49-8054-4653-a857-45138337b2f7" (UID: "ab3e3d49-8054-4653-a857-45138337b2f7"). InnerVolumeSpecName "kube-api-access-749rm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:48:35 crc kubenswrapper[4755]: I1210 15:48:35.455818 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab3e3d49-8054-4653-a857-45138337b2f7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ab3e3d49-8054-4653-a857-45138337b2f7" (UID: "ab3e3d49-8054-4653-a857-45138337b2f7"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:48:35 crc kubenswrapper[4755]: I1210 15:48:35.460925 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab3e3d49-8054-4653-a857-45138337b2f7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ab3e3d49-8054-4653-a857-45138337b2f7" (UID: "ab3e3d49-8054-4653-a857-45138337b2f7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:48:35 crc kubenswrapper[4755]: I1210 15:48:35.467241 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab3e3d49-8054-4653-a857-45138337b2f7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ab3e3d49-8054-4653-a857-45138337b2f7" (UID: "ab3e3d49-8054-4653-a857-45138337b2f7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:48:35 crc kubenswrapper[4755]: I1210 15:48:35.467956 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab3e3d49-8054-4653-a857-45138337b2f7-dns-svc\") pod \"ab3e3d49-8054-4653-a857-45138337b2f7\" (UID: \"ab3e3d49-8054-4653-a857-45138337b2f7\") " Dec 10 15:48:35 crc kubenswrapper[4755]: W1210 15:48:35.468084 4755 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/ab3e3d49-8054-4653-a857-45138337b2f7/volumes/kubernetes.io~configmap/dns-svc Dec 10 15:48:35 crc kubenswrapper[4755]: I1210 15:48:35.468106 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab3e3d49-8054-4653-a857-45138337b2f7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ab3e3d49-8054-4653-a857-45138337b2f7" (UID: "ab3e3d49-8054-4653-a857-45138337b2f7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:48:35 crc kubenswrapper[4755]: I1210 15:48:35.469619 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab3e3d49-8054-4653-a857-45138337b2f7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ab3e3d49-8054-4653-a857-45138337b2f7" (UID: "ab3e3d49-8054-4653-a857-45138337b2f7"). 
InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:48:35 crc kubenswrapper[4755]: I1210 15:48:35.470398 4755 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ab3e3d49-8054-4653-a857-45138337b2f7-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:35 crc kubenswrapper[4755]: I1210 15:48:35.470432 4755 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab3e3d49-8054-4653-a857-45138337b2f7-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:35 crc kubenswrapper[4755]: I1210 15:48:35.470781 4755 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ab3e3d49-8054-4653-a857-45138337b2f7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:35 crc kubenswrapper[4755]: I1210 15:48:35.470817 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-749rm\" (UniqueName: \"kubernetes.io/projected/ab3e3d49-8054-4653-a857-45138337b2f7-kube-api-access-749rm\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:35 crc kubenswrapper[4755]: I1210 15:48:35.470828 4755 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ab3e3d49-8054-4653-a857-45138337b2f7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:35 crc kubenswrapper[4755]: I1210 15:48:35.473488 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab3e3d49-8054-4653-a857-45138337b2f7-config" (OuterVolumeSpecName: "config") pod "ab3e3d49-8054-4653-a857-45138337b2f7" (UID: "ab3e3d49-8054-4653-a857-45138337b2f7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:48:35 crc kubenswrapper[4755]: I1210 15:48:35.573449 4755 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab3e3d49-8054-4653-a857-45138337b2f7-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:36 crc kubenswrapper[4755]: I1210 15:48:36.066080 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54dd998c-lc5nf" event={"ID":"ab3e3d49-8054-4653-a857-45138337b2f7","Type":"ContainerDied","Data":"7dce6e9bf46fdfae56ca0d8da6c5618ff009a3d501016e47e6ffb78d97aef9f3"} Dec 10 15:48:36 crc kubenswrapper[4755]: I1210 15:48:36.066132 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54dd998c-lc5nf" Dec 10 15:48:36 crc kubenswrapper[4755]: I1210 15:48:36.066140 4755 scope.go:117] "RemoveContainer" containerID="21d6d9aa4274df63136332bafc9e03f1e2f4f0c5020e6d6990630d89f0ab7a1a" Dec 10 15:48:36 crc kubenswrapper[4755]: I1210 15:48:36.092593 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54dd998c-lc5nf"] Dec 10 15:48:36 crc kubenswrapper[4755]: I1210 15:48:36.094810 4755 scope.go:117] "RemoveContainer" containerID="a05d5628b434305b623c4ea63cf9a4c579a51b1442d8ad0d241cb9b8e6fd7ea5" Dec 10 15:48:36 crc kubenswrapper[4755]: I1210 15:48:36.103122 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-54dd998c-lc5nf"] Dec 10 15:48:37 crc kubenswrapper[4755]: I1210 15:48:37.770634 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab3e3d49-8054-4653-a857-45138337b2f7" path="/var/lib/kubelet/pods/ab3e3d49-8054-4653-a857-45138337b2f7/volumes" Dec 10 15:48:38 crc kubenswrapper[4755]: I1210 15:48:38.150751 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-distributor-664b687b54-qvsww" podUID="ad77f530-dc0b-44ec-b4e2-c580cfe568fe" containerName="loki-distributor" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 10 15:48:38 crc kubenswrapper[4755]: I1210 15:48:38.391652 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-querier-5467947bf7-qpg72" podUID="f9c583d4-e5d0-4c13-9989-dea15920e9e6" containerName="loki-querier" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 10 15:48:43 crc kubenswrapper[4755]: I1210 15:48:43.176521 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-sntj7"] Dec 10 15:48:43 crc kubenswrapper[4755]: E1210 15:48:43.177663 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab3e3d49-8054-4653-a857-45138337b2f7" containerName="init" Dec 10 15:48:43 crc kubenswrapper[4755]: I1210 15:48:43.177679 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab3e3d49-8054-4653-a857-45138337b2f7" containerName="init" Dec 10 15:48:43 crc kubenswrapper[4755]: E1210 15:48:43.177711 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab3e3d49-8054-4653-a857-45138337b2f7" containerName="dnsmasq-dns" Dec 10 15:48:43 crc kubenswrapper[4755]: I1210 15:48:43.177717 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab3e3d49-8054-4653-a857-45138337b2f7" containerName="dnsmasq-dns" Dec 10 15:48:43 crc kubenswrapper[4755]: I1210 15:48:43.178208 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab3e3d49-8054-4653-a857-45138337b2f7" containerName="dnsmasq-dns" Dec 10 15:48:43 crc kubenswrapper[4755]: I1210 15:48:43.179862 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-sntj7" Dec 10 15:48:43 crc kubenswrapper[4755]: I1210 15:48:43.184504 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-74mg7" Dec 10 15:48:43 crc kubenswrapper[4755]: I1210 15:48:43.184589 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 10 15:48:43 crc kubenswrapper[4755]: I1210 15:48:43.184942 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 10 15:48:43 crc kubenswrapper[4755]: I1210 15:48:43.190686 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 10 15:48:43 crc kubenswrapper[4755]: I1210 15:48:43.202913 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-sntj7"] Dec 10 15:48:43 crc kubenswrapper[4755]: I1210 15:48:43.233503 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0700ff42-76b3-4d25-aa15-323a506bb50b-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-sntj7\" (UID: \"0700ff42-76b3-4d25-aa15-323a506bb50b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-sntj7" Dec 10 15:48:43 crc kubenswrapper[4755]: I1210 15:48:43.233637 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kw9x\" (UniqueName: \"kubernetes.io/projected/0700ff42-76b3-4d25-aa15-323a506bb50b-kube-api-access-4kw9x\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-sntj7\" (UID: \"0700ff42-76b3-4d25-aa15-323a506bb50b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-sntj7" Dec 10 15:48:43 crc kubenswrapper[4755]: I1210 15:48:43.233719 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0700ff42-76b3-4d25-aa15-323a506bb50b-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-sntj7\" (UID: \"0700ff42-76b3-4d25-aa15-323a506bb50b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-sntj7" Dec 10 15:48:43 crc kubenswrapper[4755]: I1210 15:48:43.233797 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0700ff42-76b3-4d25-aa15-323a506bb50b-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-sntj7\" (UID: \"0700ff42-76b3-4d25-aa15-323a506bb50b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-sntj7" Dec 10 15:48:43 crc kubenswrapper[4755]: I1210 15:48:43.336459 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kw9x\" (UniqueName: \"kubernetes.io/projected/0700ff42-76b3-4d25-aa15-323a506bb50b-kube-api-access-4kw9x\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-sntj7\" (UID: \"0700ff42-76b3-4d25-aa15-323a506bb50b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-sntj7" Dec 10 15:48:43 crc kubenswrapper[4755]: I1210 15:48:43.336799 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0700ff42-76b3-4d25-aa15-323a506bb50b-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-sntj7\" (UID: \"0700ff42-76b3-4d25-aa15-323a506bb50b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-sntj7" Dec 10 15:48:43 crc kubenswrapper[4755]: I1210 15:48:43.336903 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0700ff42-76b3-4d25-aa15-323a506bb50b-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-sntj7\" (UID: \"0700ff42-76b3-4d25-aa15-323a506bb50b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-sntj7" Dec 10 15:48:43 crc kubenswrapper[4755]: I1210 15:48:43.336969 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0700ff42-76b3-4d25-aa15-323a506bb50b-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-sntj7\" (UID: \"0700ff42-76b3-4d25-aa15-323a506bb50b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-sntj7" Dec 10 15:48:43 crc kubenswrapper[4755]: I1210 15:48:43.342293 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0700ff42-76b3-4d25-aa15-323a506bb50b-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-sntj7\" (UID: \"0700ff42-76b3-4d25-aa15-323a506bb50b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-sntj7" Dec 10 15:48:43 crc kubenswrapper[4755]: I1210 15:48:43.342439 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0700ff42-76b3-4d25-aa15-323a506bb50b-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-sntj7\" (UID: \"0700ff42-76b3-4d25-aa15-323a506bb50b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-sntj7" Dec 10 15:48:43 crc kubenswrapper[4755]: I1210 15:48:43.343533 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0700ff42-76b3-4d25-aa15-323a506bb50b-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-sntj7\" (UID: \"0700ff42-76b3-4d25-aa15-323a506bb50b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-sntj7" Dec 10 15:48:43 crc kubenswrapper[4755]: I1210 15:48:43.353495 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kw9x\" (UniqueName: \"kubernetes.io/projected/0700ff42-76b3-4d25-aa15-323a506bb50b-kube-api-access-4kw9x\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-sntj7\" (UID: \"0700ff42-76b3-4d25-aa15-323a506bb50b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-sntj7" Dec 10 15:48:43 crc kubenswrapper[4755]: I1210 15:48:43.525965 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-sntj7" Dec 10 15:48:44 crc kubenswrapper[4755]: I1210 15:48:44.087319 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-sntj7"] Dec 10 15:48:44 crc kubenswrapper[4755]: W1210 15:48:44.089654 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0700ff42_76b3_4d25_aa15_323a506bb50b.slice/crio-23a786d9750805e431489885c14a5d3f6986e6f2308ee1f359630fc08b0647ef WatchSource:0}: Error finding container 23a786d9750805e431489885c14a5d3f6986e6f2308ee1f359630fc08b0647ef: Status 404 returned error can't find the container with id 23a786d9750805e431489885c14a5d3f6986e6f2308ee1f359630fc08b0647ef Dec 10 15:48:44 crc kubenswrapper[4755]: I1210 15:48:44.166735 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-sntj7" event={"ID":"0700ff42-76b3-4d25-aa15-323a506bb50b","Type":"ContainerStarted","Data":"23a786d9750805e431489885c14a5d3f6986e6f2308ee1f359630fc08b0647ef"} Dec 10 15:48:48 crc kubenswrapper[4755]: I1210 15:48:48.151436 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-distributor-664b687b54-qvsww" podUID="ad77f530-dc0b-44ec-b4e2-c580cfe568fe" containerName="loki-distributor" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 10 15:48:48 crc kubenswrapper[4755]: I1210 15:48:48.151975 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-distributor-664b687b54-qvsww" Dec 10 15:48:48 crc kubenswrapper[4755]: I1210 15:48:48.391137 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-querier-5467947bf7-qpg72" podUID="f9c583d4-e5d0-4c13-9989-dea15920e9e6" containerName="loki-querier" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 10 15:48:48 crc kubenswrapper[4755]: I1210 15:48:48.391519 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-querier-5467947bf7-qpg72" Dec 10 15:48:48 crc kubenswrapper[4755]: I1210 15:48:48.417281 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-compactor-0" Dec 10 15:48:48 crc kubenswrapper[4755]: I1210 15:48:48.755944 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="ceb83259-f1d9-4219-a0c3-b42d35e2dc02" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 10 15:48:49 crc kubenswrapper[4755]: I1210 15:48:49.352459 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 10 15:48:49 crc kubenswrapper[4755]: E1210 15:48:49.761315 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 15:48:49 crc kubenswrapper[4755]: E1210 15:48:49.893543 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source 
docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 10 15:48:49 crc kubenswrapper[4755]: E1210 15:48:49.893602 4755 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 10 15:48:49 crc kubenswrapper[4755]: E1210 15:48:49.893729 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mz4t5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-jfc28_openstack(998863b6-4f48-4c8b-8011-a40377686b99): ErrImagePull: initializing source 
docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 10 15:48:49 crc kubenswrapper[4755]: E1210 15:48:49.895209 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 15:48:56 crc kubenswrapper[4755]: I1210 15:48:56.301629 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-sntj7" event={"ID":"0700ff42-76b3-4d25-aa15-323a506bb50b","Type":"ContainerStarted","Data":"5d3a7eda5f9dca04baa43d392705be36c67cb5eb8e2bcc04617ec8e67355838f"} Dec 10 15:48:56 crc kubenswrapper[4755]: I1210 15:48:56.323592 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-sntj7" podStartSLOduration=1.9752972039999999 podStartE2EDuration="13.32357283s" podCreationTimestamp="2025-12-10 15:48:43 +0000 UTC" firstStartedPulling="2025-12-10 15:48:44.092439762 +0000 UTC m=+1520.693323384" lastFinishedPulling="2025-12-10 15:48:55.440715378 +0000 UTC m=+1532.041599010" observedRunningTime="2025-12-10 15:48:56.316552259 +0000 UTC m=+1532.917435891" watchObservedRunningTime="2025-12-10 15:48:56.32357283 +0000 UTC m=+1532.924456462" Dec 10 15:48:57 crc kubenswrapper[4755]: I1210 15:48:57.314449 4755 generic.go:334] "Generic (PLEG): container finished" podID="ad77f530-dc0b-44ec-b4e2-c580cfe568fe" containerID="e118e9654dcdb3ccbeee519bdf5ed84674da1321f88271483d3036cac60db4fe" exitCode=137 Dec 10 15:48:57 crc kubenswrapper[4755]: I1210 15:48:57.314689 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-distributor-664b687b54-qvsww" event={"ID":"ad77f530-dc0b-44ec-b4e2-c580cfe568fe","Type":"ContainerDied","Data":"e118e9654dcdb3ccbeee519bdf5ed84674da1321f88271483d3036cac60db4fe"} Dec 10 15:48:57 crc kubenswrapper[4755]: I1210 15:48:57.315128 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-distributor-664b687b54-qvsww" event={"ID":"ad77f530-dc0b-44ec-b4e2-c580cfe568fe","Type":"ContainerDied","Data":"126166107f5d54b220a517f51d29771518c6a3d2213cab3a9ed72878151b4100"} Dec 10 15:48:57 crc kubenswrapper[4755]: I1210 15:48:57.315145 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="126166107f5d54b220a517f51d29771518c6a3d2213cab3a9ed72878151b4100" Dec 10 15:48:57 crc kubenswrapper[4755]: I1210 15:48:57.317521 4755 generic.go:334] "Generic (PLEG): container finished" podID="b7d822e6-7034-4d4d-b3b4-07ecee1fb7cd" containerID="2c8834e81dcfd5307cc22672e14a1b6d5f82877908142397baa4a5b4e90481bf" exitCode=0 Dec 10 15:48:57 crc kubenswrapper[4755]: I1210 15:48:57.317575 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"b7d822e6-7034-4d4d-b3b4-07ecee1fb7cd","Type":"ContainerDied","Data":"2c8834e81dcfd5307cc22672e14a1b6d5f82877908142397baa4a5b4e90481bf"} Dec 10 15:48:57 crc kubenswrapper[4755]: I1210 15:48:57.658362 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-distributor-664b687b54-qvsww" Dec 10 15:48:57 crc kubenswrapper[4755]: I1210 15:48:57.762014 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlp25\" (UniqueName: \"kubernetes.io/projected/ad77f530-dc0b-44ec-b4e2-c580cfe568fe-kube-api-access-hlp25\") pod \"ad77f530-dc0b-44ec-b4e2-c580cfe568fe\" (UID: \"ad77f530-dc0b-44ec-b4e2-c580cfe568fe\") " Dec 10 15:48:57 crc kubenswrapper[4755]: I1210 15:48:57.762300 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloudkitty-lokistack-distributor-http\" (UniqueName: \"kubernetes.io/secret/ad77f530-dc0b-44ec-b4e2-c580cfe568fe-cloudkitty-lokistack-distributor-http\") pod \"ad77f530-dc0b-44ec-b4e2-c580cfe568fe\" (UID: \"ad77f530-dc0b-44ec-b4e2-c580cfe568fe\") " Dec 10 15:48:57 crc kubenswrapper[4755]: I1210 15:48:57.762393 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloudkitty-lokistack-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/ad77f530-dc0b-44ec-b4e2-c580cfe568fe-cloudkitty-lokistack-distributor-grpc\") pod \"ad77f530-dc0b-44ec-b4e2-c580cfe568fe\" (UID: \"ad77f530-dc0b-44ec-b4e2-c580cfe568fe\") " Dec 10 15:48:57 crc kubenswrapper[4755]: I1210 15:48:57.762457 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ad77f530-dc0b-44ec-b4e2-c580cfe568fe-cloudkitty-lokistack-ca-bundle\") pod \"ad77f530-dc0b-44ec-b4e2-c580cfe568fe\" (UID: \"ad77f530-dc0b-44ec-b4e2-c580cfe568fe\") " Dec 10 15:48:57 crc kubenswrapper[4755]: I1210 15:48:57.762554 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad77f530-dc0b-44ec-b4e2-c580cfe568fe-config\") pod \"ad77f530-dc0b-44ec-b4e2-c580cfe568fe\" (UID: \"ad77f530-dc0b-44ec-b4e2-c580cfe568fe\") " Dec 10 15:48:57 crc kubenswrapper[4755]: I1210 15:48:57.763325 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad77f530-dc0b-44ec-b4e2-c580cfe568fe-config" (OuterVolumeSpecName: "config") pod "ad77f530-dc0b-44ec-b4e2-c580cfe568fe" (UID: "ad77f530-dc0b-44ec-b4e2-c580cfe568fe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:48:57 crc kubenswrapper[4755]: I1210 15:48:57.763677 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad77f530-dc0b-44ec-b4e2-c580cfe568fe-cloudkitty-lokistack-ca-bundle" (OuterVolumeSpecName: "cloudkitty-lokistack-ca-bundle") pod "ad77f530-dc0b-44ec-b4e2-c580cfe568fe" (UID: "ad77f530-dc0b-44ec-b4e2-c580cfe568fe"). InnerVolumeSpecName "cloudkitty-lokistack-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:48:57 crc kubenswrapper[4755]: I1210 15:48:57.769203 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad77f530-dc0b-44ec-b4e2-c580cfe568fe-cloudkitty-lokistack-distributor-http" (OuterVolumeSpecName: "cloudkitty-lokistack-distributor-http") pod "ad77f530-dc0b-44ec-b4e2-c580cfe568fe" (UID: "ad77f530-dc0b-44ec-b4e2-c580cfe568fe"). 
InnerVolumeSpecName "cloudkitty-lokistack-distributor-http". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:48:57 crc kubenswrapper[4755]: I1210 15:48:57.771550 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad77f530-dc0b-44ec-b4e2-c580cfe568fe-cloudkitty-lokistack-distributor-grpc" (OuterVolumeSpecName: "cloudkitty-lokistack-distributor-grpc") pod "ad77f530-dc0b-44ec-b4e2-c580cfe568fe" (UID: "ad77f530-dc0b-44ec-b4e2-c580cfe568fe"). InnerVolumeSpecName "cloudkitty-lokistack-distributor-grpc". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:48:57 crc kubenswrapper[4755]: I1210 15:48:57.771593 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad77f530-dc0b-44ec-b4e2-c580cfe568fe-kube-api-access-hlp25" (OuterVolumeSpecName: "kube-api-access-hlp25") pod "ad77f530-dc0b-44ec-b4e2-c580cfe568fe" (UID: "ad77f530-dc0b-44ec-b4e2-c580cfe568fe"). InnerVolumeSpecName "kube-api-access-hlp25". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:48:57 crc kubenswrapper[4755]: I1210 15:48:57.842649 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-querier-5467947bf7-qpg72" Dec 10 15:48:57 crc kubenswrapper[4755]: I1210 15:48:57.865747 4755 reconciler_common.go:293] "Volume detached for volume \"cloudkitty-lokistack-distributor-http\" (UniqueName: \"kubernetes.io/secret/ad77f530-dc0b-44ec-b4e2-c580cfe568fe-cloudkitty-lokistack-distributor-http\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:57 crc kubenswrapper[4755]: I1210 15:48:57.866034 4755 reconciler_common.go:293] "Volume detached for volume \"cloudkitty-lokistack-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/ad77f530-dc0b-44ec-b4e2-c580cfe568fe-cloudkitty-lokistack-distributor-grpc\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:57 crc kubenswrapper[4755]: I1210 15:48:57.866166 4755 reconciler_common.go:293] "Volume detached for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ad77f530-dc0b-44ec-b4e2-c580cfe568fe-cloudkitty-lokistack-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:57 crc kubenswrapper[4755]: I1210 15:48:57.866277 4755 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad77f530-dc0b-44ec-b4e2-c580cfe568fe-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:57 crc kubenswrapper[4755]: I1210 15:48:57.866375 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hlp25\" (UniqueName: \"kubernetes.io/projected/ad77f530-dc0b-44ec-b4e2-c580cfe568fe-kube-api-access-hlp25\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:57 crc kubenswrapper[4755]: I1210 15:48:57.967900 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9c583d4-e5d0-4c13-9989-dea15920e9e6-cloudkitty-lokistack-ca-bundle\") pod \"f9c583d4-e5d0-4c13-9989-dea15920e9e6\" (UID: \"f9c583d4-e5d0-4c13-9989-dea15920e9e6\") " Dec 10 15:48:57 crc kubenswrapper[4755]: I1210 15:48:57.967969 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloudkitty-lokistack-querier-http\" (UniqueName: \"kubernetes.io/secret/f9c583d4-e5d0-4c13-9989-dea15920e9e6-cloudkitty-lokistack-querier-http\") pod \"f9c583d4-e5d0-4c13-9989-dea15920e9e6\" (UID: \"f9c583d4-e5d0-4c13-9989-dea15920e9e6\") " Dec 10 15:48:57 crc 
kubenswrapper[4755]: I1210 15:48:57.968008 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/f9c583d4-e5d0-4c13-9989-dea15920e9e6-cloudkitty-loki-s3\") pod \"f9c583d4-e5d0-4c13-9989-dea15920e9e6\" (UID: \"f9c583d4-e5d0-4c13-9989-dea15920e9e6\") " Dec 10 15:48:57 crc kubenswrapper[4755]: I1210 15:48:57.968060 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloudkitty-lokistack-querier-grpc\" (UniqueName: \"kubernetes.io/secret/f9c583d4-e5d0-4c13-9989-dea15920e9e6-cloudkitty-lokistack-querier-grpc\") pod \"f9c583d4-e5d0-4c13-9989-dea15920e9e6\" (UID: \"f9c583d4-e5d0-4c13-9989-dea15920e9e6\") " Dec 10 15:48:57 crc kubenswrapper[4755]: I1210 15:48:57.968191 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqp2z\" (UniqueName: \"kubernetes.io/projected/f9c583d4-e5d0-4c13-9989-dea15920e9e6-kube-api-access-hqp2z\") pod \"f9c583d4-e5d0-4c13-9989-dea15920e9e6\" (UID: \"f9c583d4-e5d0-4c13-9989-dea15920e9e6\") " Dec 10 15:48:57 crc kubenswrapper[4755]: I1210 15:48:57.968336 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9c583d4-e5d0-4c13-9989-dea15920e9e6-config\") pod \"f9c583d4-e5d0-4c13-9989-dea15920e9e6\" (UID: \"f9c583d4-e5d0-4c13-9989-dea15920e9e6\") " Dec 10 15:48:57 crc kubenswrapper[4755]: I1210 15:48:57.968624 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9c583d4-e5d0-4c13-9989-dea15920e9e6-cloudkitty-lokistack-ca-bundle" (OuterVolumeSpecName: "cloudkitty-lokistack-ca-bundle") pod "f9c583d4-e5d0-4c13-9989-dea15920e9e6" (UID: "f9c583d4-e5d0-4c13-9989-dea15920e9e6"). InnerVolumeSpecName "cloudkitty-lokistack-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:48:57 crc kubenswrapper[4755]: I1210 15:48:57.968975 4755 reconciler_common.go:293] "Volume detached for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9c583d4-e5d0-4c13-9989-dea15920e9e6-cloudkitty-lokistack-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:57 crc kubenswrapper[4755]: I1210 15:48:57.968999 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9c583d4-e5d0-4c13-9989-dea15920e9e6-config" (OuterVolumeSpecName: "config") pod "f9c583d4-e5d0-4c13-9989-dea15920e9e6" (UID: "f9c583d4-e5d0-4c13-9989-dea15920e9e6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:48:57 crc kubenswrapper[4755]: I1210 15:48:57.971722 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9c583d4-e5d0-4c13-9989-dea15920e9e6-cloudkitty-loki-s3" (OuterVolumeSpecName: "cloudkitty-loki-s3") pod "f9c583d4-e5d0-4c13-9989-dea15920e9e6" (UID: "f9c583d4-e5d0-4c13-9989-dea15920e9e6"). InnerVolumeSpecName "cloudkitty-loki-s3". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:48:57 crc kubenswrapper[4755]: I1210 15:48:57.972096 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9c583d4-e5d0-4c13-9989-dea15920e9e6-cloudkitty-lokistack-querier-http" (OuterVolumeSpecName: "cloudkitty-lokistack-querier-http") pod "f9c583d4-e5d0-4c13-9989-dea15920e9e6" (UID: "f9c583d4-e5d0-4c13-9989-dea15920e9e6"). InnerVolumeSpecName "cloudkitty-lokistack-querier-http". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:48:57 crc kubenswrapper[4755]: I1210 15:48:57.973449 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9c583d4-e5d0-4c13-9989-dea15920e9e6-cloudkitty-lokistack-querier-grpc" (OuterVolumeSpecName: "cloudkitty-lokistack-querier-grpc") pod "f9c583d4-e5d0-4c13-9989-dea15920e9e6" (UID: "f9c583d4-e5d0-4c13-9989-dea15920e9e6"). InnerVolumeSpecName "cloudkitty-lokistack-querier-grpc". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:48:57 crc kubenswrapper[4755]: I1210 15:48:57.979945 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9c583d4-e5d0-4c13-9989-dea15920e9e6-kube-api-access-hqp2z" (OuterVolumeSpecName: "kube-api-access-hqp2z") pod "f9c583d4-e5d0-4c13-9989-dea15920e9e6" (UID: "f9c583d4-e5d0-4c13-9989-dea15920e9e6"). InnerVolumeSpecName "kube-api-access-hqp2z". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:48:58 crc kubenswrapper[4755]: I1210 15:48:58.071528 4755 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9c583d4-e5d0-4c13-9989-dea15920e9e6-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:58 crc kubenswrapper[4755]: I1210 15:48:58.071577 4755 reconciler_common.go:293] "Volume detached for volume \"cloudkitty-lokistack-querier-http\" (UniqueName: \"kubernetes.io/secret/f9c583d4-e5d0-4c13-9989-dea15920e9e6-cloudkitty-lokistack-querier-http\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:58 crc kubenswrapper[4755]: I1210 15:48:58.071597 4755 reconciler_common.go:293] "Volume detached for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/f9c583d4-e5d0-4c13-9989-dea15920e9e6-cloudkitty-loki-s3\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:58 crc kubenswrapper[4755]: I1210 15:48:58.071613 4755 reconciler_common.go:293] "Volume detached for volume \"cloudkitty-lokistack-querier-grpc\" (UniqueName: \"kubernetes.io/secret/f9c583d4-e5d0-4c13-9989-dea15920e9e6-cloudkitty-lokistack-querier-grpc\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:58 crc kubenswrapper[4755]: I1210 15:48:58.071681 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqp2z\" (UniqueName: \"kubernetes.io/projected/f9c583d4-e5d0-4c13-9989-dea15920e9e6-kube-api-access-hqp2z\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:58 crc kubenswrapper[4755]: I1210 15:48:58.339046 4755 generic.go:334] "Generic (PLEG): container finished" podID="f9c583d4-e5d0-4c13-9989-dea15920e9e6" containerID="4164eb28ac9f29baeb1e602db84f2fe27c26cb04d570c22fd72e60c0a69e8dc2" exitCode=137 Dec 10 15:48:58 crc kubenswrapper[4755]: I1210 15:48:58.339099 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-querier-5467947bf7-qpg72" event={"ID":"f9c583d4-e5d0-4c13-9989-dea15920e9e6","Type":"ContainerDied","Data":"4164eb28ac9f29baeb1e602db84f2fe27c26cb04d570c22fd72e60c0a69e8dc2"} Dec 10 15:48:58 crc kubenswrapper[4755]: I1210 15:48:58.339349 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-querier-5467947bf7-qpg72" event={"ID":"f9c583d4-e5d0-4c13-9989-dea15920e9e6","Type":"ContainerDied","Data":"30fa51463bb53c3292d4b01384154150079c0596efdb9017176712ee8530aaf5"} Dec 10 15:48:58 crc kubenswrapper[4755]: I1210 15:48:58.339122 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-querier-5467947bf7-qpg72" Dec 10 15:48:58 crc kubenswrapper[4755]: I1210 15:48:58.339370 4755 scope.go:117] "RemoveContainer" containerID="4164eb28ac9f29baeb1e602db84f2fe27c26cb04d570c22fd72e60c0a69e8dc2" Dec 10 15:48:58 crc kubenswrapper[4755]: I1210 15:48:58.342505 4755 generic.go:334] "Generic (PLEG): container finished" podID="c5084508-a21d-4f43-bc50-2f0c7f13edbe" containerID="bb0752734f286c2a44b5604653e613e9de46509f1688c094364b4e4280b0706b" exitCode=0 Dec 10 15:48:58 crc kubenswrapper[4755]: I1210 15:48:58.342567 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c5084508-a21d-4f43-bc50-2f0c7f13edbe","Type":"ContainerDied","Data":"bb0752734f286c2a44b5604653e613e9de46509f1688c094364b4e4280b0706b"} Dec 10 15:48:58 crc kubenswrapper[4755]: I1210 15:48:58.351067 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b7d822e6-7034-4d4d-b3b4-07ecee1fb7cd","Type":"ContainerStarted","Data":"ce73c53ce5f2f04c6906fb51b347c54b0ffc768c247deaf4febc91707bca71ef"} Dec 10 15:48:58 crc kubenswrapper[4755]: I1210 15:48:58.351294 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Dec 10 15:48:58 crc kubenswrapper[4755]: I1210 15:48:58.374175 4755 generic.go:334] "Generic (PLEG): container finished" podID="8fbdd63a-fd88-4a37-85fb-08e7d21af574" containerID="4e315c5d02bd4b65abbb32f80d628db448bc67df9957d09b0d80d70ea9b98178" exitCode=137 Dec 10 15:48:58 crc kubenswrapper[4755]: I1210 15:48:58.374261 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-distributor-664b687b54-qvsww" Dec 10 15:48:58 crc kubenswrapper[4755]: I1210 15:48:58.374933 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-dr466" event={"ID":"8fbdd63a-fd88-4a37-85fb-08e7d21af574","Type":"ContainerDied","Data":"4e315c5d02bd4b65abbb32f80d628db448bc67df9957d09b0d80d70ea9b98178"} Dec 10 15:48:58 crc kubenswrapper[4755]: I1210 15:48:58.494224 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-dr466" Dec 10 15:48:58 crc kubenswrapper[4755]: I1210 15:48:58.533642 4755 scope.go:117] "RemoveContainer" containerID="4164eb28ac9f29baeb1e602db84f2fe27c26cb04d570c22fd72e60c0a69e8dc2" Dec 10 15:48:58 crc kubenswrapper[4755]: E1210 15:48:58.535055 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4164eb28ac9f29baeb1e602db84f2fe27c26cb04d570c22fd72e60c0a69e8dc2\": container with ID starting with 4164eb28ac9f29baeb1e602db84f2fe27c26cb04d570c22fd72e60c0a69e8dc2 not found: ID does not exist" containerID="4164eb28ac9f29baeb1e602db84f2fe27c26cb04d570c22fd72e60c0a69e8dc2" Dec 10 15:48:58 crc kubenswrapper[4755]: I1210 15:48:58.535112 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4164eb28ac9f29baeb1e602db84f2fe27c26cb04d570c22fd72e60c0a69e8dc2"} err="failed to get container status \"4164eb28ac9f29baeb1e602db84f2fe27c26cb04d570c22fd72e60c0a69e8dc2\": rpc error: code = NotFound desc = could not find container \"4164eb28ac9f29baeb1e602db84f2fe27c26cb04d570c22fd72e60c0a69e8dc2\": container with ID starting with 4164eb28ac9f29baeb1e602db84f2fe27c26cb04d570c22fd72e60c0a69e8dc2 not found: ID does not exist" Dec 10 15:48:58 crc kubenswrapper[4755]: I1210 15:48:58.563459 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.563433925 podStartE2EDuration="37.563433925s" podCreationTimestamp="2025-12-10 15:48:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:48:58.401988395 +0000 UTC m=+1535.002872037" watchObservedRunningTime="2025-12-10 15:48:58.563433925 +0000 UTC m=+1535.164317557" Dec 10 15:48:58 crc kubenswrapper[4755]: I1210 15:48:58.566212 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-lokistack-distributor-664b687b54-qvsww"] Dec 10 15:48:58 crc kubenswrapper[4755]: I1210 15:48:58.577658 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-lokistack-distributor-664b687b54-qvsww"] Dec 10 15:48:58 crc kubenswrapper[4755]: I1210 15:48:58.582300 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fbdd63a-fd88-4a37-85fb-08e7d21af574-config\") pod \"8fbdd63a-fd88-4a37-85fb-08e7d21af574\" (UID: \"8fbdd63a-fd88-4a37-85fb-08e7d21af574\") " Dec 10 15:48:58 crc kubenswrapper[4755]: I1210 15:48:58.582374 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloudkitty-lokistack-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/8fbdd63a-fd88-4a37-85fb-08e7d21af574-cloudkitty-lokistack-query-frontend-http\") pod \"8fbdd63a-fd88-4a37-85fb-08e7d21af574\" (UID: \"8fbdd63a-fd88-4a37-85fb-08e7d21af574\") " Dec 10 15:48:58 crc kubenswrapper[4755]: I1210 15:48:58.582709 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloudkitty-lokistack-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/8fbdd63a-fd88-4a37-85fb-08e7d21af574-cloudkitty-lokistack-query-frontend-grpc\") pod \"8fbdd63a-fd88-4a37-85fb-08e7d21af574\" (UID: \"8fbdd63a-fd88-4a37-85fb-08e7d21af574\") " Dec 10 15:48:58 crc kubenswrapper[4755]: I1210 15:48:58.582901 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-cfj4t\" (UniqueName: \"kubernetes.io/projected/8fbdd63a-fd88-4a37-85fb-08e7d21af574-kube-api-access-cfj4t\") pod \"8fbdd63a-fd88-4a37-85fb-08e7d21af574\" (UID: \"8fbdd63a-fd88-4a37-85fb-08e7d21af574\") " Dec 10 15:48:58 crc kubenswrapper[4755]: I1210 15:48:58.583044 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8fbdd63a-fd88-4a37-85fb-08e7d21af574-cloudkitty-lokistack-ca-bundle\") pod \"8fbdd63a-fd88-4a37-85fb-08e7d21af574\" (UID: \"8fbdd63a-fd88-4a37-85fb-08e7d21af574\") " Dec 10 15:48:58 crc kubenswrapper[4755]: I1210 15:48:58.583999 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fbdd63a-fd88-4a37-85fb-08e7d21af574-config" (OuterVolumeSpecName: "config") pod "8fbdd63a-fd88-4a37-85fb-08e7d21af574" (UID: "8fbdd63a-fd88-4a37-85fb-08e7d21af574"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:48:58 crc kubenswrapper[4755]: I1210 15:48:58.584380 4755 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fbdd63a-fd88-4a37-85fb-08e7d21af574-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:58 crc kubenswrapper[4755]: I1210 15:48:58.584627 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fbdd63a-fd88-4a37-85fb-08e7d21af574-cloudkitty-lokistack-ca-bundle" (OuterVolumeSpecName: "cloudkitty-lokistack-ca-bundle") pod "8fbdd63a-fd88-4a37-85fb-08e7d21af574" (UID: "8fbdd63a-fd88-4a37-85fb-08e7d21af574"). InnerVolumeSpecName "cloudkitty-lokistack-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:48:58 crc kubenswrapper[4755]: I1210 15:48:58.588559 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fbdd63a-fd88-4a37-85fb-08e7d21af574-cloudkitty-lokistack-query-frontend-http" (OuterVolumeSpecName: "cloudkitty-lokistack-query-frontend-http") pod "8fbdd63a-fd88-4a37-85fb-08e7d21af574" (UID: "8fbdd63a-fd88-4a37-85fb-08e7d21af574"). InnerVolumeSpecName "cloudkitty-lokistack-query-frontend-http". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:48:58 crc kubenswrapper[4755]: I1210 15:48:58.588642 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fbdd63a-fd88-4a37-85fb-08e7d21af574-cloudkitty-lokistack-query-frontend-grpc" (OuterVolumeSpecName: "cloudkitty-lokistack-query-frontend-grpc") pod "8fbdd63a-fd88-4a37-85fb-08e7d21af574" (UID: "8fbdd63a-fd88-4a37-85fb-08e7d21af574"). InnerVolumeSpecName "cloudkitty-lokistack-query-frontend-grpc". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:48:58 crc kubenswrapper[4755]: I1210 15:48:58.588929 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fbdd63a-fd88-4a37-85fb-08e7d21af574-kube-api-access-cfj4t" (OuterVolumeSpecName: "kube-api-access-cfj4t") pod "8fbdd63a-fd88-4a37-85fb-08e7d21af574" (UID: "8fbdd63a-fd88-4a37-85fb-08e7d21af574"). InnerVolumeSpecName "kube-api-access-cfj4t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:48:58 crc kubenswrapper[4755]: I1210 15:48:58.590843 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-lokistack-querier-5467947bf7-qpg72"] Dec 10 15:48:58 crc kubenswrapper[4755]: I1210 15:48:58.605531 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-lokistack-querier-5467947bf7-qpg72"] Dec 10 15:48:58 crc kubenswrapper[4755]: I1210 15:48:58.717167 4755 reconciler_common.go:293] "Volume detached for volume \"cloudkitty-lokistack-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/8fbdd63a-fd88-4a37-85fb-08e7d21af574-cloudkitty-lokistack-query-frontend-http\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:58 crc kubenswrapper[4755]: I1210 15:48:58.717216 4755 reconciler_common.go:293] "Volume detached for volume \"cloudkitty-lokistack-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/8fbdd63a-fd88-4a37-85fb-08e7d21af574-cloudkitty-lokistack-query-frontend-grpc\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:58 crc kubenswrapper[4755]: I1210 15:48:58.717235 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfj4t\" (UniqueName: \"kubernetes.io/projected/8fbdd63a-fd88-4a37-85fb-08e7d21af574-kube-api-access-cfj4t\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:58 crc kubenswrapper[4755]: I1210 15:48:58.717280 4755 reconciler_common.go:293] "Volume detached for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8fbdd63a-fd88-4a37-85fb-08e7d21af574-cloudkitty-lokistack-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:58 crc kubenswrapper[4755]: I1210 15:48:58.757903 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="ceb83259-f1d9-4219-a0c3-b42d35e2dc02" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 10 15:48:59 crc kubenswrapper[4755]: I1210 15:48:59.385375 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c5084508-a21d-4f43-bc50-2f0c7f13edbe","Type":"ContainerStarted","Data":"a7df5db6262cccf45c89450f311b1bfca8b963cbd28c13bf062281c54c07bd7d"} Dec 10 15:48:59 crc kubenswrapper[4755]: I1210 15:48:59.386576 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:48:59 crc kubenswrapper[4755]: I1210 15:48:59.388003 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-dr466" event={"ID":"8fbdd63a-fd88-4a37-85fb-08e7d21af574","Type":"ContainerDied","Data":"253c31853c8d3267cc701ee660fc372b8a8d5745f09a3dffb2474a8dc57ba1a4"} Dec 10 15:48:59 crc kubenswrapper[4755]: I1210 15:48:59.388033 4755 scope.go:117] "RemoveContainer" containerID="4e315c5d02bd4b65abbb32f80d628db448bc67df9957d09b0d80d70ea9b98178" Dec 10 15:48:59 crc kubenswrapper[4755]: I1210 15:48:59.388115 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-dr466" Dec 10 15:48:59 crc kubenswrapper[4755]: I1210 15:48:59.427317 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.427299608 podStartE2EDuration="36.427299608s" podCreationTimestamp="2025-12-10 15:48:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:48:59.421974143 +0000 UTC m=+1536.022857775" watchObservedRunningTime="2025-12-10 15:48:59.427299608 +0000 UTC m=+1536.028183230" Dec 10 15:48:59 crc kubenswrapper[4755]: I1210 15:48:59.446041 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-dr466"] Dec 10 15:48:59 crc kubenswrapper[4755]: I1210 15:48:59.456016 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-dr466"] Dec 10 15:48:59 crc kubenswrapper[4755]: I1210 15:48:59.778625 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fbdd63a-fd88-4a37-85fb-08e7d21af574" path="/var/lib/kubelet/pods/8fbdd63a-fd88-4a37-85fb-08e7d21af574/volumes" Dec 10 15:48:59 crc kubenswrapper[4755]: I1210 15:48:59.779400 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad77f530-dc0b-44ec-b4e2-c580cfe568fe" path="/var/lib/kubelet/pods/ad77f530-dc0b-44ec-b4e2-c580cfe568fe/volumes" Dec 10 15:48:59 crc kubenswrapper[4755]: I1210 15:48:59.780136 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9c583d4-e5d0-4c13-9989-dea15920e9e6" path="/var/lib/kubelet/pods/f9c583d4-e5d0-4c13-9989-dea15920e9e6/volumes" Dec 10 15:49:02 crc kubenswrapper[4755]: I1210 15:49:02.760270 4755 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 10 15:49:02 crc kubenswrapper[4755]: E1210 15:49:02.880281 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 10 15:49:02 crc kubenswrapper[4755]: E1210 15:49:02.880348 4755 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 10 15:49:02 crc kubenswrapper[4755]: E1210 15:49:02.880508 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5d4h5b7hfbh5ddh688h9ch55bh7chf6h5ddh68ch94h69h5c5h596h59bh569hfchc4h676hcbh64dhdbh57fh75h5c9h98h59ch679h566h77h9cq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hw9gj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(6d104bea-ecdc-4fe1-9861-fb1a19fce845): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 10 15:49:02 crc kubenswrapper[4755]: E1210 15:49:02.881933 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 15:49:03 crc kubenswrapper[4755]: E1210 15:49:03.770273 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 15:49:03 crc kubenswrapper[4755]: I1210 15:49:03.864067 4755 scope.go:117] "RemoveContainer" containerID="5b099ca757ac2e34473fbea32c250e94393a2cf6309d4f94cee5e4b8e9156cf9" Dec 10 15:49:03 crc kubenswrapper[4755]: I1210 15:49:03.888691 4755 scope.go:117] "RemoveContainer" containerID="0b07a868e55e37c9fc9e7065ab222ca3505acf032003beff0d2b8b01573b2cb9" Dec 10 15:49:03 crc kubenswrapper[4755]: I1210 15:49:03.942525 4755 scope.go:117] "RemoveContainer" containerID="e6f2e2237f54e7208333bc2fd411a233dee4e8ec6575fb8a13da88a568f0e066" Dec 10 15:49:03 crc kubenswrapper[4755]: I1210 15:49:03.999179 4755 scope.go:117] "RemoveContainer" containerID="e118e9654dcdb3ccbeee519bdf5ed84674da1321f88271483d3036cac60db4fe" Dec 10 15:49:04 crc kubenswrapper[4755]: I1210 15:49:04.047000 4755 scope.go:117] "RemoveContainer" containerID="948b47c684c1989af9d3da1cd3e56931fc5a54264da22b4c16fe7da963b631f8" Dec 10 15:49:07 crc kubenswrapper[4755]: I1210 15:49:07.481500 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-sntj7" event={"ID":"0700ff42-76b3-4d25-aa15-323a506bb50b","Type":"ContainerDied","Data":"5d3a7eda5f9dca04baa43d392705be36c67cb5eb8e2bcc04617ec8e67355838f"} Dec 10 15:49:07 crc kubenswrapper[4755]: I1210 15:49:07.481448 4755 generic.go:334] "Generic (PLEG): container finished" podID="0700ff42-76b3-4d25-aa15-323a506bb50b" containerID="5d3a7eda5f9dca04baa43d392705be36c67cb5eb8e2bcc04617ec8e67355838f" exitCode=0 Dec 10 15:49:08 crc kubenswrapper[4755]: I1210 15:49:08.760758 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="ceb83259-f1d9-4219-a0c3-b42d35e2dc02" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 10 15:49:09 crc kubenswrapper[4755]: I1210 15:49:09.034211 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-sntj7" Dec 10 15:49:09 crc kubenswrapper[4755]: I1210 15:49:09.064282 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0700ff42-76b3-4d25-aa15-323a506bb50b-ssh-key\") pod \"0700ff42-76b3-4d25-aa15-323a506bb50b\" (UID: \"0700ff42-76b3-4d25-aa15-323a506bb50b\") " Dec 10 15:49:09 crc kubenswrapper[4755]: I1210 15:49:09.064409 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4kw9x\" (UniqueName: \"kubernetes.io/projected/0700ff42-76b3-4d25-aa15-323a506bb50b-kube-api-access-4kw9x\") pod \"0700ff42-76b3-4d25-aa15-323a506bb50b\" (UID: \"0700ff42-76b3-4d25-aa15-323a506bb50b\") " Dec 10 15:49:09 crc kubenswrapper[4755]: I1210 15:49:09.065208 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0700ff42-76b3-4d25-aa15-323a506bb50b-inventory\") pod \"0700ff42-76b3-4d25-aa15-323a506bb50b\" (UID: \"0700ff42-76b3-4d25-aa15-323a506bb50b\") " Dec 10 15:49:09 crc kubenswrapper[4755]: I1210 15:49:09.065629 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0700ff42-76b3-4d25-aa15-323a506bb50b-repo-setup-combined-ca-bundle\") pod \"0700ff42-76b3-4d25-aa15-323a506bb50b\" (UID: \"0700ff42-76b3-4d25-aa15-323a506bb50b\") " Dec 10 15:49:09 crc kubenswrapper[4755]: I1210 15:49:09.070713 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0700ff42-76b3-4d25-aa15-323a506bb50b-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "0700ff42-76b3-4d25-aa15-323a506bb50b" (UID: "0700ff42-76b3-4d25-aa15-323a506bb50b"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:49:09 crc kubenswrapper[4755]: I1210 15:49:09.071292 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0700ff42-76b3-4d25-aa15-323a506bb50b-kube-api-access-4kw9x" (OuterVolumeSpecName: "kube-api-access-4kw9x") pod "0700ff42-76b3-4d25-aa15-323a506bb50b" (UID: "0700ff42-76b3-4d25-aa15-323a506bb50b"). InnerVolumeSpecName "kube-api-access-4kw9x". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:49:09 crc kubenswrapper[4755]: I1210 15:49:09.103887 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0700ff42-76b3-4d25-aa15-323a506bb50b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "0700ff42-76b3-4d25-aa15-323a506bb50b" (UID: "0700ff42-76b3-4d25-aa15-323a506bb50b"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:49:09 crc kubenswrapper[4755]: I1210 15:49:09.111653 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0700ff42-76b3-4d25-aa15-323a506bb50b-inventory" (OuterVolumeSpecName: "inventory") pod "0700ff42-76b3-4d25-aa15-323a506bb50b" (UID: "0700ff42-76b3-4d25-aa15-323a506bb50b"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:49:09 crc kubenswrapper[4755]: I1210 15:49:09.168432 4755 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0700ff42-76b3-4d25-aa15-323a506bb50b-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 10 15:49:09 crc kubenswrapper[4755]: I1210 15:49:09.168502 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4kw9x\" (UniqueName: \"kubernetes.io/projected/0700ff42-76b3-4d25-aa15-323a506bb50b-kube-api-access-4kw9x\") on node \"crc\" DevicePath \"\"" Dec 10 15:49:09 crc kubenswrapper[4755]: I1210 15:49:09.168516 4755 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0700ff42-76b3-4d25-aa15-323a506bb50b-inventory\") on node \"crc\" DevicePath \"\"" Dec 10 15:49:09 crc kubenswrapper[4755]: I1210 15:49:09.168528 4755 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0700ff42-76b3-4d25-aa15-323a506bb50b-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:49:09 crc kubenswrapper[4755]: I1210 15:49:09.506369 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-sntj7" event={"ID":"0700ff42-76b3-4d25-aa15-323a506bb50b","Type":"ContainerDied","Data":"23a786d9750805e431489885c14a5d3f6986e6f2308ee1f359630fc08b0647ef"} Dec 10 15:49:09 crc kubenswrapper[4755]: I1210 15:49:09.506419 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23a786d9750805e431489885c14a5d3f6986e6f2308ee1f359630fc08b0647ef" Dec 10 15:49:09 crc kubenswrapper[4755]: I1210 15:49:09.506430 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-sntj7" Dec 10 15:49:09 crc kubenswrapper[4755]: I1210 15:49:09.591810 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-7jwjf"] Dec 10 15:49:09 crc kubenswrapper[4755]: E1210 15:49:09.594611 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0700ff42-76b3-4d25-aa15-323a506bb50b" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 10 15:49:09 crc kubenswrapper[4755]: I1210 15:49:09.594647 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="0700ff42-76b3-4d25-aa15-323a506bb50b" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 10 15:49:09 crc kubenswrapper[4755]: E1210 15:49:09.594694 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9c583d4-e5d0-4c13-9989-dea15920e9e6" containerName="loki-querier" Dec 10 15:49:09 crc kubenswrapper[4755]: I1210 15:49:09.594700 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9c583d4-e5d0-4c13-9989-dea15920e9e6" containerName="loki-querier" Dec 10 15:49:09 crc kubenswrapper[4755]: E1210 15:49:09.594713 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fbdd63a-fd88-4a37-85fb-08e7d21af574" containerName="loki-query-frontend" Dec 10 15:49:09 crc kubenswrapper[4755]: I1210 15:49:09.594910 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fbdd63a-fd88-4a37-85fb-08e7d21af574" containerName="loki-query-frontend" Dec 10 15:49:09 crc kubenswrapper[4755]: E1210 15:49:09.594928 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad77f530-dc0b-44ec-b4e2-c580cfe568fe" containerName="loki-distributor" Dec 10 15:49:09 crc kubenswrapper[4755]: I1210 15:49:09.594935 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad77f530-dc0b-44ec-b4e2-c580cfe568fe" containerName="loki-distributor" Dec 10 15:49:09 crc kubenswrapper[4755]: I1210 15:49:09.595168 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fbdd63a-fd88-4a37-85fb-08e7d21af574" containerName="loki-query-frontend" Dec 10 15:49:09 crc kubenswrapper[4755]: I1210 15:49:09.595191 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9c583d4-e5d0-4c13-9989-dea15920e9e6" containerName="loki-querier" Dec 10 15:49:09 crc kubenswrapper[4755]: I1210 15:49:09.595208 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="0700ff42-76b3-4d25-aa15-323a506bb50b" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 10 15:49:09 crc kubenswrapper[4755]: I1210 15:49:09.595235 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad77f530-dc0b-44ec-b4e2-c580cfe568fe" containerName="loki-distributor" Dec 10 15:49:09 crc kubenswrapper[4755]: I1210 15:49:09.597483 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7jwjf" Dec 10 15:49:09 crc kubenswrapper[4755]: I1210 15:49:09.599997 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 10 15:49:09 crc kubenswrapper[4755]: I1210 15:49:09.600232 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 10 15:49:09 crc kubenswrapper[4755]: I1210 15:49:09.600408 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-74mg7" Dec 10 15:49:09 crc kubenswrapper[4755]: I1210 15:49:09.600580 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 10 15:49:09 crc kubenswrapper[4755]: I1210 15:49:09.622825 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-7jwjf"] Dec 10 15:49:09 crc kubenswrapper[4755]: I1210 15:49:09.676122 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7c07d7db-e17e-446c-9576-8baae941768e-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7jwjf\" (UID: \"7c07d7db-e17e-446c-9576-8baae941768e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7jwjf" Dec 10 15:49:09 crc kubenswrapper[4755]: I1210 15:49:09.676486 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mbrs\" (UniqueName: \"kubernetes.io/projected/7c07d7db-e17e-446c-9576-8baae941768e-kube-api-access-8mbrs\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7jwjf\" (UID: \"7c07d7db-e17e-446c-9576-8baae941768e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7jwjf" Dec 10 15:49:09 crc kubenswrapper[4755]: I1210 15:49:09.676530 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7c07d7db-e17e-446c-9576-8baae941768e-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7jwjf\" (UID: \"7c07d7db-e17e-446c-9576-8baae941768e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7jwjf" Dec 10 15:49:09 crc kubenswrapper[4755]: I1210 15:49:09.778624 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mbrs\" (UniqueName: \"kubernetes.io/projected/7c07d7db-e17e-446c-9576-8baae941768e-kube-api-access-8mbrs\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7jwjf\" (UID: \"7c07d7db-e17e-446c-9576-8baae941768e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7jwjf" Dec 10 15:49:09 crc kubenswrapper[4755]: I1210 15:49:09.778676 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7c07d7db-e17e-446c-9576-8baae941768e-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7jwjf\" (UID: \"7c07d7db-e17e-446c-9576-8baae941768e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7jwjf" Dec 10 15:49:09 crc kubenswrapper[4755]: I1210 15:49:09.779648 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7c07d7db-e17e-446c-9576-8baae941768e-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7jwjf\" (UID: \"7c07d7db-e17e-446c-9576-8baae941768e\") " 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7jwjf" Dec 10 15:49:09 crc kubenswrapper[4755]: I1210 15:49:09.783400 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7c07d7db-e17e-446c-9576-8baae941768e-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7jwjf\" (UID: \"7c07d7db-e17e-446c-9576-8baae941768e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7jwjf" Dec 10 15:49:09 crc kubenswrapper[4755]: I1210 15:49:09.783551 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7c07d7db-e17e-446c-9576-8baae941768e-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7jwjf\" (UID: \"7c07d7db-e17e-446c-9576-8baae941768e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7jwjf" Dec 10 15:49:09 crc kubenswrapper[4755]: I1210 15:49:09.798406 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mbrs\" (UniqueName: \"kubernetes.io/projected/7c07d7db-e17e-446c-9576-8baae941768e-kube-api-access-8mbrs\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7jwjf\" (UID: \"7c07d7db-e17e-446c-9576-8baae941768e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7jwjf" Dec 10 15:49:09 crc kubenswrapper[4755]: I1210 15:49:09.928038 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7jwjf" Dec 10 15:49:10 crc kubenswrapper[4755]: I1210 15:49:10.359166 4755 patch_prober.go:28] interesting pod/machine-config-daemon-ggt8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:49:10 crc kubenswrapper[4755]: I1210 15:49:10.359546 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:49:10 crc kubenswrapper[4755]: W1210 15:49:10.608672 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c07d7db_e17e_446c_9576_8baae941768e.slice/crio-926968d304acd9843fe404105aeb7583c6ec66ee3d3a5405bae0c71064c2628b WatchSource:0}: Error finding container 926968d304acd9843fe404105aeb7583c6ec66ee3d3a5405bae0c71064c2628b: Status 404 returned error can't find the container with id 926968d304acd9843fe404105aeb7583c6ec66ee3d3a5405bae0c71064c2628b Dec 10 15:49:10 crc kubenswrapper[4755]: I1210 15:49:10.617457 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-7jwjf"] Dec 10 15:49:11 crc kubenswrapper[4755]: I1210 15:49:11.526990 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7jwjf" event={"ID":"7c07d7db-e17e-446c-9576-8baae941768e","Type":"ContainerStarted","Data":"5a4832eea2e42d13a4f487ccb63bda52885e319464e370345fbbfc678547e5f4"} Dec 10 15:49:11 crc kubenswrapper[4755]: I1210 15:49:11.527514 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7jwjf" 
event={"ID":"7c07d7db-e17e-446c-9576-8baae941768e","Type":"ContainerStarted","Data":"926968d304acd9843fe404105aeb7583c6ec66ee3d3a5405bae0c71064c2628b"} Dec 10 15:49:11 crc kubenswrapper[4755]: I1210 15:49:11.551401 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7jwjf" podStartSLOduration=2.063651075 podStartE2EDuration="2.551382418s" podCreationTimestamp="2025-12-10 15:49:09 +0000 UTC" firstStartedPulling="2025-12-10 15:49:10.611360139 +0000 UTC m=+1547.212243771" lastFinishedPulling="2025-12-10 15:49:11.099091482 +0000 UTC m=+1547.699975114" observedRunningTime="2025-12-10 15:49:11.544533302 +0000 UTC m=+1548.145416944" watchObservedRunningTime="2025-12-10 15:49:11.551382418 +0000 UTC m=+1548.152266050" Dec 10 15:49:12 crc kubenswrapper[4755]: I1210 15:49:12.053713 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Dec 10 15:49:13 crc kubenswrapper[4755]: I1210 15:49:13.431642 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:49:14 crc kubenswrapper[4755]: I1210 15:49:14.575179 4755 generic.go:334] "Generic (PLEG): container finished" podID="7c07d7db-e17e-446c-9576-8baae941768e" containerID="5a4832eea2e42d13a4f487ccb63bda52885e319464e370345fbbfc678547e5f4" exitCode=0 Dec 10 15:49:14 crc kubenswrapper[4755]: I1210 15:49:14.575233 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7jwjf" event={"ID":"7c07d7db-e17e-446c-9576-8baae941768e","Type":"ContainerDied","Data":"5a4832eea2e42d13a4f487ccb63bda52885e319464e370345fbbfc678547e5f4"} Dec 10 15:49:14 crc kubenswrapper[4755]: E1210 15:49:14.759593 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 15:49:16 crc kubenswrapper[4755]: I1210 15:49:16.214389 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7jwjf" Dec 10 15:49:16 crc kubenswrapper[4755]: I1210 15:49:16.331490 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8mbrs\" (UniqueName: \"kubernetes.io/projected/7c07d7db-e17e-446c-9576-8baae941768e-kube-api-access-8mbrs\") pod \"7c07d7db-e17e-446c-9576-8baae941768e\" (UID: \"7c07d7db-e17e-446c-9576-8baae941768e\") " Dec 10 15:49:16 crc kubenswrapper[4755]: I1210 15:49:16.331562 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7c07d7db-e17e-446c-9576-8baae941768e-ssh-key\") pod \"7c07d7db-e17e-446c-9576-8baae941768e\" (UID: \"7c07d7db-e17e-446c-9576-8baae941768e\") " Dec 10 15:49:16 crc kubenswrapper[4755]: I1210 15:49:16.331680 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7c07d7db-e17e-446c-9576-8baae941768e-inventory\") pod \"7c07d7db-e17e-446c-9576-8baae941768e\" (UID: \"7c07d7db-e17e-446c-9576-8baae941768e\") " Dec 10 15:49:16 crc kubenswrapper[4755]: I1210 15:49:16.337356 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c07d7db-e17e-446c-9576-8baae941768e-kube-api-access-8mbrs" (OuterVolumeSpecName: "kube-api-access-8mbrs") pod "7c07d7db-e17e-446c-9576-8baae941768e" (UID: "7c07d7db-e17e-446c-9576-8baae941768e"). InnerVolumeSpecName "kube-api-access-8mbrs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:49:16 crc kubenswrapper[4755]: I1210 15:49:16.364812 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c07d7db-e17e-446c-9576-8baae941768e-inventory" (OuterVolumeSpecName: "inventory") pod "7c07d7db-e17e-446c-9576-8baae941768e" (UID: "7c07d7db-e17e-446c-9576-8baae941768e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:49:16 crc kubenswrapper[4755]: I1210 15:49:16.379684 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c07d7db-e17e-446c-9576-8baae941768e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "7c07d7db-e17e-446c-9576-8baae941768e" (UID: "7c07d7db-e17e-446c-9576-8baae941768e"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:49:16 crc kubenswrapper[4755]: I1210 15:49:16.434891 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8mbrs\" (UniqueName: \"kubernetes.io/projected/7c07d7db-e17e-446c-9576-8baae941768e-kube-api-access-8mbrs\") on node \"crc\" DevicePath \"\"" Dec 10 15:49:16 crc kubenswrapper[4755]: I1210 15:49:16.434933 4755 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7c07d7db-e17e-446c-9576-8baae941768e-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 10 15:49:16 crc kubenswrapper[4755]: I1210 15:49:16.434948 4755 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7c07d7db-e17e-446c-9576-8baae941768e-inventory\") on node \"crc\" DevicePath \"\"" Dec 10 15:49:16 crc kubenswrapper[4755]: I1210 15:49:16.598984 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7jwjf" event={"ID":"7c07d7db-e17e-446c-9576-8baae941768e","Type":"ContainerDied","Data":"926968d304acd9843fe404105aeb7583c6ec66ee3d3a5405bae0c71064c2628b"} Dec 10 15:49:16 crc kubenswrapper[4755]: I1210 15:49:16.599035 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="926968d304acd9843fe404105aeb7583c6ec66ee3d3a5405bae0c71064c2628b" Dec 10 15:49:16 crc kubenswrapper[4755]: I1210 15:49:16.599057 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7jwjf" Dec 10 15:49:16 crc kubenswrapper[4755]: I1210 15:49:16.662998 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jwpvp"] Dec 10 15:49:16 crc kubenswrapper[4755]: E1210 15:49:16.663589 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c07d7db-e17e-446c-9576-8baae941768e" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 10 15:49:16 crc kubenswrapper[4755]: I1210 15:49:16.663615 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c07d7db-e17e-446c-9576-8baae941768e" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 10 15:49:16 crc kubenswrapper[4755]: I1210 15:49:16.663898 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c07d7db-e17e-446c-9576-8baae941768e" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 10 15:49:16 crc kubenswrapper[4755]: I1210 15:49:16.664833 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jwpvp" Dec 10 15:49:16 crc kubenswrapper[4755]: I1210 15:49:16.670640 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-74mg7" Dec 10 15:49:16 crc kubenswrapper[4755]: I1210 15:49:16.670921 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 10 15:49:16 crc kubenswrapper[4755]: I1210 15:49:16.671079 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 10 15:49:16 crc kubenswrapper[4755]: I1210 15:49:16.671228 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 10 15:49:16 crc kubenswrapper[4755]: I1210 15:49:16.676863 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jwpvp"] Dec 10 15:49:16 crc kubenswrapper[4755]: I1210 15:49:16.741817 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c36d5e87-d120-4bf2-8680-cd2c7634f1cf-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jwpvp\" (UID: \"c36d5e87-d120-4bf2-8680-cd2c7634f1cf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jwpvp" Dec 10 15:49:16 crc kubenswrapper[4755]: I1210 15:49:16.741898 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9j429\" (UniqueName: \"kubernetes.io/projected/c36d5e87-d120-4bf2-8680-cd2c7634f1cf-kube-api-access-9j429\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jwpvp\" (UID: \"c36d5e87-d120-4bf2-8680-cd2c7634f1cf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jwpvp" Dec 10 15:49:16 crc kubenswrapper[4755]: I1210 15:49:16.741925 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c36d5e87-d120-4bf2-8680-cd2c7634f1cf-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jwpvp\" (UID: \"c36d5e87-d120-4bf2-8680-cd2c7634f1cf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jwpvp" Dec 10 15:49:16 crc kubenswrapper[4755]: I1210 15:49:16.741991 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c36d5e87-d120-4bf2-8680-cd2c7634f1cf-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jwpvp\" (UID: \"c36d5e87-d120-4bf2-8680-cd2c7634f1cf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jwpvp" Dec 10 15:49:16 crc kubenswrapper[4755]: E1210 15:49:16.759797 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 15:49:16 crc kubenswrapper[4755]: I1210 15:49:16.844165 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c36d5e87-d120-4bf2-8680-cd2c7634f1cf-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jwpvp\" (UID: 
\"c36d5e87-d120-4bf2-8680-cd2c7634f1cf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jwpvp" Dec 10 15:49:16 crc kubenswrapper[4755]: I1210 15:49:16.844367 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c36d5e87-d120-4bf2-8680-cd2c7634f1cf-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jwpvp\" (UID: \"c36d5e87-d120-4bf2-8680-cd2c7634f1cf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jwpvp" Dec 10 15:49:16 crc kubenswrapper[4755]: I1210 15:49:16.844387 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9j429\" (UniqueName: \"kubernetes.io/projected/c36d5e87-d120-4bf2-8680-cd2c7634f1cf-kube-api-access-9j429\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jwpvp\" (UID: \"c36d5e87-d120-4bf2-8680-cd2c7634f1cf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jwpvp" Dec 10 15:49:16 crc kubenswrapper[4755]: I1210 15:49:16.844493 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c36d5e87-d120-4bf2-8680-cd2c7634f1cf-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jwpvp\" (UID: \"c36d5e87-d120-4bf2-8680-cd2c7634f1cf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jwpvp" Dec 10 15:49:16 crc kubenswrapper[4755]: I1210 15:49:16.855508 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c36d5e87-d120-4bf2-8680-cd2c7634f1cf-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jwpvp\" (UID: \"c36d5e87-d120-4bf2-8680-cd2c7634f1cf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jwpvp" Dec 10 15:49:16 crc kubenswrapper[4755]: I1210 15:49:16.856111 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c36d5e87-d120-4bf2-8680-cd2c7634f1cf-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jwpvp\" (UID: \"c36d5e87-d120-4bf2-8680-cd2c7634f1cf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jwpvp" Dec 10 15:49:16 crc kubenswrapper[4755]: I1210 15:49:16.856322 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c36d5e87-d120-4bf2-8680-cd2c7634f1cf-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jwpvp\" (UID: \"c36d5e87-d120-4bf2-8680-cd2c7634f1cf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jwpvp" Dec 10 15:49:16 crc kubenswrapper[4755]: I1210 15:49:16.872993 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9j429\" (UniqueName: \"kubernetes.io/projected/c36d5e87-d120-4bf2-8680-cd2c7634f1cf-kube-api-access-9j429\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jwpvp\" (UID: \"c36d5e87-d120-4bf2-8680-cd2c7634f1cf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jwpvp" Dec 10 15:49:16 crc kubenswrapper[4755]: I1210 15:49:16.995722 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jwpvp" Dec 10 15:49:17 crc kubenswrapper[4755]: I1210 15:49:17.625313 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jwpvp"] Dec 10 15:49:17 crc kubenswrapper[4755]: W1210 15:49:17.627072 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc36d5e87_d120_4bf2_8680_cd2c7634f1cf.slice/crio-4067039b32b41f41426a3cc0a22cea29f3c18aef326ac39199e25dd7a2a0037a WatchSource:0}: Error finding container 4067039b32b41f41426a3cc0a22cea29f3c18aef326ac39199e25dd7a2a0037a: Status 404 returned error can't find the container with id 4067039b32b41f41426a3cc0a22cea29f3c18aef326ac39199e25dd7a2a0037a Dec 10 15:49:18 crc kubenswrapper[4755]: I1210 15:49:18.618550 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jwpvp" event={"ID":"c36d5e87-d120-4bf2-8680-cd2c7634f1cf","Type":"ContainerStarted","Data":"4067039b32b41f41426a3cc0a22cea29f3c18aef326ac39199e25dd7a2a0037a"} Dec 10 15:49:18 crc kubenswrapper[4755]: I1210 15:49:18.751617 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="ceb83259-f1d9-4219-a0c3-b42d35e2dc02" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 10 15:49:19 crc kubenswrapper[4755]: I1210 15:49:19.630550 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jwpvp" event={"ID":"c36d5e87-d120-4bf2-8680-cd2c7634f1cf","Type":"ContainerStarted","Data":"2ae79d726ebb743c4ee8aa20ec4401422923266b0f1468d4784ad753be0f47d3"} Dec 10 15:49:26 crc kubenswrapper[4755]: I1210 15:49:26.051418 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jwpvp" podStartSLOduration=8.648457155 podStartE2EDuration="10.051394707s" podCreationTimestamp="2025-12-10 15:49:16 +0000 UTC" firstStartedPulling="2025-12-10 15:49:17.63239042 +0000 UTC m=+1554.233274052" lastFinishedPulling="2025-12-10 15:49:19.035327972 +0000 UTC m=+1555.636211604" observedRunningTime="2025-12-10 15:49:19.64517176 +0000 UTC m=+1556.246055392" watchObservedRunningTime="2025-12-10 15:49:26.051394707 +0000 UTC m=+1562.652278329" Dec 10 15:49:26 crc kubenswrapper[4755]: I1210 15:49:26.054434 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vq99v"] Dec 10 15:49:26 crc kubenswrapper[4755]: I1210 15:49:26.057165 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vq99v" Dec 10 15:49:26 crc kubenswrapper[4755]: I1210 15:49:26.065672 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vq99v"] Dec 10 15:49:26 crc kubenswrapper[4755]: I1210 15:49:26.162154 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb95b371-84cf-44f5-9c90-1e7abc4317e5-catalog-content\") pod \"community-operators-vq99v\" (UID: \"bb95b371-84cf-44f5-9c90-1e7abc4317e5\") " pod="openshift-marketplace/community-operators-vq99v" Dec 10 15:49:26 crc kubenswrapper[4755]: I1210 15:49:26.162219 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb95b371-84cf-44f5-9c90-1e7abc4317e5-utilities\") pod \"community-operators-vq99v\" (UID: \"bb95b371-84cf-44f5-9c90-1e7abc4317e5\") " pod="openshift-marketplace/community-operators-vq99v" Dec 10 15:49:26 crc kubenswrapper[4755]: I1210 15:49:26.162298 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qv9k\" (UniqueName: \"kubernetes.io/projected/bb95b371-84cf-44f5-9c90-1e7abc4317e5-kube-api-access-9qv9k\") pod \"community-operators-vq99v\" (UID: \"bb95b371-84cf-44f5-9c90-1e7abc4317e5\") " pod="openshift-marketplace/community-operators-vq99v" Dec 10 15:49:26 crc kubenswrapper[4755]: I1210 15:49:26.264172 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb95b371-84cf-44f5-9c90-1e7abc4317e5-catalog-content\") pod \"community-operators-vq99v\" (UID: \"bb95b371-84cf-44f5-9c90-1e7abc4317e5\") " pod="openshift-marketplace/community-operators-vq99v" Dec 10 15:49:26 crc kubenswrapper[4755]: I1210 15:49:26.264233 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb95b371-84cf-44f5-9c90-1e7abc4317e5-utilities\") pod \"community-operators-vq99v\" (UID: \"bb95b371-84cf-44f5-9c90-1e7abc4317e5\") " pod="openshift-marketplace/community-operators-vq99v" Dec 10 15:49:26 crc kubenswrapper[4755]: I1210 15:49:26.264299 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qv9k\" (UniqueName: \"kubernetes.io/projected/bb95b371-84cf-44f5-9c90-1e7abc4317e5-kube-api-access-9qv9k\") pod \"community-operators-vq99v\" (UID: \"bb95b371-84cf-44f5-9c90-1e7abc4317e5\") " pod="openshift-marketplace/community-operators-vq99v" Dec 10 15:49:26 crc kubenswrapper[4755]: I1210 15:49:26.265119 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb95b371-84cf-44f5-9c90-1e7abc4317e5-utilities\") pod \"community-operators-vq99v\" (UID: \"bb95b371-84cf-44f5-9c90-1e7abc4317e5\") " pod="openshift-marketplace/community-operators-vq99v" Dec 10 15:49:26 crc kubenswrapper[4755]: I1210 15:49:26.265359 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb95b371-84cf-44f5-9c90-1e7abc4317e5-catalog-content\") pod \"community-operators-vq99v\" (UID: \"bb95b371-84cf-44f5-9c90-1e7abc4317e5\") " pod="openshift-marketplace/community-operators-vq99v" Dec 10 15:49:26 crc kubenswrapper[4755]: I1210 15:49:26.296507 4755 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-9qv9k\" (UniqueName: \"kubernetes.io/projected/bb95b371-84cf-44f5-9c90-1e7abc4317e5-kube-api-access-9qv9k\") pod \"community-operators-vq99v\" (UID: \"bb95b371-84cf-44f5-9c90-1e7abc4317e5\") " pod="openshift-marketplace/community-operators-vq99v" Dec 10 15:49:26 crc kubenswrapper[4755]: I1210 15:49:26.382444 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vq99v" Dec 10 15:49:26 crc kubenswrapper[4755]: I1210 15:49:26.955941 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vq99v"] Dec 10 15:49:26 crc kubenswrapper[4755]: W1210 15:49:26.956072 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb95b371_84cf_44f5_9c90_1e7abc4317e5.slice/crio-b01654ab557cc60f5a91baffeffc26c81e6e0cbd76db0244788844a192622451 WatchSource:0}: Error finding container b01654ab557cc60f5a91baffeffc26c81e6e0cbd76db0244788844a192622451: Status 404 returned error can't find the container with id b01654ab557cc60f5a91baffeffc26c81e6e0cbd76db0244788844a192622451 Dec 10 15:49:27 crc kubenswrapper[4755]: I1210 15:49:27.711438 4755 generic.go:334] "Generic (PLEG): container finished" podID="bb95b371-84cf-44f5-9c90-1e7abc4317e5" containerID="375da8e07d3659dfa71058f2091e918fdb3c5702f1e1a4737d48a0087299803b" exitCode=0 Dec 10 15:49:27 crc kubenswrapper[4755]: I1210 15:49:27.711499 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vq99v" event={"ID":"bb95b371-84cf-44f5-9c90-1e7abc4317e5","Type":"ContainerDied","Data":"375da8e07d3659dfa71058f2091e918fdb3c5702f1e1a4737d48a0087299803b"} Dec 10 15:49:27 crc kubenswrapper[4755]: I1210 15:49:27.711959 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vq99v" event={"ID":"bb95b371-84cf-44f5-9c90-1e7abc4317e5","Type":"ContainerStarted","Data":"b01654ab557cc60f5a91baffeffc26c81e6e0cbd76db0244788844a192622451"} Dec 10 15:49:27 crc kubenswrapper[4755]: E1210 15:49:27.759077 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 15:49:27 crc kubenswrapper[4755]: E1210 15:49:27.759129 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 15:49:28 crc kubenswrapper[4755]: I1210 15:49:28.752820 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-ingester-0" Dec 10 15:49:29 crc kubenswrapper[4755]: I1210 15:49:29.733994 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vq99v" event={"ID":"bb95b371-84cf-44f5-9c90-1e7abc4317e5","Type":"ContainerStarted","Data":"fb0aed23d03209c88f5b51e6682d63b6f3ea369525783a2d3138f9d3955fdb6d"} Dec 10 15:49:31 crc kubenswrapper[4755]: I1210 15:49:31.758457 4755 generic.go:334] "Generic (PLEG): container 
finished" podID="bb95b371-84cf-44f5-9c90-1e7abc4317e5" containerID="fb0aed23d03209c88f5b51e6682d63b6f3ea369525783a2d3138f9d3955fdb6d" exitCode=0 Dec 10 15:49:31 crc kubenswrapper[4755]: I1210 15:49:31.770343 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vq99v" event={"ID":"bb95b371-84cf-44f5-9c90-1e7abc4317e5","Type":"ContainerDied","Data":"fb0aed23d03209c88f5b51e6682d63b6f3ea369525783a2d3138f9d3955fdb6d"} Dec 10 15:49:32 crc kubenswrapper[4755]: I1210 15:49:32.787273 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vq99v" event={"ID":"bb95b371-84cf-44f5-9c90-1e7abc4317e5","Type":"ContainerStarted","Data":"54a8e5485ee78dca155964e26290b01f9e463aca2de7316599d7ad04c9282b4e"} Dec 10 15:49:32 crc kubenswrapper[4755]: I1210 15:49:32.817904 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vq99v" podStartSLOduration=2.343320013 podStartE2EDuration="6.817879086s" podCreationTimestamp="2025-12-10 15:49:26 +0000 UTC" firstStartedPulling="2025-12-10 15:49:27.713460578 +0000 UTC m=+1564.314344210" lastFinishedPulling="2025-12-10 15:49:32.188019651 +0000 UTC m=+1568.788903283" observedRunningTime="2025-12-10 15:49:32.809663682 +0000 UTC m=+1569.410547334" watchObservedRunningTime="2025-12-10 15:49:32.817879086 +0000 UTC m=+1569.418762718" Dec 10 15:49:36 crc kubenswrapper[4755]: I1210 15:49:36.383361 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vq99v" Dec 10 15:49:36 crc kubenswrapper[4755]: I1210 15:49:36.383740 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vq99v" Dec 10 15:49:36 crc kubenswrapper[4755]: I1210 15:49:36.435961 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vq99v" Dec 10 15:49:38 crc kubenswrapper[4755]: E1210 15:49:38.884670 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 10 15:49:38 crc kubenswrapper[4755]: E1210 15:49:38.885306 4755 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 10 15:49:38 crc kubenswrapper[4755]: E1210 15:49:38.885435 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mz4t5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-jfc28_openstack(998863b6-4f48-4c8b-8011-a40377686b99): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 10 15:49:38 crc kubenswrapper[4755]: E1210 15:49:38.886797 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 15:49:40 crc kubenswrapper[4755]: I1210 15:49:40.358943 4755 patch_prober.go:28] interesting pod/machine-config-daemon-ggt8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:49:40 crc kubenswrapper[4755]: I1210 15:49:40.359013 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:49:42 crc kubenswrapper[4755]: E1210 15:49:42.759813 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 15:49:46 crc kubenswrapper[4755]: I1210 15:49:46.432737 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vq99v" Dec 10 15:49:46 crc kubenswrapper[4755]: I1210 15:49:46.484649 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vq99v"] Dec 10 15:49:46 crc kubenswrapper[4755]: I1210 15:49:46.938747 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vq99v" podUID="bb95b371-84cf-44f5-9c90-1e7abc4317e5" containerName="registry-server" containerID="cri-o://54a8e5485ee78dca155964e26290b01f9e463aca2de7316599d7ad04c9282b4e" gracePeriod=2 Dec 10 15:49:47 crc kubenswrapper[4755]: I1210 15:49:47.951130 4755 generic.go:334] "Generic (PLEG): container finished" podID="bb95b371-84cf-44f5-9c90-1e7abc4317e5" containerID="54a8e5485ee78dca155964e26290b01f9e463aca2de7316599d7ad04c9282b4e" exitCode=0 Dec 10 15:49:47 crc kubenswrapper[4755]: I1210 15:49:47.951223 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vq99v" event={"ID":"bb95b371-84cf-44f5-9c90-1e7abc4317e5","Type":"ContainerDied","Data":"54a8e5485ee78dca155964e26290b01f9e463aca2de7316599d7ad04c9282b4e"} Dec 10 15:49:49 crc kubenswrapper[4755]: I1210 15:49:49.087941 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vq99v" Dec 10 15:49:49 crc kubenswrapper[4755]: I1210 15:49:49.151862 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qv9k\" (UniqueName: \"kubernetes.io/projected/bb95b371-84cf-44f5-9c90-1e7abc4317e5-kube-api-access-9qv9k\") pod \"bb95b371-84cf-44f5-9c90-1e7abc4317e5\" (UID: \"bb95b371-84cf-44f5-9c90-1e7abc4317e5\") " Dec 10 15:49:49 crc kubenswrapper[4755]: I1210 15:49:49.151906 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb95b371-84cf-44f5-9c90-1e7abc4317e5-utilities\") pod \"bb95b371-84cf-44f5-9c90-1e7abc4317e5\" (UID: \"bb95b371-84cf-44f5-9c90-1e7abc4317e5\") " Dec 10 15:49:49 crc kubenswrapper[4755]: I1210 15:49:49.151945 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb95b371-84cf-44f5-9c90-1e7abc4317e5-catalog-content\") pod \"bb95b371-84cf-44f5-9c90-1e7abc4317e5\" (UID: \"bb95b371-84cf-44f5-9c90-1e7abc4317e5\") " Dec 10 15:49:49 crc kubenswrapper[4755]: I1210 15:49:49.152727 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb95b371-84cf-44f5-9c90-1e7abc4317e5-utilities" (OuterVolumeSpecName: "utilities") pod "bb95b371-84cf-44f5-9c90-1e7abc4317e5" (UID: "bb95b371-84cf-44f5-9c90-1e7abc4317e5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:49:49 crc kubenswrapper[4755]: I1210 15:49:49.153293 4755 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb95b371-84cf-44f5-9c90-1e7abc4317e5-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 15:49:49 crc kubenswrapper[4755]: I1210 15:49:49.157493 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb95b371-84cf-44f5-9c90-1e7abc4317e5-kube-api-access-9qv9k" (OuterVolumeSpecName: "kube-api-access-9qv9k") pod "bb95b371-84cf-44f5-9c90-1e7abc4317e5" (UID: "bb95b371-84cf-44f5-9c90-1e7abc4317e5"). InnerVolumeSpecName "kube-api-access-9qv9k". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:49:49 crc kubenswrapper[4755]: I1210 15:49:49.204983 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb95b371-84cf-44f5-9c90-1e7abc4317e5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bb95b371-84cf-44f5-9c90-1e7abc4317e5" (UID: "bb95b371-84cf-44f5-9c90-1e7abc4317e5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:49:49 crc kubenswrapper[4755]: I1210 15:49:49.255335 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qv9k\" (UniqueName: \"kubernetes.io/projected/bb95b371-84cf-44f5-9c90-1e7abc4317e5-kube-api-access-9qv9k\") on node \"crc\" DevicePath \"\"" Dec 10 15:49:49 crc kubenswrapper[4755]: I1210 15:49:49.255674 4755 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb95b371-84cf-44f5-9c90-1e7abc4317e5-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 15:49:49 crc kubenswrapper[4755]: I1210 15:49:49.977564 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vq99v" event={"ID":"bb95b371-84cf-44f5-9c90-1e7abc4317e5","Type":"ContainerDied","Data":"b01654ab557cc60f5a91baffeffc26c81e6e0cbd76db0244788844a192622451"} Dec 10 15:49:49 crc kubenswrapper[4755]: I1210 15:49:49.977872 4755 scope.go:117] "RemoveContainer" containerID="54a8e5485ee78dca155964e26290b01f9e463aca2de7316599d7ad04c9282b4e" Dec 10 15:49:49 crc kubenswrapper[4755]: I1210 15:49:49.977641 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vq99v" Dec 10 15:49:49 crc kubenswrapper[4755]: I1210 15:49:49.999394 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vq99v"] Dec 10 15:49:50 crc kubenswrapper[4755]: I1210 15:49:50.004803 4755 scope.go:117] "RemoveContainer" containerID="fb0aed23d03209c88f5b51e6682d63b6f3ea369525783a2d3138f9d3955fdb6d" Dec 10 15:49:50 crc kubenswrapper[4755]: I1210 15:49:50.008694 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vq99v"] Dec 10 15:49:50 crc kubenswrapper[4755]: I1210 15:49:50.028391 4755 scope.go:117] "RemoveContainer" containerID="375da8e07d3659dfa71058f2091e918fdb3c5702f1e1a4737d48a0087299803b" Dec 10 15:49:51 crc kubenswrapper[4755]: I1210 15:49:51.769566 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb95b371-84cf-44f5-9c90-1e7abc4317e5" path="/var/lib/kubelet/pods/bb95b371-84cf-44f5-9c90-1e7abc4317e5/volumes" Dec 10 15:49:52 crc kubenswrapper[4755]: E1210 15:49:52.761256 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 15:49:57 crc kubenswrapper[4755]: I1210 15:49:57.702844 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dhtcf"] Dec 10 15:49:57 crc kubenswrapper[4755]: E1210 15:49:57.706252 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb95b371-84cf-44f5-9c90-1e7abc4317e5" containerName="registry-server" Dec 10 15:49:57 crc kubenswrapper[4755]: I1210 15:49:57.706279 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb95b371-84cf-44f5-9c90-1e7abc4317e5" containerName="registry-server" Dec 10 15:49:57 crc kubenswrapper[4755]: E1210 15:49:57.706299 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb95b371-84cf-44f5-9c90-1e7abc4317e5" containerName="extract-utilities" Dec 10 15:49:57 crc kubenswrapper[4755]: I1210 15:49:57.706311 4755 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="bb95b371-84cf-44f5-9c90-1e7abc4317e5" containerName="extract-utilities" Dec 10 15:49:57 crc kubenswrapper[4755]: E1210 15:49:57.706336 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb95b371-84cf-44f5-9c90-1e7abc4317e5" containerName="extract-content" Dec 10 15:49:57 crc kubenswrapper[4755]: I1210 15:49:57.706342 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb95b371-84cf-44f5-9c90-1e7abc4317e5" containerName="extract-content" Dec 10 15:49:57 crc kubenswrapper[4755]: I1210 15:49:57.706560 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb95b371-84cf-44f5-9c90-1e7abc4317e5" containerName="registry-server" Dec 10 15:49:57 crc kubenswrapper[4755]: I1210 15:49:57.708303 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dhtcf" Dec 10 15:49:57 crc kubenswrapper[4755]: I1210 15:49:57.718296 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dhtcf"] Dec 10 15:49:57 crc kubenswrapper[4755]: I1210 15:49:57.821264 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qq4pb\" (UniqueName: \"kubernetes.io/projected/7482badc-03b7-4594-86dd-755f4d346232-kube-api-access-qq4pb\") pod \"redhat-marketplace-dhtcf\" (UID: \"7482badc-03b7-4594-86dd-755f4d346232\") " pod="openshift-marketplace/redhat-marketplace-dhtcf" Dec 10 15:49:57 crc kubenswrapper[4755]: I1210 15:49:57.821311 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7482badc-03b7-4594-86dd-755f4d346232-catalog-content\") pod \"redhat-marketplace-dhtcf\" (UID: \"7482badc-03b7-4594-86dd-755f4d346232\") " pod="openshift-marketplace/redhat-marketplace-dhtcf" Dec 10 15:49:57 crc kubenswrapper[4755]: I1210 15:49:57.821519 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7482badc-03b7-4594-86dd-755f4d346232-utilities\") pod \"redhat-marketplace-dhtcf\" (UID: \"7482badc-03b7-4594-86dd-755f4d346232\") " pod="openshift-marketplace/redhat-marketplace-dhtcf" Dec 10 15:49:57 crc kubenswrapper[4755]: E1210 15:49:57.882206 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 10 15:49:57 crc kubenswrapper[4755]: E1210 15:49:57.882292 4755 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 10 15:49:57 crc kubenswrapper[4755]: E1210 15:49:57.882438 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5d4h5b7hfbh5ddh688h9ch55bh7chf6h5ddh68ch94h69h5c5h596h59bh569hfchc4h676hcbh64dhdbh57fh75h5c9h98h59ch679h566h77h9cq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hw9gj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(6d104bea-ecdc-4fe1-9861-fb1a19fce845): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 10 15:49:57 crc kubenswrapper[4755]: E1210 15:49:57.883678 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 15:49:57 crc kubenswrapper[4755]: I1210 15:49:57.923457 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7482badc-03b7-4594-86dd-755f4d346232-utilities\") pod \"redhat-marketplace-dhtcf\" (UID: \"7482badc-03b7-4594-86dd-755f4d346232\") " pod="openshift-marketplace/redhat-marketplace-dhtcf" Dec 10 15:49:57 crc kubenswrapper[4755]: I1210 15:49:57.923661 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qq4pb\" (UniqueName: \"kubernetes.io/projected/7482badc-03b7-4594-86dd-755f4d346232-kube-api-access-qq4pb\") pod \"redhat-marketplace-dhtcf\" (UID: \"7482badc-03b7-4594-86dd-755f4d346232\") " pod="openshift-marketplace/redhat-marketplace-dhtcf" Dec 10 15:49:57 crc kubenswrapper[4755]: I1210 15:49:57.923699 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7482badc-03b7-4594-86dd-755f4d346232-catalog-content\") pod \"redhat-marketplace-dhtcf\" (UID: \"7482badc-03b7-4594-86dd-755f4d346232\") " pod="openshift-marketplace/redhat-marketplace-dhtcf" Dec 10 15:49:57 crc kubenswrapper[4755]: I1210 15:49:57.924190 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7482badc-03b7-4594-86dd-755f4d346232-catalog-content\") pod \"redhat-marketplace-dhtcf\" (UID: \"7482badc-03b7-4594-86dd-755f4d346232\") " pod="openshift-marketplace/redhat-marketplace-dhtcf" Dec 10 15:49:57 crc kubenswrapper[4755]: I1210 15:49:57.924595 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7482badc-03b7-4594-86dd-755f4d346232-utilities\") pod \"redhat-marketplace-dhtcf\" (UID: \"7482badc-03b7-4594-86dd-755f4d346232\") " pod="openshift-marketplace/redhat-marketplace-dhtcf" Dec 10 15:49:57 crc kubenswrapper[4755]: I1210 15:49:57.943599 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qq4pb\" (UniqueName: \"kubernetes.io/projected/7482badc-03b7-4594-86dd-755f4d346232-kube-api-access-qq4pb\") pod \"redhat-marketplace-dhtcf\" (UID: \"7482badc-03b7-4594-86dd-755f4d346232\") " pod="openshift-marketplace/redhat-marketplace-dhtcf" Dec 10 15:49:58 crc kubenswrapper[4755]: I1210 15:49:58.041126 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dhtcf" Dec 10 15:49:58 crc kubenswrapper[4755]: I1210 15:49:58.530524 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dhtcf"] Dec 10 15:49:59 crc kubenswrapper[4755]: I1210 15:49:59.064700 4755 generic.go:334] "Generic (PLEG): container finished" podID="7482badc-03b7-4594-86dd-755f4d346232" containerID="1aff9276a5b27e398b655a53c6e21367a4234d2b13bdd80ad9e7083bc1edc173" exitCode=0 Dec 10 15:49:59 crc kubenswrapper[4755]: I1210 15:49:59.064796 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dhtcf" event={"ID":"7482badc-03b7-4594-86dd-755f4d346232","Type":"ContainerDied","Data":"1aff9276a5b27e398b655a53c6e21367a4234d2b13bdd80ad9e7083bc1edc173"} Dec 10 15:49:59 crc kubenswrapper[4755]: I1210 15:49:59.065203 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dhtcf" event={"ID":"7482badc-03b7-4594-86dd-755f4d346232","Type":"ContainerStarted","Data":"bb39ac4d4f42af33343994dfae0a80e6dda863c16814aaff987ef7cbcaaca5f7"} Dec 10 15:50:00 crc kubenswrapper[4755]: I1210 15:50:00.080732 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dhtcf" event={"ID":"7482badc-03b7-4594-86dd-755f4d346232","Type":"ContainerStarted","Data":"c5db2bb697d25ac1c981a3a3b2d18fff8da44809c425f28ef686a443b7b5adbd"} Dec 10 15:50:01 crc kubenswrapper[4755]: I1210 15:50:01.094769 4755 generic.go:334] "Generic (PLEG): container finished" podID="7482badc-03b7-4594-86dd-755f4d346232" containerID="c5db2bb697d25ac1c981a3a3b2d18fff8da44809c425f28ef686a443b7b5adbd" exitCode=0 Dec 10 15:50:01 crc kubenswrapper[4755]: I1210 15:50:01.094829 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dhtcf" event={"ID":"7482badc-03b7-4594-86dd-755f4d346232","Type":"ContainerDied","Data":"c5db2bb697d25ac1c981a3a3b2d18fff8da44809c425f28ef686a443b7b5adbd"} Dec 10 15:50:02 crc kubenswrapper[4755]: I1210 15:50:02.686825 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bp4xt"] Dec 10 15:50:02 crc kubenswrapper[4755]: I1210 15:50:02.689480 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bp4xt" Dec 10 15:50:02 crc kubenswrapper[4755]: I1210 15:50:02.703731 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bp4xt"] Dec 10 15:50:02 crc kubenswrapper[4755]: I1210 15:50:02.829321 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d510d56d-9e16-40ca-9359-6cc1b2a29927-catalog-content\") pod \"certified-operators-bp4xt\" (UID: \"d510d56d-9e16-40ca-9359-6cc1b2a29927\") " pod="openshift-marketplace/certified-operators-bp4xt" Dec 10 15:50:02 crc kubenswrapper[4755]: I1210 15:50:02.829827 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d510d56d-9e16-40ca-9359-6cc1b2a29927-utilities\") pod \"certified-operators-bp4xt\" (UID: \"d510d56d-9e16-40ca-9359-6cc1b2a29927\") " pod="openshift-marketplace/certified-operators-bp4xt" Dec 10 15:50:02 crc kubenswrapper[4755]: I1210 15:50:02.829975 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhstm\" (UniqueName: \"kubernetes.io/projected/d510d56d-9e16-40ca-9359-6cc1b2a29927-kube-api-access-nhstm\") pod \"certified-operators-bp4xt\" (UID: \"d510d56d-9e16-40ca-9359-6cc1b2a29927\") " pod="openshift-marketplace/certified-operators-bp4xt" Dec 10 15:50:02 crc kubenswrapper[4755]: I1210 15:50:02.932194 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d510d56d-9e16-40ca-9359-6cc1b2a29927-utilities\") pod \"certified-operators-bp4xt\" (UID: \"d510d56d-9e16-40ca-9359-6cc1b2a29927\") " pod="openshift-marketplace/certified-operators-bp4xt" Dec 10 15:50:02 crc kubenswrapper[4755]: I1210 15:50:02.932383 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhstm\" (UniqueName: \"kubernetes.io/projected/d510d56d-9e16-40ca-9359-6cc1b2a29927-kube-api-access-nhstm\") pod \"certified-operators-bp4xt\" (UID: \"d510d56d-9e16-40ca-9359-6cc1b2a29927\") " pod="openshift-marketplace/certified-operators-bp4xt" Dec 10 15:50:02 crc kubenswrapper[4755]: I1210 15:50:02.932505 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d510d56d-9e16-40ca-9359-6cc1b2a29927-catalog-content\") pod \"certified-operators-bp4xt\" (UID: \"d510d56d-9e16-40ca-9359-6cc1b2a29927\") " pod="openshift-marketplace/certified-operators-bp4xt" Dec 10 15:50:02 crc kubenswrapper[4755]: I1210 15:50:02.935831 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d510d56d-9e16-40ca-9359-6cc1b2a29927-utilities\") pod \"certified-operators-bp4xt\" (UID: \"d510d56d-9e16-40ca-9359-6cc1b2a29927\") " pod="openshift-marketplace/certified-operators-bp4xt" Dec 10 15:50:02 crc kubenswrapper[4755]: I1210 15:50:02.936144 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d510d56d-9e16-40ca-9359-6cc1b2a29927-catalog-content\") pod \"certified-operators-bp4xt\" (UID: \"d510d56d-9e16-40ca-9359-6cc1b2a29927\") " pod="openshift-marketplace/certified-operators-bp4xt" Dec 10 15:50:02 crc kubenswrapper[4755]: I1210 15:50:02.973421 4755 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-nhstm\" (UniqueName: \"kubernetes.io/projected/d510d56d-9e16-40ca-9359-6cc1b2a29927-kube-api-access-nhstm\") pod \"certified-operators-bp4xt\" (UID: \"d510d56d-9e16-40ca-9359-6cc1b2a29927\") " pod="openshift-marketplace/certified-operators-bp4xt" Dec 10 15:50:03 crc kubenswrapper[4755]: I1210 15:50:03.013068 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bp4xt" Dec 10 15:50:03 crc kubenswrapper[4755]: I1210 15:50:03.118195 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dhtcf" event={"ID":"7482badc-03b7-4594-86dd-755f4d346232","Type":"ContainerStarted","Data":"46ad03dfa43220af0581f73300be1978bc6a2878115d0dc316fd0620959de6bc"} Dec 10 15:50:03 crc kubenswrapper[4755]: I1210 15:50:03.154832 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dhtcf" podStartSLOduration=2.996696152 podStartE2EDuration="6.154807896s" podCreationTimestamp="2025-12-10 15:49:57 +0000 UTC" firstStartedPulling="2025-12-10 15:49:59.066616797 +0000 UTC m=+1595.667500429" lastFinishedPulling="2025-12-10 15:50:02.224728531 +0000 UTC m=+1598.825612173" observedRunningTime="2025-12-10 15:50:03.150850788 +0000 UTC m=+1599.751734420" watchObservedRunningTime="2025-12-10 15:50:03.154807896 +0000 UTC m=+1599.755691518" Dec 10 15:50:03 crc kubenswrapper[4755]: I1210 15:50:03.702871 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bp4xt"] Dec 10 15:50:03 crc kubenswrapper[4755]: W1210 15:50:03.704764 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd510d56d_9e16_40ca_9359_6cc1b2a29927.slice/crio-f9b0744f4f9d320d88fd26b4f4634bb91f8bdb30d758180201fbe78a83c027c5 WatchSource:0}: Error finding container f9b0744f4f9d320d88fd26b4f4634bb91f8bdb30d758180201fbe78a83c027c5: Status 404 returned error can't find the container with id f9b0744f4f9d320d88fd26b4f4634bb91f8bdb30d758180201fbe78a83c027c5 Dec 10 15:50:03 crc kubenswrapper[4755]: E1210 15:50:03.773769 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 15:50:04 crc kubenswrapper[4755]: I1210 15:50:04.130830 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bp4xt" event={"ID":"d510d56d-9e16-40ca-9359-6cc1b2a29927","Type":"ContainerStarted","Data":"f9b0744f4f9d320d88fd26b4f4634bb91f8bdb30d758180201fbe78a83c027c5"} Dec 10 15:50:04 crc kubenswrapper[4755]: I1210 15:50:04.363800 4755 scope.go:117] "RemoveContainer" containerID="f6f038b2833539a2083146e7c928b3f14dfe295137a5074a381347edec0d9d9c" Dec 10 15:50:05 crc kubenswrapper[4755]: I1210 15:50:05.142314 4755 generic.go:334] "Generic (PLEG): container finished" podID="d510d56d-9e16-40ca-9359-6cc1b2a29927" containerID="1c11de928c1adf910bc2990cd90c71d4ceca47590e85410286830fd45c1ca5d4" exitCode=0 Dec 10 15:50:05 crc kubenswrapper[4755]: I1210 15:50:05.142415 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bp4xt" 
event={"ID":"d510d56d-9e16-40ca-9359-6cc1b2a29927","Type":"ContainerDied","Data":"1c11de928c1adf910bc2990cd90c71d4ceca47590e85410286830fd45c1ca5d4"} Dec 10 15:50:06 crc kubenswrapper[4755]: I1210 15:50:06.155173 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bp4xt" event={"ID":"d510d56d-9e16-40ca-9359-6cc1b2a29927","Type":"ContainerStarted","Data":"f4685c9ed5e3823d5c7fc59cb84120c709d159ee0138d871a7fc446fe2a95ed9"} Dec 10 15:50:08 crc kubenswrapper[4755]: I1210 15:50:08.041548 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dhtcf" Dec 10 15:50:08 crc kubenswrapper[4755]: I1210 15:50:08.041866 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dhtcf" Dec 10 15:50:08 crc kubenswrapper[4755]: I1210 15:50:08.088950 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dhtcf" Dec 10 15:50:08 crc kubenswrapper[4755]: I1210 15:50:08.174637 4755 generic.go:334] "Generic (PLEG): container finished" podID="d510d56d-9e16-40ca-9359-6cc1b2a29927" containerID="f4685c9ed5e3823d5c7fc59cb84120c709d159ee0138d871a7fc446fe2a95ed9" exitCode=0 Dec 10 15:50:08 crc kubenswrapper[4755]: I1210 15:50:08.175722 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bp4xt" event={"ID":"d510d56d-9e16-40ca-9359-6cc1b2a29927","Type":"ContainerDied","Data":"f4685c9ed5e3823d5c7fc59cb84120c709d159ee0138d871a7fc446fe2a95ed9"} Dec 10 15:50:08 crc kubenswrapper[4755]: I1210 15:50:08.248239 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dhtcf" Dec 10 15:50:09 crc kubenswrapper[4755]: I1210 15:50:09.185888 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bp4xt" event={"ID":"d510d56d-9e16-40ca-9359-6cc1b2a29927","Type":"ContainerStarted","Data":"479f3ab8982ff521e53a895d5bdd28790ab105dd1ba1ae17544216dad8b9006d"} Dec 10 15:50:09 crc kubenswrapper[4755]: I1210 15:50:09.206785 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bp4xt" podStartSLOduration=3.717414324 podStartE2EDuration="7.206763937s" podCreationTimestamp="2025-12-10 15:50:02 +0000 UTC" firstStartedPulling="2025-12-10 15:50:05.144836124 +0000 UTC m=+1601.745719756" lastFinishedPulling="2025-12-10 15:50:08.634185737 +0000 UTC m=+1605.235069369" observedRunningTime="2025-12-10 15:50:09.203566349 +0000 UTC m=+1605.804449991" watchObservedRunningTime="2025-12-10 15:50:09.206763937 +0000 UTC m=+1605.807647569" Dec 10 15:50:10 crc kubenswrapper[4755]: I1210 15:50:10.285346 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dhtcf"] Dec 10 15:50:10 crc kubenswrapper[4755]: I1210 15:50:10.285787 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-dhtcf" podUID="7482badc-03b7-4594-86dd-755f4d346232" containerName="registry-server" containerID="cri-o://46ad03dfa43220af0581f73300be1978bc6a2878115d0dc316fd0620959de6bc" gracePeriod=2 Dec 10 15:50:10 crc kubenswrapper[4755]: I1210 15:50:10.358833 4755 patch_prober.go:28] interesting pod/machine-config-daemon-ggt8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:50:10 crc kubenswrapper[4755]: I1210 15:50:10.358909 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:50:10 crc kubenswrapper[4755]: I1210 15:50:10.358961 4755 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" Dec 10 15:50:10 crc kubenswrapper[4755]: I1210 15:50:10.359869 4755 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"008e8b27aa48ce8c618284ca4dccd38cf79c00478318f7aaaa34c326eeb5ea52"} pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 10 15:50:10 crc kubenswrapper[4755]: I1210 15:50:10.359947 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" containerName="machine-config-daemon" containerID="cri-o://008e8b27aa48ce8c618284ca4dccd38cf79c00478318f7aaaa34c326eeb5ea52" gracePeriod=600 Dec 10 15:50:10 crc kubenswrapper[4755]: E1210 15:50:10.484920 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 15:50:10 crc kubenswrapper[4755]: E1210 15:50:10.759170 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 15:50:10 crc kubenswrapper[4755]: I1210 15:50:10.840840 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dhtcf" Dec 10 15:50:10 crc kubenswrapper[4755]: I1210 15:50:10.916015 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qq4pb\" (UniqueName: \"kubernetes.io/projected/7482badc-03b7-4594-86dd-755f4d346232-kube-api-access-qq4pb\") pod \"7482badc-03b7-4594-86dd-755f4d346232\" (UID: \"7482badc-03b7-4594-86dd-755f4d346232\") " Dec 10 15:50:10 crc kubenswrapper[4755]: I1210 15:50:10.916379 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7482badc-03b7-4594-86dd-755f4d346232-catalog-content\") pod \"7482badc-03b7-4594-86dd-755f4d346232\" (UID: \"7482badc-03b7-4594-86dd-755f4d346232\") " Dec 10 15:50:10 crc kubenswrapper[4755]: I1210 15:50:10.916506 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7482badc-03b7-4594-86dd-755f4d346232-utilities\") pod \"7482badc-03b7-4594-86dd-755f4d346232\" (UID: \"7482badc-03b7-4594-86dd-755f4d346232\") " Dec 10 15:50:10 crc kubenswrapper[4755]: I1210 15:50:10.917144 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7482badc-03b7-4594-86dd-755f4d346232-utilities" (OuterVolumeSpecName: "utilities") pod "7482badc-03b7-4594-86dd-755f4d346232" (UID: "7482badc-03b7-4594-86dd-755f4d346232"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:50:10 crc kubenswrapper[4755]: I1210 15:50:10.922366 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7482badc-03b7-4594-86dd-755f4d346232-kube-api-access-qq4pb" (OuterVolumeSpecName: "kube-api-access-qq4pb") pod "7482badc-03b7-4594-86dd-755f4d346232" (UID: "7482badc-03b7-4594-86dd-755f4d346232"). InnerVolumeSpecName "kube-api-access-qq4pb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:50:10 crc kubenswrapper[4755]: I1210 15:50:10.937345 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7482badc-03b7-4594-86dd-755f4d346232-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7482badc-03b7-4594-86dd-755f4d346232" (UID: "7482badc-03b7-4594-86dd-755f4d346232"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:50:11 crc kubenswrapper[4755]: I1210 15:50:11.018110 4755 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7482badc-03b7-4594-86dd-755f4d346232-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 15:50:11 crc kubenswrapper[4755]: I1210 15:50:11.018409 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qq4pb\" (UniqueName: \"kubernetes.io/projected/7482badc-03b7-4594-86dd-755f4d346232-kube-api-access-qq4pb\") on node \"crc\" DevicePath \"\"" Dec 10 15:50:11 crc kubenswrapper[4755]: I1210 15:50:11.018506 4755 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7482badc-03b7-4594-86dd-755f4d346232-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 15:50:11 crc kubenswrapper[4755]: I1210 15:50:11.208318 4755 generic.go:334] "Generic (PLEG): container finished" podID="b132a8b9-1c99-414d-8773-229bf36b305d" containerID="008e8b27aa48ce8c618284ca4dccd38cf79c00478318f7aaaa34c326eeb5ea52" exitCode=0 Dec 10 15:50:11 crc kubenswrapper[4755]: I1210 15:50:11.208378 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" event={"ID":"b132a8b9-1c99-414d-8773-229bf36b305d","Type":"ContainerDied","Data":"008e8b27aa48ce8c618284ca4dccd38cf79c00478318f7aaaa34c326eeb5ea52"} Dec 10 15:50:11 crc kubenswrapper[4755]: I1210 15:50:11.208411 4755 scope.go:117] "RemoveContainer" containerID="b9b7f6e29c3e4593fa445fe830b0d353f5a037cd1634fd06b5f6ef129b3368c3" Dec 10 15:50:11 crc kubenswrapper[4755]: I1210 15:50:11.209184 4755 scope.go:117] "RemoveContainer" containerID="008e8b27aa48ce8c618284ca4dccd38cf79c00478318f7aaaa34c326eeb5ea52" Dec 10 15:50:11 crc kubenswrapper[4755]: E1210 15:50:11.209439 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 15:50:11 crc kubenswrapper[4755]: I1210 15:50:11.213035 4755 generic.go:334] "Generic (PLEG): container finished" podID="7482badc-03b7-4594-86dd-755f4d346232" containerID="46ad03dfa43220af0581f73300be1978bc6a2878115d0dc316fd0620959de6bc" exitCode=0 Dec 10 15:50:11 crc kubenswrapper[4755]: I1210 15:50:11.213086 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dhtcf" event={"ID":"7482badc-03b7-4594-86dd-755f4d346232","Type":"ContainerDied","Data":"46ad03dfa43220af0581f73300be1978bc6a2878115d0dc316fd0620959de6bc"} Dec 10 15:50:11 crc kubenswrapper[4755]: I1210 15:50:11.213129 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dhtcf" event={"ID":"7482badc-03b7-4594-86dd-755f4d346232","Type":"ContainerDied","Data":"bb39ac4d4f42af33343994dfae0a80e6dda863c16814aaff987ef7cbcaaca5f7"} Dec 10 15:50:11 crc kubenswrapper[4755]: I1210 15:50:11.213202 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dhtcf" Dec 10 15:50:11 crc kubenswrapper[4755]: I1210 15:50:11.277952 4755 scope.go:117] "RemoveContainer" containerID="46ad03dfa43220af0581f73300be1978bc6a2878115d0dc316fd0620959de6bc" Dec 10 15:50:11 crc kubenswrapper[4755]: I1210 15:50:11.283812 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dhtcf"] Dec 10 15:50:11 crc kubenswrapper[4755]: I1210 15:50:11.295798 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-dhtcf"] Dec 10 15:50:11 crc kubenswrapper[4755]: I1210 15:50:11.314275 4755 scope.go:117] "RemoveContainer" containerID="c5db2bb697d25ac1c981a3a3b2d18fff8da44809c425f28ef686a443b7b5adbd" Dec 10 15:50:11 crc kubenswrapper[4755]: I1210 15:50:11.373650 4755 scope.go:117] "RemoveContainer" containerID="1aff9276a5b27e398b655a53c6e21367a4234d2b13bdd80ad9e7083bc1edc173" Dec 10 15:50:11 crc kubenswrapper[4755]: I1210 15:50:11.394030 4755 scope.go:117] "RemoveContainer" containerID="46ad03dfa43220af0581f73300be1978bc6a2878115d0dc316fd0620959de6bc" Dec 10 15:50:11 crc kubenswrapper[4755]: E1210 15:50:11.394542 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46ad03dfa43220af0581f73300be1978bc6a2878115d0dc316fd0620959de6bc\": container with ID starting with 46ad03dfa43220af0581f73300be1978bc6a2878115d0dc316fd0620959de6bc not found: ID does not exist" containerID="46ad03dfa43220af0581f73300be1978bc6a2878115d0dc316fd0620959de6bc" Dec 10 15:50:11 crc kubenswrapper[4755]: I1210 15:50:11.394617 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46ad03dfa43220af0581f73300be1978bc6a2878115d0dc316fd0620959de6bc"} err="failed to get container status \"46ad03dfa43220af0581f73300be1978bc6a2878115d0dc316fd0620959de6bc\": rpc error: code = NotFound desc = could not find container \"46ad03dfa43220af0581f73300be1978bc6a2878115d0dc316fd0620959de6bc\": container with ID starting with 46ad03dfa43220af0581f73300be1978bc6a2878115d0dc316fd0620959de6bc not found: ID does not exist" Dec 10 15:50:11 crc kubenswrapper[4755]: I1210 15:50:11.394640 4755 scope.go:117] "RemoveContainer" containerID="c5db2bb697d25ac1c981a3a3b2d18fff8da44809c425f28ef686a443b7b5adbd" Dec 10 15:50:11 crc kubenswrapper[4755]: E1210 15:50:11.394859 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5db2bb697d25ac1c981a3a3b2d18fff8da44809c425f28ef686a443b7b5adbd\": container with ID starting with c5db2bb697d25ac1c981a3a3b2d18fff8da44809c425f28ef686a443b7b5adbd not found: ID does not exist" containerID="c5db2bb697d25ac1c981a3a3b2d18fff8da44809c425f28ef686a443b7b5adbd" Dec 10 15:50:11 crc kubenswrapper[4755]: I1210 15:50:11.394945 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5db2bb697d25ac1c981a3a3b2d18fff8da44809c425f28ef686a443b7b5adbd"} err="failed to get container status \"c5db2bb697d25ac1c981a3a3b2d18fff8da44809c425f28ef686a443b7b5adbd\": rpc error: code = NotFound desc = could not find container \"c5db2bb697d25ac1c981a3a3b2d18fff8da44809c425f28ef686a443b7b5adbd\": container with ID starting with c5db2bb697d25ac1c981a3a3b2d18fff8da44809c425f28ef686a443b7b5adbd not found: ID does not exist" Dec 10 15:50:11 crc kubenswrapper[4755]: I1210 15:50:11.395037 4755 scope.go:117] "RemoveContainer" 
containerID="1aff9276a5b27e398b655a53c6e21367a4234d2b13bdd80ad9e7083bc1edc173" Dec 10 15:50:11 crc kubenswrapper[4755]: E1210 15:50:11.395740 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1aff9276a5b27e398b655a53c6e21367a4234d2b13bdd80ad9e7083bc1edc173\": container with ID starting with 1aff9276a5b27e398b655a53c6e21367a4234d2b13bdd80ad9e7083bc1edc173 not found: ID does not exist" containerID="1aff9276a5b27e398b655a53c6e21367a4234d2b13bdd80ad9e7083bc1edc173" Dec 10 15:50:11 crc kubenswrapper[4755]: I1210 15:50:11.395773 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1aff9276a5b27e398b655a53c6e21367a4234d2b13bdd80ad9e7083bc1edc173"} err="failed to get container status \"1aff9276a5b27e398b655a53c6e21367a4234d2b13bdd80ad9e7083bc1edc173\": rpc error: code = NotFound desc = could not find container \"1aff9276a5b27e398b655a53c6e21367a4234d2b13bdd80ad9e7083bc1edc173\": container with ID starting with 1aff9276a5b27e398b655a53c6e21367a4234d2b13bdd80ad9e7083bc1edc173 not found: ID does not exist" Dec 10 15:50:11 crc kubenswrapper[4755]: I1210 15:50:11.773040 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7482badc-03b7-4594-86dd-755f4d346232" path="/var/lib/kubelet/pods/7482badc-03b7-4594-86dd-755f4d346232/volumes" Dec 10 15:50:13 crc kubenswrapper[4755]: I1210 15:50:13.013831 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bp4xt" Dec 10 15:50:13 crc kubenswrapper[4755]: I1210 15:50:13.013885 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bp4xt" Dec 10 15:50:13 crc kubenswrapper[4755]: I1210 15:50:13.064137 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bp4xt" Dec 10 15:50:13 crc kubenswrapper[4755]: I1210 15:50:13.285352 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bp4xt" Dec 10 15:50:14 crc kubenswrapper[4755]: I1210 15:50:14.485333 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bp4xt"] Dec 10 15:50:15 crc kubenswrapper[4755]: I1210 15:50:15.257023 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bp4xt" podUID="d510d56d-9e16-40ca-9359-6cc1b2a29927" containerName="registry-server" containerID="cri-o://479f3ab8982ff521e53a895d5bdd28790ab105dd1ba1ae17544216dad8b9006d" gracePeriod=2 Dec 10 15:50:15 crc kubenswrapper[4755]: I1210 15:50:15.778146 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bp4xt" Dec 10 15:50:15 crc kubenswrapper[4755]: I1210 15:50:15.925057 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d510d56d-9e16-40ca-9359-6cc1b2a29927-catalog-content\") pod \"d510d56d-9e16-40ca-9359-6cc1b2a29927\" (UID: \"d510d56d-9e16-40ca-9359-6cc1b2a29927\") " Dec 10 15:50:15 crc kubenswrapper[4755]: I1210 15:50:15.925202 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d510d56d-9e16-40ca-9359-6cc1b2a29927-utilities\") pod \"d510d56d-9e16-40ca-9359-6cc1b2a29927\" (UID: \"d510d56d-9e16-40ca-9359-6cc1b2a29927\") " Dec 10 15:50:15 crc kubenswrapper[4755]: I1210 15:50:15.925225 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhstm\" (UniqueName: \"kubernetes.io/projected/d510d56d-9e16-40ca-9359-6cc1b2a29927-kube-api-access-nhstm\") pod \"d510d56d-9e16-40ca-9359-6cc1b2a29927\" (UID: \"d510d56d-9e16-40ca-9359-6cc1b2a29927\") " Dec 10 15:50:15 crc kubenswrapper[4755]: I1210 15:50:15.926145 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d510d56d-9e16-40ca-9359-6cc1b2a29927-utilities" (OuterVolumeSpecName: "utilities") pod "d510d56d-9e16-40ca-9359-6cc1b2a29927" (UID: "d510d56d-9e16-40ca-9359-6cc1b2a29927"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:50:15 crc kubenswrapper[4755]: I1210 15:50:15.927528 4755 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d510d56d-9e16-40ca-9359-6cc1b2a29927-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 15:50:15 crc kubenswrapper[4755]: I1210 15:50:15.932793 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d510d56d-9e16-40ca-9359-6cc1b2a29927-kube-api-access-nhstm" (OuterVolumeSpecName: "kube-api-access-nhstm") pod "d510d56d-9e16-40ca-9359-6cc1b2a29927" (UID: "d510d56d-9e16-40ca-9359-6cc1b2a29927"). InnerVolumeSpecName "kube-api-access-nhstm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:50:15 crc kubenswrapper[4755]: I1210 15:50:15.978386 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d510d56d-9e16-40ca-9359-6cc1b2a29927-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d510d56d-9e16-40ca-9359-6cc1b2a29927" (UID: "d510d56d-9e16-40ca-9359-6cc1b2a29927"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:50:16 crc kubenswrapper[4755]: I1210 15:50:16.029839 4755 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d510d56d-9e16-40ca-9359-6cc1b2a29927-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 15:50:16 crc kubenswrapper[4755]: I1210 15:50:16.029885 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhstm\" (UniqueName: \"kubernetes.io/projected/d510d56d-9e16-40ca-9359-6cc1b2a29927-kube-api-access-nhstm\") on node \"crc\" DevicePath \"\"" Dec 10 15:50:16 crc kubenswrapper[4755]: I1210 15:50:16.269922 4755 generic.go:334] "Generic (PLEG): container finished" podID="d510d56d-9e16-40ca-9359-6cc1b2a29927" containerID="479f3ab8982ff521e53a895d5bdd28790ab105dd1ba1ae17544216dad8b9006d" exitCode=0 Dec 10 15:50:16 crc kubenswrapper[4755]: I1210 15:50:16.269980 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bp4xt" event={"ID":"d510d56d-9e16-40ca-9359-6cc1b2a29927","Type":"ContainerDied","Data":"479f3ab8982ff521e53a895d5bdd28790ab105dd1ba1ae17544216dad8b9006d"} Dec 10 15:50:16 crc kubenswrapper[4755]: I1210 15:50:16.270020 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bp4xt" event={"ID":"d510d56d-9e16-40ca-9359-6cc1b2a29927","Type":"ContainerDied","Data":"f9b0744f4f9d320d88fd26b4f4634bb91f8bdb30d758180201fbe78a83c027c5"} Dec 10 15:50:16 crc kubenswrapper[4755]: I1210 15:50:16.270051 4755 scope.go:117] "RemoveContainer" containerID="479f3ab8982ff521e53a895d5bdd28790ab105dd1ba1ae17544216dad8b9006d" Dec 10 15:50:16 crc kubenswrapper[4755]: I1210 15:50:16.270377 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bp4xt" Dec 10 15:50:16 crc kubenswrapper[4755]: I1210 15:50:16.304932 4755 scope.go:117] "RemoveContainer" containerID="f4685c9ed5e3823d5c7fc59cb84120c709d159ee0138d871a7fc446fe2a95ed9" Dec 10 15:50:16 crc kubenswrapper[4755]: I1210 15:50:16.309334 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bp4xt"] Dec 10 15:50:16 crc kubenswrapper[4755]: I1210 15:50:16.332847 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bp4xt"] Dec 10 15:50:16 crc kubenswrapper[4755]: I1210 15:50:16.364507 4755 scope.go:117] "RemoveContainer" containerID="1c11de928c1adf910bc2990cd90c71d4ceca47590e85410286830fd45c1ca5d4" Dec 10 15:50:16 crc kubenswrapper[4755]: I1210 15:50:16.395579 4755 scope.go:117] "RemoveContainer" containerID="479f3ab8982ff521e53a895d5bdd28790ab105dd1ba1ae17544216dad8b9006d" Dec 10 15:50:16 crc kubenswrapper[4755]: E1210 15:50:16.396086 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"479f3ab8982ff521e53a895d5bdd28790ab105dd1ba1ae17544216dad8b9006d\": container with ID starting with 479f3ab8982ff521e53a895d5bdd28790ab105dd1ba1ae17544216dad8b9006d not found: ID does not exist" containerID="479f3ab8982ff521e53a895d5bdd28790ab105dd1ba1ae17544216dad8b9006d" Dec 10 15:50:16 crc kubenswrapper[4755]: I1210 15:50:16.396130 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"479f3ab8982ff521e53a895d5bdd28790ab105dd1ba1ae17544216dad8b9006d"} err="failed to get container status \"479f3ab8982ff521e53a895d5bdd28790ab105dd1ba1ae17544216dad8b9006d\": rpc error: code = NotFound desc = could not find container \"479f3ab8982ff521e53a895d5bdd28790ab105dd1ba1ae17544216dad8b9006d\": container with ID starting with 479f3ab8982ff521e53a895d5bdd28790ab105dd1ba1ae17544216dad8b9006d not found: ID does not exist" Dec 10 15:50:16 crc kubenswrapper[4755]: I1210 15:50:16.396163 4755 scope.go:117] "RemoveContainer" containerID="f4685c9ed5e3823d5c7fc59cb84120c709d159ee0138d871a7fc446fe2a95ed9" Dec 10 15:50:16 crc kubenswrapper[4755]: E1210 15:50:16.396510 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4685c9ed5e3823d5c7fc59cb84120c709d159ee0138d871a7fc446fe2a95ed9\": container with ID starting with f4685c9ed5e3823d5c7fc59cb84120c709d159ee0138d871a7fc446fe2a95ed9 not found: ID does not exist" containerID="f4685c9ed5e3823d5c7fc59cb84120c709d159ee0138d871a7fc446fe2a95ed9" Dec 10 15:50:16 crc kubenswrapper[4755]: I1210 15:50:16.396565 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4685c9ed5e3823d5c7fc59cb84120c709d159ee0138d871a7fc446fe2a95ed9"} err="failed to get container status \"f4685c9ed5e3823d5c7fc59cb84120c709d159ee0138d871a7fc446fe2a95ed9\": rpc error: code = NotFound desc = could not find container \"f4685c9ed5e3823d5c7fc59cb84120c709d159ee0138d871a7fc446fe2a95ed9\": container with ID starting with f4685c9ed5e3823d5c7fc59cb84120c709d159ee0138d871a7fc446fe2a95ed9 not found: ID does not exist" Dec 10 15:50:16 crc kubenswrapper[4755]: I1210 15:50:16.396597 4755 scope.go:117] "RemoveContainer" containerID="1c11de928c1adf910bc2990cd90c71d4ceca47590e85410286830fd45c1ca5d4" Dec 10 15:50:16 crc kubenswrapper[4755]: E1210 15:50:16.396977 4755 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"1c11de928c1adf910bc2990cd90c71d4ceca47590e85410286830fd45c1ca5d4\": container with ID starting with 1c11de928c1adf910bc2990cd90c71d4ceca47590e85410286830fd45c1ca5d4 not found: ID does not exist" containerID="1c11de928c1adf910bc2990cd90c71d4ceca47590e85410286830fd45c1ca5d4" Dec 10 15:50:16 crc kubenswrapper[4755]: I1210 15:50:16.397004 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c11de928c1adf910bc2990cd90c71d4ceca47590e85410286830fd45c1ca5d4"} err="failed to get container status \"1c11de928c1adf910bc2990cd90c71d4ceca47590e85410286830fd45c1ca5d4\": rpc error: code = NotFound desc = could not find container \"1c11de928c1adf910bc2990cd90c71d4ceca47590e85410286830fd45c1ca5d4\": container with ID starting with 1c11de928c1adf910bc2990cd90c71d4ceca47590e85410286830fd45c1ca5d4 not found: ID does not exist" Dec 10 15:50:17 crc kubenswrapper[4755]: I1210 15:50:17.769272 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d510d56d-9e16-40ca-9359-6cc1b2a29927" path="/var/lib/kubelet/pods/d510d56d-9e16-40ca-9359-6cc1b2a29927/volumes" Dec 10 15:50:18 crc kubenswrapper[4755]: E1210 15:50:18.760792 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 15:50:21 crc kubenswrapper[4755]: I1210 15:50:21.757845 4755 scope.go:117] "RemoveContainer" containerID="008e8b27aa48ce8c618284ca4dccd38cf79c00478318f7aaaa34c326eeb5ea52" Dec 10 15:50:21 crc kubenswrapper[4755]: E1210 15:50:21.758277 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 15:50:25 crc kubenswrapper[4755]: E1210 15:50:25.759651 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 15:50:30 crc kubenswrapper[4755]: E1210 15:50:30.759942 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 15:50:34 crc kubenswrapper[4755]: I1210 15:50:34.758787 4755 scope.go:117] "RemoveContainer" containerID="008e8b27aa48ce8c618284ca4dccd38cf79c00478318f7aaaa34c326eeb5ea52" Dec 10 15:50:34 crc kubenswrapper[4755]: E1210 15:50:34.759577 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 15:50:37 crc kubenswrapper[4755]: E1210 15:50:37.760370 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 15:50:44 crc kubenswrapper[4755]: E1210 15:50:44.761395 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 15:50:46 crc kubenswrapper[4755]: I1210 15:50:46.757660 4755 scope.go:117] "RemoveContainer" containerID="008e8b27aa48ce8c618284ca4dccd38cf79c00478318f7aaaa34c326eeb5ea52" Dec 10 15:50:46 crc kubenswrapper[4755]: E1210 15:50:46.758383 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 15:50:48 crc kubenswrapper[4755]: E1210 15:50:48.760331 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 15:50:57 crc kubenswrapper[4755]: E1210 15:50:57.760110 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 15:50:58 crc kubenswrapper[4755]: I1210 15:50:58.757373 4755 scope.go:117] "RemoveContainer" containerID="008e8b27aa48ce8c618284ca4dccd38cf79c00478318f7aaaa34c326eeb5ea52" Dec 10 15:50:58 crc kubenswrapper[4755]: E1210 15:50:58.758001 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 15:51:03 crc kubenswrapper[4755]: E1210 15:51:03.773158 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" 
pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 15:51:04 crc kubenswrapper[4755]: I1210 15:51:04.484673 4755 scope.go:117] "RemoveContainer" containerID="ccfba5e44c738428b1efb1985fd1190a7901f21f3afd822dd48a953a331b8305" Dec 10 15:51:04 crc kubenswrapper[4755]: I1210 15:51:04.519414 4755 scope.go:117] "RemoveContainer" containerID="e980033572aa8e12de21370e135824f06455245f54bfdbce3d17e18d578b3250" Dec 10 15:51:04 crc kubenswrapper[4755]: I1210 15:51:04.567262 4755 scope.go:117] "RemoveContainer" containerID="6ee145ba3a7a44476a716e8d6e06467a204b1431c0f3807d9e82d3b83eb1b5e5" Dec 10 15:51:10 crc kubenswrapper[4755]: E1210 15:51:10.847905 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 10 15:51:10 crc kubenswrapper[4755]: E1210 15:51:10.848436 4755 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 10 15:51:10 crc kubenswrapper[4755]: E1210 15:51:10.848592 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mz4t5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-jfc28_openstack(998863b6-4f48-4c8b-8011-a40377686b99): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 10 15:51:10 crc kubenswrapper[4755]: E1210 15:51:10.849779 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 15:51:11 crc kubenswrapper[4755]: I1210 15:51:11.758312 4755 scope.go:117] "RemoveContainer" containerID="008e8b27aa48ce8c618284ca4dccd38cf79c00478318f7aaaa34c326eeb5ea52" Dec 10 15:51:11 crc kubenswrapper[4755]: E1210 15:51:11.758626 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 15:51:15 crc kubenswrapper[4755]: E1210 15:51:15.760023 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 15:51:25 crc kubenswrapper[4755]: I1210 15:51:25.758019 4755 scope.go:117] "RemoveContainer" containerID="008e8b27aa48ce8c618284ca4dccd38cf79c00478318f7aaaa34c326eeb5ea52" Dec 10 15:51:25 crc kubenswrapper[4755]: E1210 15:51:25.758841 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 15:51:25 crc kubenswrapper[4755]: E1210 15:51:25.759919 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 15:51:29 crc kubenswrapper[4755]: E1210 15:51:29.891650 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 10 15:51:29 crc kubenswrapper[4755]: E1210 15:51:29.892295 4755 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 10 15:51:29 crc kubenswrapper[4755]: E1210 15:51:29.892453 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5d4h5b7hfbh5ddh688h9ch55bh7chf6h5ddh68ch94h69h5c5h596h59bh569hfchc4h676hcbh64dhdbh57fh75h5c9h98h59ch679h566h77h9cq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hw9gj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(6d104bea-ecdc-4fe1-9861-fb1a19fce845): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 10 15:51:29 crc kubenswrapper[4755]: E1210 15:51:29.893664 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 15:51:37 crc kubenswrapper[4755]: E1210 15:51:37.761364 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 15:51:40 crc kubenswrapper[4755]: I1210 15:51:40.758051 4755 scope.go:117] "RemoveContainer" containerID="008e8b27aa48ce8c618284ca4dccd38cf79c00478318f7aaaa34c326eeb5ea52" Dec 10 15:51:40 crc kubenswrapper[4755]: E1210 15:51:40.758608 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 15:51:43 crc kubenswrapper[4755]: E1210 15:51:43.769159 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 15:51:50 crc kubenswrapper[4755]: E1210 15:51:50.760484 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 15:51:55 crc kubenswrapper[4755]: I1210 15:51:55.757815 4755 scope.go:117] "RemoveContainer" containerID="008e8b27aa48ce8c618284ca4dccd38cf79c00478318f7aaaa34c326eeb5ea52" Dec 10 15:51:55 crc kubenswrapper[4755]: E1210 15:51:55.758623 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 15:51:57 crc kubenswrapper[4755]: E1210 15:51:57.764481 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 15:52:04 crc kubenswrapper[4755]: E1210 15:52:04.760325 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 15:52:10 crc kubenswrapper[4755]: I1210 
15:52:10.757810 4755 scope.go:117] "RemoveContainer" containerID="008e8b27aa48ce8c618284ca4dccd38cf79c00478318f7aaaa34c326eeb5ea52" Dec 10 15:52:10 crc kubenswrapper[4755]: E1210 15:52:10.758519 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 15:52:11 crc kubenswrapper[4755]: E1210 15:52:11.762186 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 15:52:19 crc kubenswrapper[4755]: E1210 15:52:19.760489 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 15:52:22 crc kubenswrapper[4755]: E1210 15:52:22.760519 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 15:52:24 crc kubenswrapper[4755]: I1210 15:52:24.757564 4755 scope.go:117] "RemoveContainer" containerID="008e8b27aa48ce8c618284ca4dccd38cf79c00478318f7aaaa34c326eeb5ea52" Dec 10 15:52:24 crc kubenswrapper[4755]: E1210 15:52:24.758156 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 15:52:28 crc kubenswrapper[4755]: I1210 15:52:28.586083 4755 generic.go:334] "Generic (PLEG): container finished" podID="c36d5e87-d120-4bf2-8680-cd2c7634f1cf" containerID="2ae79d726ebb743c4ee8aa20ec4401422923266b0f1468d4784ad753be0f47d3" exitCode=0 Dec 10 15:52:28 crc kubenswrapper[4755]: I1210 15:52:28.586168 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jwpvp" event={"ID":"c36d5e87-d120-4bf2-8680-cd2c7634f1cf","Type":"ContainerDied","Data":"2ae79d726ebb743c4ee8aa20ec4401422923266b0f1468d4784ad753be0f47d3"} Dec 10 15:52:30 crc kubenswrapper[4755]: I1210 15:52:30.118580 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jwpvp" Dec 10 15:52:30 crc kubenswrapper[4755]: I1210 15:52:30.281955 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c36d5e87-d120-4bf2-8680-cd2c7634f1cf-bootstrap-combined-ca-bundle\") pod \"c36d5e87-d120-4bf2-8680-cd2c7634f1cf\" (UID: \"c36d5e87-d120-4bf2-8680-cd2c7634f1cf\") " Dec 10 15:52:30 crc kubenswrapper[4755]: I1210 15:52:30.282068 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c36d5e87-d120-4bf2-8680-cd2c7634f1cf-inventory\") pod \"c36d5e87-d120-4bf2-8680-cd2c7634f1cf\" (UID: \"c36d5e87-d120-4bf2-8680-cd2c7634f1cf\") " Dec 10 15:52:30 crc kubenswrapper[4755]: I1210 15:52:30.282149 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9j429\" (UniqueName: \"kubernetes.io/projected/c36d5e87-d120-4bf2-8680-cd2c7634f1cf-kube-api-access-9j429\") pod \"c36d5e87-d120-4bf2-8680-cd2c7634f1cf\" (UID: \"c36d5e87-d120-4bf2-8680-cd2c7634f1cf\") " Dec 10 15:52:30 crc kubenswrapper[4755]: I1210 15:52:30.282264 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c36d5e87-d120-4bf2-8680-cd2c7634f1cf-ssh-key\") pod \"c36d5e87-d120-4bf2-8680-cd2c7634f1cf\" (UID: \"c36d5e87-d120-4bf2-8680-cd2c7634f1cf\") " Dec 10 15:52:30 crc kubenswrapper[4755]: I1210 15:52:30.288610 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c36d5e87-d120-4bf2-8680-cd2c7634f1cf-kube-api-access-9j429" (OuterVolumeSpecName: "kube-api-access-9j429") pod "c36d5e87-d120-4bf2-8680-cd2c7634f1cf" (UID: "c36d5e87-d120-4bf2-8680-cd2c7634f1cf"). InnerVolumeSpecName "kube-api-access-9j429". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:52:30 crc kubenswrapper[4755]: I1210 15:52:30.292664 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c36d5e87-d120-4bf2-8680-cd2c7634f1cf-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "c36d5e87-d120-4bf2-8680-cd2c7634f1cf" (UID: "c36d5e87-d120-4bf2-8680-cd2c7634f1cf"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:52:30 crc kubenswrapper[4755]: I1210 15:52:30.317690 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c36d5e87-d120-4bf2-8680-cd2c7634f1cf-inventory" (OuterVolumeSpecName: "inventory") pod "c36d5e87-d120-4bf2-8680-cd2c7634f1cf" (UID: "c36d5e87-d120-4bf2-8680-cd2c7634f1cf"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:52:30 crc kubenswrapper[4755]: I1210 15:52:30.323124 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c36d5e87-d120-4bf2-8680-cd2c7634f1cf-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c36d5e87-d120-4bf2-8680-cd2c7634f1cf" (UID: "c36d5e87-d120-4bf2-8680-cd2c7634f1cf"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:52:30 crc kubenswrapper[4755]: I1210 15:52:30.384329 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9j429\" (UniqueName: \"kubernetes.io/projected/c36d5e87-d120-4bf2-8680-cd2c7634f1cf-kube-api-access-9j429\") on node \"crc\" DevicePath \"\"" Dec 10 15:52:30 crc kubenswrapper[4755]: I1210 15:52:30.384670 4755 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c36d5e87-d120-4bf2-8680-cd2c7634f1cf-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 10 15:52:30 crc kubenswrapper[4755]: I1210 15:52:30.384686 4755 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c36d5e87-d120-4bf2-8680-cd2c7634f1cf-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:52:30 crc kubenswrapper[4755]: I1210 15:52:30.384699 4755 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c36d5e87-d120-4bf2-8680-cd2c7634f1cf-inventory\") on node \"crc\" DevicePath \"\"" Dec 10 15:52:30 crc kubenswrapper[4755]: I1210 15:52:30.605809 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jwpvp" event={"ID":"c36d5e87-d120-4bf2-8680-cd2c7634f1cf","Type":"ContainerDied","Data":"4067039b32b41f41426a3cc0a22cea29f3c18aef326ac39199e25dd7a2a0037a"} Dec 10 15:52:30 crc kubenswrapper[4755]: I1210 15:52:30.605884 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jwpvp" Dec 10 15:52:30 crc kubenswrapper[4755]: I1210 15:52:30.605851 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4067039b32b41f41426a3cc0a22cea29f3c18aef326ac39199e25dd7a2a0037a" Dec 10 15:52:30 crc kubenswrapper[4755]: I1210 15:52:30.690638 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7fj8w"] Dec 10 15:52:30 crc kubenswrapper[4755]: E1210 15:52:30.691129 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d510d56d-9e16-40ca-9359-6cc1b2a29927" containerName="registry-server" Dec 10 15:52:30 crc kubenswrapper[4755]: I1210 15:52:30.691150 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="d510d56d-9e16-40ca-9359-6cc1b2a29927" containerName="registry-server" Dec 10 15:52:30 crc kubenswrapper[4755]: E1210 15:52:30.691170 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7482badc-03b7-4594-86dd-755f4d346232" containerName="extract-utilities" Dec 10 15:52:30 crc kubenswrapper[4755]: I1210 15:52:30.691178 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="7482badc-03b7-4594-86dd-755f4d346232" containerName="extract-utilities" Dec 10 15:52:30 crc kubenswrapper[4755]: E1210 15:52:30.691197 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d510d56d-9e16-40ca-9359-6cc1b2a29927" containerName="extract-content" Dec 10 15:52:30 crc kubenswrapper[4755]: I1210 15:52:30.691203 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="d510d56d-9e16-40ca-9359-6cc1b2a29927" containerName="extract-content" Dec 10 15:52:30 crc kubenswrapper[4755]: E1210 15:52:30.691225 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d510d56d-9e16-40ca-9359-6cc1b2a29927" containerName="extract-utilities" Dec 10 15:52:30 crc kubenswrapper[4755]: I1210 15:52:30.691230 
4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="d510d56d-9e16-40ca-9359-6cc1b2a29927" containerName="extract-utilities" Dec 10 15:52:30 crc kubenswrapper[4755]: E1210 15:52:30.691240 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c36d5e87-d120-4bf2-8680-cd2c7634f1cf" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 10 15:52:30 crc kubenswrapper[4755]: I1210 15:52:30.691247 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="c36d5e87-d120-4bf2-8680-cd2c7634f1cf" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 10 15:52:30 crc kubenswrapper[4755]: E1210 15:52:30.691264 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7482badc-03b7-4594-86dd-755f4d346232" containerName="registry-server" Dec 10 15:52:30 crc kubenswrapper[4755]: I1210 15:52:30.691269 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="7482badc-03b7-4594-86dd-755f4d346232" containerName="registry-server" Dec 10 15:52:30 crc kubenswrapper[4755]: E1210 15:52:30.691281 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7482badc-03b7-4594-86dd-755f4d346232" containerName="extract-content" Dec 10 15:52:30 crc kubenswrapper[4755]: I1210 15:52:30.691288 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="7482badc-03b7-4594-86dd-755f4d346232" containerName="extract-content" Dec 10 15:52:30 crc kubenswrapper[4755]: I1210 15:52:30.691479 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="7482badc-03b7-4594-86dd-755f4d346232" containerName="registry-server" Dec 10 15:52:30 crc kubenswrapper[4755]: I1210 15:52:30.691540 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="d510d56d-9e16-40ca-9359-6cc1b2a29927" containerName="registry-server" Dec 10 15:52:30 crc kubenswrapper[4755]: I1210 15:52:30.691560 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="c36d5e87-d120-4bf2-8680-cd2c7634f1cf" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 10 15:52:30 crc kubenswrapper[4755]: I1210 15:52:30.692401 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7fj8w" Dec 10 15:52:30 crc kubenswrapper[4755]: I1210 15:52:30.698144 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 10 15:52:30 crc kubenswrapper[4755]: I1210 15:52:30.698437 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-74mg7" Dec 10 15:52:30 crc kubenswrapper[4755]: I1210 15:52:30.698633 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 10 15:52:30 crc kubenswrapper[4755]: I1210 15:52:30.698828 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 10 15:52:30 crc kubenswrapper[4755]: I1210 15:52:30.711367 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7fj8w"] Dec 10 15:52:30 crc kubenswrapper[4755]: I1210 15:52:30.791774 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cce50278-7a20-499b-bbe8-7304224cc6e4-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7fj8w\" (UID: \"cce50278-7a20-499b-bbe8-7304224cc6e4\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7fj8w" Dec 10 15:52:30 crc kubenswrapper[4755]: I1210 15:52:30.791866 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lfh5\" (UniqueName: \"kubernetes.io/projected/cce50278-7a20-499b-bbe8-7304224cc6e4-kube-api-access-8lfh5\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7fj8w\" (UID: \"cce50278-7a20-499b-bbe8-7304224cc6e4\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7fj8w" Dec 10 15:52:30 crc kubenswrapper[4755]: I1210 15:52:30.791941 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cce50278-7a20-499b-bbe8-7304224cc6e4-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7fj8w\" (UID: \"cce50278-7a20-499b-bbe8-7304224cc6e4\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7fj8w" Dec 10 15:52:30 crc kubenswrapper[4755]: I1210 15:52:30.893981 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cce50278-7a20-499b-bbe8-7304224cc6e4-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7fj8w\" (UID: \"cce50278-7a20-499b-bbe8-7304224cc6e4\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7fj8w" Dec 10 15:52:30 crc kubenswrapper[4755]: I1210 15:52:30.894209 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cce50278-7a20-499b-bbe8-7304224cc6e4-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7fj8w\" (UID: \"cce50278-7a20-499b-bbe8-7304224cc6e4\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7fj8w" Dec 10 15:52:30 crc kubenswrapper[4755]: I1210 15:52:30.894274 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lfh5\" (UniqueName: \"kubernetes.io/projected/cce50278-7a20-499b-bbe8-7304224cc6e4-kube-api-access-8lfh5\") pod 
\"download-cache-edpm-deployment-openstack-edpm-ipam-7fj8w\" (UID: \"cce50278-7a20-499b-bbe8-7304224cc6e4\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7fj8w" Dec 10 15:52:30 crc kubenswrapper[4755]: I1210 15:52:30.898377 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cce50278-7a20-499b-bbe8-7304224cc6e4-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7fj8w\" (UID: \"cce50278-7a20-499b-bbe8-7304224cc6e4\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7fj8w" Dec 10 15:52:30 crc kubenswrapper[4755]: I1210 15:52:30.899917 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cce50278-7a20-499b-bbe8-7304224cc6e4-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7fj8w\" (UID: \"cce50278-7a20-499b-bbe8-7304224cc6e4\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7fj8w" Dec 10 15:52:30 crc kubenswrapper[4755]: I1210 15:52:30.909905 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lfh5\" (UniqueName: \"kubernetes.io/projected/cce50278-7a20-499b-bbe8-7304224cc6e4-kube-api-access-8lfh5\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7fj8w\" (UID: \"cce50278-7a20-499b-bbe8-7304224cc6e4\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7fj8w" Dec 10 15:52:31 crc kubenswrapper[4755]: I1210 15:52:31.014344 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7fj8w" Dec 10 15:52:31 crc kubenswrapper[4755]: I1210 15:52:31.047649 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-v5jhd"] Dec 10 15:52:31 crc kubenswrapper[4755]: I1210 15:52:31.061121 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-v5jhd"] Dec 10 15:52:31 crc kubenswrapper[4755]: I1210 15:52:31.542229 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7fj8w"] Dec 10 15:52:31 crc kubenswrapper[4755]: I1210 15:52:31.617075 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7fj8w" event={"ID":"cce50278-7a20-499b-bbe8-7304224cc6e4","Type":"ContainerStarted","Data":"e197e3e243954e5d58a1d1ebf9aa4ca38db83a811d5a41bf40c09098d4329967"} Dec 10 15:52:31 crc kubenswrapper[4755]: I1210 15:52:31.768838 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f3fe5d6-3bfb-4f18-8910-f09974b19e37" path="/var/lib/kubelet/pods/6f3fe5d6-3bfb-4f18-8910-f09974b19e37/volumes" Dec 10 15:52:32 crc kubenswrapper[4755]: I1210 15:52:32.628181 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7fj8w" event={"ID":"cce50278-7a20-499b-bbe8-7304224cc6e4","Type":"ContainerStarted","Data":"d3c945a565aa341d0b3159e4e3549c4d853fb4b0e2e1f1f49a8a34da5d0c0e65"} Dec 10 15:52:32 crc kubenswrapper[4755]: I1210 15:52:32.650486 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7fj8w" podStartSLOduration=1.943290263 podStartE2EDuration="2.650444343s" podCreationTimestamp="2025-12-10 15:52:30 +0000 UTC" firstStartedPulling="2025-12-10 15:52:31.544509406 +0000 UTC m=+1748.145393038" 
lastFinishedPulling="2025-12-10 15:52:32.251663486 +0000 UTC m=+1748.852547118" observedRunningTime="2025-12-10 15:52:32.642357423 +0000 UTC m=+1749.243241065" watchObservedRunningTime="2025-12-10 15:52:32.650444343 +0000 UTC m=+1749.251327975" Dec 10 15:52:33 crc kubenswrapper[4755]: I1210 15:52:33.030811 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-1696-account-create-update-v9pxt"] Dec 10 15:52:33 crc kubenswrapper[4755]: I1210 15:52:33.045515 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-1696-account-create-update-v9pxt"] Dec 10 15:52:33 crc kubenswrapper[4755]: E1210 15:52:33.789355 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 15:52:33 crc kubenswrapper[4755]: I1210 15:52:33.792622 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="857e88aa-b989-4d6e-acbf-0309f0297a25" path="/var/lib/kubelet/pods/857e88aa-b989-4d6e-acbf-0309f0297a25/volumes" Dec 10 15:52:34 crc kubenswrapper[4755]: E1210 15:52:34.760591 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 15:52:35 crc kubenswrapper[4755]: I1210 15:52:35.029293 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-1b56-account-create-update-l28rl"] Dec 10 15:52:35 crc kubenswrapper[4755]: I1210 15:52:35.041135 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-jjdr5"] Dec 10 15:52:35 crc kubenswrapper[4755]: I1210 15:52:35.050726 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-1b56-account-create-update-l28rl"] Dec 10 15:52:35 crc kubenswrapper[4755]: I1210 15:52:35.060362 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-jjdr5"] Dec 10 15:52:35 crc kubenswrapper[4755]: I1210 15:52:35.769867 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4dce77c7-9c60-4919-b20d-a954178e0b0c" path="/var/lib/kubelet/pods/4dce77c7-9c60-4919-b20d-a954178e0b0c/volumes" Dec 10 15:52:35 crc kubenswrapper[4755]: I1210 15:52:35.771234 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0d25e1b-5c7f-468c-9723-4030952bc88c" path="/var/lib/kubelet/pods/c0d25e1b-5c7f-468c-9723-4030952bc88c/volumes" Dec 10 15:52:39 crc kubenswrapper[4755]: I1210 15:52:39.758094 4755 scope.go:117] "RemoveContainer" containerID="008e8b27aa48ce8c618284ca4dccd38cf79c00478318f7aaaa34c326eeb5ea52" Dec 10 15:52:39 crc kubenswrapper[4755]: E1210 15:52:39.758844 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 15:52:45 crc kubenswrapper[4755]: 
I1210 15:52:45.053521 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-d0ed-account-create-update-ck22x"] Dec 10 15:52:45 crc kubenswrapper[4755]: I1210 15:52:45.069638 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-d0ed-account-create-update-ck22x"] Dec 10 15:52:45 crc kubenswrapper[4755]: I1210 15:52:45.092588 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-g7qh8"] Dec 10 15:52:45 crc kubenswrapper[4755]: I1210 15:52:45.104688 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-g7qh8"] Dec 10 15:52:45 crc kubenswrapper[4755]: I1210 15:52:45.772597 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="224b1eb4-d368-436c-90b1-fe760dc26591" path="/var/lib/kubelet/pods/224b1eb4-d368-436c-90b1-fe760dc26591/volumes" Dec 10 15:52:45 crc kubenswrapper[4755]: I1210 15:52:45.773388 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9b77d20-983f-4ca4-9fa6-67ab4fb8d3ba" path="/var/lib/kubelet/pods/c9b77d20-983f-4ca4-9fa6-67ab4fb8d3ba/volumes" Dec 10 15:52:46 crc kubenswrapper[4755]: E1210 15:52:46.759735 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 15:52:49 crc kubenswrapper[4755]: E1210 15:52:49.760452 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 15:52:52 crc kubenswrapper[4755]: I1210 15:52:52.759006 4755 scope.go:117] "RemoveContainer" containerID="008e8b27aa48ce8c618284ca4dccd38cf79c00478318f7aaaa34c326eeb5ea52" Dec 10 15:52:52 crc kubenswrapper[4755]: E1210 15:52:52.760297 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 15:53:01 crc kubenswrapper[4755]: E1210 15:53:01.759635 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 15:53:02 crc kubenswrapper[4755]: E1210 15:53:02.759910 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 15:53:04 crc kubenswrapper[4755]: I1210 15:53:04.723700 4755 scope.go:117] "RemoveContainer" 
containerID="74ba3f7506a0df21d04362279379ae7b657a6c2a8d7cea939589543a6ce287d0" Dec 10 15:53:04 crc kubenswrapper[4755]: I1210 15:53:04.758272 4755 scope.go:117] "RemoveContainer" containerID="008e8b27aa48ce8c618284ca4dccd38cf79c00478318f7aaaa34c326eeb5ea52" Dec 10 15:53:04 crc kubenswrapper[4755]: E1210 15:53:04.758828 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 15:53:04 crc kubenswrapper[4755]: I1210 15:53:04.762885 4755 scope.go:117] "RemoveContainer" containerID="c571038108eba45f39db641147369e9739c4d5ef301e1162b3ec403e5a73b7b0" Dec 10 15:53:04 crc kubenswrapper[4755]: I1210 15:53:04.821819 4755 scope.go:117] "RemoveContainer" containerID="2f61206613b11c920d9e7fafb07f8c923553a7345715b895d4afc705f9949642" Dec 10 15:53:04 crc kubenswrapper[4755]: I1210 15:53:04.869916 4755 scope.go:117] "RemoveContainer" containerID="6a90258492774ed628d3c0326176405b1e73b5605a84995a033402c90ac3f3f4" Dec 10 15:53:04 crc kubenswrapper[4755]: I1210 15:53:04.919089 4755 scope.go:117] "RemoveContainer" containerID="c8b67f6f92b34f8ca46c888205ae21fe2e9c2a7675821b4f3a8e5032b999adc4" Dec 10 15:53:04 crc kubenswrapper[4755]: I1210 15:53:04.970134 4755 scope.go:117] "RemoveContainer" containerID="981625c691068530e3d274326788e12ce542fa2e25b890d1c1f0c57499eb4e28" Dec 10 15:53:05 crc kubenswrapper[4755]: I1210 15:53:05.046020 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-fxbjx"] Dec 10 15:53:05 crc kubenswrapper[4755]: I1210 15:53:05.055765 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-fxbjx"] Dec 10 15:53:05 crc kubenswrapper[4755]: I1210 15:53:05.771086 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c9e6f66-2358-4311-98ff-066fe8edd720" path="/var/lib/kubelet/pods/5c9e6f66-2358-4311-98ff-066fe8edd720/volumes" Dec 10 15:53:08 crc kubenswrapper[4755]: I1210 15:53:08.042524 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-84g2q"] Dec 10 15:53:08 crc kubenswrapper[4755]: I1210 15:53:08.061459 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-rs5rh"] Dec 10 15:53:08 crc kubenswrapper[4755]: I1210 15:53:08.074847 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-db-create-2872x"] Dec 10 15:53:08 crc kubenswrapper[4755]: I1210 15:53:08.086212 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-84g2q"] Dec 10 15:53:08 crc kubenswrapper[4755]: I1210 15:53:08.095563 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-9b26-account-create-update-q9h4x"] Dec 10 15:53:08 crc kubenswrapper[4755]: I1210 15:53:08.105107 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-db-create-2872x"] Dec 10 15:53:08 crc kubenswrapper[4755]: I1210 15:53:08.115490 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-rs5rh"] Dec 10 15:53:08 crc kubenswrapper[4755]: I1210 15:53:08.125236 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-9b26-account-create-update-q9h4x"] Dec 10 15:53:08 crc 
kubenswrapper[4755]: I1210 15:53:08.135740 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-eb88-account-create-update-t2zx9"] Dec 10 15:53:08 crc kubenswrapper[4755]: I1210 15:53:08.145102 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-2b7c-account-create-update-dtbb2"] Dec 10 15:53:08 crc kubenswrapper[4755]: I1210 15:53:08.153701 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-eb88-account-create-update-t2zx9"] Dec 10 15:53:08 crc kubenswrapper[4755]: I1210 15:53:08.162421 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-a190-account-create-update-brt5j"] Dec 10 15:53:08 crc kubenswrapper[4755]: I1210 15:53:08.171366 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-a190-account-create-update-brt5j"] Dec 10 15:53:08 crc kubenswrapper[4755]: I1210 15:53:08.180087 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-2b7c-account-create-update-dtbb2"] Dec 10 15:53:09 crc kubenswrapper[4755]: I1210 15:53:09.769414 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="006b2612-d428-4655-9b11-7805124e026d" path="/var/lib/kubelet/pods/006b2612-d428-4655-9b11-7805124e026d/volumes" Dec 10 15:53:09 crc kubenswrapper[4755]: I1210 15:53:09.770708 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b387caa-3f36-4771-8046-f41a609fc2ba" path="/var/lib/kubelet/pods/4b387caa-3f36-4771-8046-f41a609fc2ba/volumes" Dec 10 15:53:09 crc kubenswrapper[4755]: I1210 15:53:09.771714 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b53b622-751c-4098-95ef-86d7bbb6f03b" path="/var/lib/kubelet/pods/4b53b622-751c-4098-95ef-86d7bbb6f03b/volumes" Dec 10 15:53:09 crc kubenswrapper[4755]: I1210 15:53:09.772658 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c5b35e4-4153-4f26-89bb-80e480230209" path="/var/lib/kubelet/pods/5c5b35e4-4153-4f26-89bb-80e480230209/volumes" Dec 10 15:53:09 crc kubenswrapper[4755]: I1210 15:53:09.774135 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d99bcd9-160f-4acf-b5a2-048e2cf4e69d" path="/var/lib/kubelet/pods/5d99bcd9-160f-4acf-b5a2-048e2cf4e69d/volumes" Dec 10 15:53:09 crc kubenswrapper[4755]: I1210 15:53:09.775100 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e112f6c-6906-4c34-9a05-ed827a4cd2ff" path="/var/lib/kubelet/pods/7e112f6c-6906-4c34-9a05-ed827a4cd2ff/volumes" Dec 10 15:53:09 crc kubenswrapper[4755]: I1210 15:53:09.775962 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8fb0565-641d-4083-a004-068c8f2da61f" path="/var/lib/kubelet/pods/d8fb0565-641d-4083-a004-068c8f2da61f/volumes" Dec 10 15:53:14 crc kubenswrapper[4755]: E1210 15:53:14.761812 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 15:53:14 crc kubenswrapper[4755]: E1210 15:53:14.762681 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" 
pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 15:53:16 crc kubenswrapper[4755]: I1210 15:53:16.033164 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-6m9cc"] Dec 10 15:53:16 crc kubenswrapper[4755]: I1210 15:53:16.046761 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-6m9cc"] Dec 10 15:53:17 crc kubenswrapper[4755]: I1210 15:53:17.759774 4755 scope.go:117] "RemoveContainer" containerID="008e8b27aa48ce8c618284ca4dccd38cf79c00478318f7aaaa34c326eeb5ea52" Dec 10 15:53:17 crc kubenswrapper[4755]: E1210 15:53:17.760893 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 15:53:17 crc kubenswrapper[4755]: I1210 15:53:17.771517 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86e3285c-eddd-4bd4-bca9-8d9ccf2019e7" path="/var/lib/kubelet/pods/86e3285c-eddd-4bd4-bca9-8d9ccf2019e7/volumes" Dec 10 15:53:26 crc kubenswrapper[4755]: E1210 15:53:26.760117 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 15:53:29 crc kubenswrapper[4755]: I1210 15:53:29.758510 4755 scope.go:117] "RemoveContainer" containerID="008e8b27aa48ce8c618284ca4dccd38cf79c00478318f7aaaa34c326eeb5ea52" Dec 10 15:53:29 crc kubenswrapper[4755]: E1210 15:53:29.759373 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 15:53:29 crc kubenswrapper[4755]: E1210 15:53:29.760271 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 15:53:40 crc kubenswrapper[4755]: E1210 15:53:40.760167 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 15:53:40 crc kubenswrapper[4755]: E1210 15:53:40.760176 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" 
pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 15:53:41 crc kubenswrapper[4755]: I1210 15:53:41.758160 4755 scope.go:117] "RemoveContainer" containerID="008e8b27aa48ce8c618284ca4dccd38cf79c00478318f7aaaa34c326eeb5ea52" Dec 10 15:53:41 crc kubenswrapper[4755]: E1210 15:53:41.758802 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 15:53:53 crc kubenswrapper[4755]: I1210 15:53:53.049506 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-pth9b"] Dec 10 15:53:53 crc kubenswrapper[4755]: I1210 15:53:53.061173 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-pth9b"] Dec 10 15:53:53 crc kubenswrapper[4755]: I1210 15:53:53.765729 4755 scope.go:117] "RemoveContainer" containerID="008e8b27aa48ce8c618284ca4dccd38cf79c00478318f7aaaa34c326eeb5ea52" Dec 10 15:53:53 crc kubenswrapper[4755]: E1210 15:53:53.766079 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 15:53:53 crc kubenswrapper[4755]: I1210 15:53:53.783038 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d949f1d-5cb7-49e0-aa0b-52a615dfe4b5" path="/var/lib/kubelet/pods/5d949f1d-5cb7-49e0-aa0b-52a615dfe4b5/volumes" Dec 10 15:53:55 crc kubenswrapper[4755]: E1210 15:53:55.760260 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 15:53:55 crc kubenswrapper[4755]: E1210 15:53:55.886970 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 10 15:53:55 crc kubenswrapper[4755]: E1210 15:53:55.887078 4755 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 10 15:53:55 crc kubenswrapper[4755]: E1210 15:53:55.887228 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mz4t5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-jfc28_openstack(998863b6-4f48-4c8b-8011-a40377686b99): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 10 15:53:55 crc kubenswrapper[4755]: E1210 15:53:55.889122 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 15:54:04 crc kubenswrapper[4755]: I1210 15:54:04.757754 4755 scope.go:117] "RemoveContainer" containerID="008e8b27aa48ce8c618284ca4dccd38cf79c00478318f7aaaa34c326eeb5ea52" Dec 10 15:54:04 crc kubenswrapper[4755]: E1210 15:54:04.758675 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 15:54:05 crc kubenswrapper[4755]: I1210 15:54:05.035960 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-9l6v9"] Dec 10 15:54:05 crc kubenswrapper[4755]: I1210 15:54:05.048931 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-9l6v9"] Dec 10 15:54:05 crc kubenswrapper[4755]: I1210 15:54:05.166694 4755 scope.go:117] "RemoveContainer" containerID="b60495d86167df4328db1da5dbc39e9fb8da8762367164584bd9c6e44d926493" Dec 10 15:54:05 crc kubenswrapper[4755]: I1210 15:54:05.194380 4755 scope.go:117] "RemoveContainer" containerID="b68cfed6a96d957869624d241d8ebaf0a5235ef279b8fb377b567ee4bd898b64" Dec 10 15:54:05 crc kubenswrapper[4755]: I1210 15:54:05.271084 4755 scope.go:117] "RemoveContainer" containerID="e0f33564831859754eb1d2efb4a50c6195080ce672e09ed18acc6a683324fbd3" Dec 10 15:54:05 crc kubenswrapper[4755]: I1210 15:54:05.312737 4755 scope.go:117] "RemoveContainer" containerID="260f253881fd52021fdfb65abe24349cbd6c938147e2574bf76715507c6ad9c0" Dec 10 15:54:05 crc kubenswrapper[4755]: I1210 15:54:05.363883 4755 scope.go:117] "RemoveContainer" containerID="0cdbd04441704faaf6f184a7008fcd33e82429220859169d7a2e08cdd2d6febd" Dec 10 15:54:05 crc kubenswrapper[4755]: I1210 15:54:05.418621 4755 scope.go:117] "RemoveContainer" containerID="bb7e8108b253bf2aa90140899f0db1509d53c3030162ec47375974cdca5a60c2" Dec 10 15:54:05 crc kubenswrapper[4755]: I1210 15:54:05.474542 4755 scope.go:117] "RemoveContainer" containerID="9f80471a6d75d13b8062f27f69f866d01e00835c6e4dab5ad8395ca69344d598" Dec 10 15:54:05 crc kubenswrapper[4755]: I1210 15:54:05.500536 4755 scope.go:117] "RemoveContainer" containerID="52acf371ab5abc9a88cf3be82423c8e55a5ad893555c09f94c508b3d3e2e64b2" Dec 10 15:54:05 crc kubenswrapper[4755]: I1210 15:54:05.528345 4755 scope.go:117] "RemoveContainer" containerID="d2978f6e5bf73599a55d10ea09c7ee3cead55d0d2d604534c640b2a9be1b78d1" Dec 10 15:54:05 crc kubenswrapper[4755]: I1210 15:54:05.555272 4755 scope.go:117] "RemoveContainer" containerID="d99d38c264a34cb298485ed0e13ebafece8b9b5d7d1ec13c7ad9fa3bac9d019b" Dec 10 15:54:05 crc kubenswrapper[4755]: I1210 15:54:05.774288 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7e81227-e01b-4851-ac1f-d4ff480c0993" path="/var/lib/kubelet/pods/c7e81227-e01b-4851-ac1f-d4ff480c0993/volumes" Dec 10 15:54:06 crc kubenswrapper[4755]: E1210 15:54:06.758932 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 15:54:09 crc kubenswrapper[4755]: E1210 15:54:09.760394 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 15:54:10 crc kubenswrapper[4755]: I1210 15:54:10.028482 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-zttv2"] Dec 10 15:54:10 crc kubenswrapper[4755]: I1210 15:54:10.038270 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-wstp4"] Dec 10 15:54:10 crc kubenswrapper[4755]: I1210 15:54:10.047658 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-zttv2"] Dec 10 15:54:10 crc kubenswrapper[4755]: I1210 15:54:10.056117 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-wstp4"] Dec 10 15:54:11 crc kubenswrapper[4755]: I1210 15:54:11.029788 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-6fnmv"] Dec 10 15:54:11 crc kubenswrapper[4755]: I1210 15:54:11.040785 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-6fnmv"] Dec 10 15:54:11 crc kubenswrapper[4755]: I1210 15:54:11.769891 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0009273d-a6d2-43da-99f9-993f2aba0e3a" path="/var/lib/kubelet/pods/0009273d-a6d2-43da-99f9-993f2aba0e3a/volumes" Dec 10 15:54:11 crc kubenswrapper[4755]: I1210 15:54:11.771497 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07e5d790-41c9-4f66-87e4-6088fe8bbc8f" path="/var/lib/kubelet/pods/07e5d790-41c9-4f66-87e4-6088fe8bbc8f/volumes" Dec 10 15:54:11 crc kubenswrapper[4755]: I1210 15:54:11.772280 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3821a978-80ec-4434-a871-ed026186f498" path="/var/lib/kubelet/pods/3821a978-80ec-4434-a871-ed026186f498/volumes" Dec 10 15:54:19 crc kubenswrapper[4755]: I1210 15:54:19.757879 4755 scope.go:117] "RemoveContainer" containerID="008e8b27aa48ce8c618284ca4dccd38cf79c00478318f7aaaa34c326eeb5ea52" Dec 10 15:54:19 crc kubenswrapper[4755]: E1210 15:54:19.760218 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 15:54:19 crc kubenswrapper[4755]: I1210 15:54:19.761238 4755 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 10 15:54:19 crc kubenswrapper[4755]: E1210 15:54:19.886825 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 10 15:54:19 crc kubenswrapper[4755]: E1210 15:54:19.886892 4755 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 10 15:54:19 crc kubenswrapper[4755]: E1210 15:54:19.887042 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5d4h5b7hfbh5ddh688h9ch55bh7chf6h5ddh68ch94h69h5c5h596h59bh569hfchc4h676hcbh64dhdbh57fh75h5c9h98h59ch679h566h77h9cq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hw9gj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(6d104bea-ecdc-4fe1-9861-fb1a19fce845): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Dec 10 15:54:19 crc kubenswrapper[4755]: E1210 15:54:19.888285 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 15:54:22 crc kubenswrapper[4755]: E1210 15:54:22.759971 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 15:54:31 crc kubenswrapper[4755]: E1210 15:54:31.759203 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 15:54:33 crc kubenswrapper[4755]: I1210 15:54:33.763291 4755 scope.go:117] "RemoveContainer" containerID="008e8b27aa48ce8c618284ca4dccd38cf79c00478318f7aaaa34c326eeb5ea52" Dec 10 15:54:33 crc kubenswrapper[4755]: E1210 15:54:33.763902 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 15:54:35 crc kubenswrapper[4755]: E1210 15:54:35.771596 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 15:54:36 crc kubenswrapper[4755]: I1210 15:54:36.052929 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-cwjsz"] Dec 10 15:54:36 crc kubenswrapper[4755]: I1210 15:54:36.064350 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-cwjsz"] Dec 10 15:54:37 crc kubenswrapper[4755]: I1210 15:54:37.777112 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b9ab1e5-2daa-4057-84e3-50bef68bbaca" path="/var/lib/kubelet/pods/9b9ab1e5-2daa-4057-84e3-50bef68bbaca/volumes" Dec 10 15:54:43 crc kubenswrapper[4755]: I1210 15:54:43.033326 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-storageinit-796tf"] Dec 10 15:54:43 crc kubenswrapper[4755]: I1210 15:54:43.046956 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-storageinit-796tf"] Dec 10 15:54:43 crc kubenswrapper[4755]: E1210 15:54:43.768170 4755 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 15:54:43 crc kubenswrapper[4755]: I1210 15:54:43.771362 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50f8ada4-b157-4f73-ae6b-876844b71ced" path="/var/lib/kubelet/pods/50f8ada4-b157-4f73-ae6b-876844b71ced/volumes" Dec 10 15:54:45 crc kubenswrapper[4755]: I1210 15:54:45.758184 4755 scope.go:117] "RemoveContainer" containerID="008e8b27aa48ce8c618284ca4dccd38cf79c00478318f7aaaa34c326eeb5ea52" Dec 10 15:54:45 crc kubenswrapper[4755]: E1210 15:54:45.758899 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 15:54:48 crc kubenswrapper[4755]: E1210 15:54:48.760086 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 15:54:54 crc kubenswrapper[4755]: E1210 15:54:54.760373 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 15:54:58 crc kubenswrapper[4755]: I1210 15:54:58.757424 4755 scope.go:117] "RemoveContainer" containerID="008e8b27aa48ce8c618284ca4dccd38cf79c00478318f7aaaa34c326eeb5ea52" Dec 10 15:54:58 crc kubenswrapper[4755]: E1210 15:54:58.757994 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 15:55:00 crc kubenswrapper[4755]: E1210 15:55:00.760325 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 15:55:05 crc kubenswrapper[4755]: I1210 15:55:05.778865 4755 scope.go:117] "RemoveContainer" containerID="b4742b126f04bd73fc89a611417ca7e607aa6268f96f09a20199c1a37ce9f841" Dec 10 15:55:05 crc kubenswrapper[4755]: I1210 15:55:05.832630 4755 scope.go:117] "RemoveContainer" containerID="815cb3dcfec50a0281d6b4e6b81c6ac12f56545437ec542718836fab236e026d" Dec 10 15:55:05 crc kubenswrapper[4755]: I1210 
15:55:05.880738 4755 scope.go:117] "RemoveContainer" containerID="998291edebca64a51c3c48de3b26f728fbd5dc52fab841d39b6d5bc065326347" Dec 10 15:55:05 crc kubenswrapper[4755]: I1210 15:55:05.927671 4755 scope.go:117] "RemoveContainer" containerID="344329be481337c544c714666aee3668da2db986f3b6e94930a4c3b05a83d634" Dec 10 15:55:05 crc kubenswrapper[4755]: I1210 15:55:05.972655 4755 scope.go:117] "RemoveContainer" containerID="d52536c354af758c503c73ece6e28c53b5786a281589f9ca634611750884ffef" Dec 10 15:55:06 crc kubenswrapper[4755]: I1210 15:55:06.030738 4755 scope.go:117] "RemoveContainer" containerID="85cfc7d93b2a05f6174a89957198e855ab5e29e641e81a0927b2d05109cf98e8" Dec 10 15:55:06 crc kubenswrapper[4755]: E1210 15:55:06.759654 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 15:55:11 crc kubenswrapper[4755]: I1210 15:55:11.758754 4755 scope.go:117] "RemoveContainer" containerID="008e8b27aa48ce8c618284ca4dccd38cf79c00478318f7aaaa34c326eeb5ea52" Dec 10 15:55:12 crc kubenswrapper[4755]: I1210 15:55:12.292571 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" event={"ID":"b132a8b9-1c99-414d-8773-229bf36b305d","Type":"ContainerStarted","Data":"848464b372da64a2ff4b9b5d8f68e30f7b70ba91c0c9790e6358c2e46556c416"} Dec 10 15:55:13 crc kubenswrapper[4755]: E1210 15:55:13.766783 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 15:55:15 crc kubenswrapper[4755]: I1210 15:55:15.043074 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-hdt77"] Dec 10 15:55:15 crc kubenswrapper[4755]: I1210 15:55:15.052038 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-625d-account-create-update-mn5ph"] Dec 10 15:55:15 crc kubenswrapper[4755]: I1210 15:55:15.063298 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-hdt77"] Dec 10 15:55:15 crc kubenswrapper[4755]: I1210 15:55:15.076922 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-dca9-account-create-update-lw8s5"] Dec 10 15:55:15 crc kubenswrapper[4755]: I1210 15:55:15.086545 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-625d-account-create-update-mn5ph"] Dec 10 15:55:15 crc kubenswrapper[4755]: I1210 15:55:15.097115 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-64rbl"] Dec 10 15:55:15 crc kubenswrapper[4755]: I1210 15:55:15.105880 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-jrbpt"] Dec 10 15:55:15 crc kubenswrapper[4755]: I1210 15:55:15.115282 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-dca9-account-create-update-lw8s5"] Dec 10 15:55:15 crc kubenswrapper[4755]: I1210 15:55:15.124033 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-64rbl"] Dec 10 15:55:15 crc 
kubenswrapper[4755]: I1210 15:55:15.133263 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-jrbpt"] Dec 10 15:55:15 crc kubenswrapper[4755]: I1210 15:55:15.771676 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ce71806-31d9-482e-860b-3fceb024e17f" path="/var/lib/kubelet/pods/0ce71806-31d9-482e-860b-3fceb024e17f/volumes" Dec 10 15:55:15 crc kubenswrapper[4755]: I1210 15:55:15.772348 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3347b9d4-ec43-4f20-a896-4c3f26ecb892" path="/var/lib/kubelet/pods/3347b9d4-ec43-4f20-a896-4c3f26ecb892/volumes" Dec 10 15:55:15 crc kubenswrapper[4755]: I1210 15:55:15.772936 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5dbb4b56-2d08-40ba-8cce-70f548573384" path="/var/lib/kubelet/pods/5dbb4b56-2d08-40ba-8cce-70f548573384/volumes" Dec 10 15:55:15 crc kubenswrapper[4755]: I1210 15:55:15.773492 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="833228f4-cb63-4a39-aada-9481b9cdb3e5" path="/var/lib/kubelet/pods/833228f4-cb63-4a39-aada-9481b9cdb3e5/volumes" Dec 10 15:55:15 crc kubenswrapper[4755]: I1210 15:55:15.774599 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7364f60-3c77-4234-9bed-d0e8f92d0bca" path="/var/lib/kubelet/pods/c7364f60-3c77-4234-9bed-d0e8f92d0bca/volumes" Dec 10 15:55:16 crc kubenswrapper[4755]: I1210 15:55:16.053348 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-a9ee-account-create-update-zctzv"] Dec 10 15:55:16 crc kubenswrapper[4755]: I1210 15:55:16.069222 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-a9ee-account-create-update-zctzv"] Dec 10 15:55:17 crc kubenswrapper[4755]: I1210 15:55:17.771134 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8002c1bd-43bb-4d3d-b06a-e391505af5b5" path="/var/lib/kubelet/pods/8002c1bd-43bb-4d3d-b06a-e391505af5b5/volumes" Dec 10 15:55:21 crc kubenswrapper[4755]: E1210 15:55:21.760895 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 15:55:25 crc kubenswrapper[4755]: E1210 15:55:25.761962 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 15:55:33 crc kubenswrapper[4755]: E1210 15:55:33.807776 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 15:55:40 crc kubenswrapper[4755]: E1210 15:55:40.759664 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" 
pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 15:55:45 crc kubenswrapper[4755]: E1210 15:55:45.760448 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 15:55:51 crc kubenswrapper[4755]: E1210 15:55:51.760369 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 15:55:58 crc kubenswrapper[4755]: E1210 15:55:58.762135 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 15:55:59 crc kubenswrapper[4755]: I1210 15:55:59.045978 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-f9wl6"] Dec 10 15:55:59 crc kubenswrapper[4755]: I1210 15:55:59.053973 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-f9wl6"] Dec 10 15:55:59 crc kubenswrapper[4755]: I1210 15:55:59.774639 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="609b4b0b-1c46-4b66-bfd5-d42a91e325c4" path="/var/lib/kubelet/pods/609b4b0b-1c46-4b66-bfd5-d42a91e325c4/volumes" Dec 10 15:56:06 crc kubenswrapper[4755]: I1210 15:56:06.195526 4755 scope.go:117] "RemoveContainer" containerID="fb300d4332a4ed727e73ad046dc7f955e46e7275f8df1c4ab6d91bf93c5a1561" Dec 10 15:56:06 crc kubenswrapper[4755]: I1210 15:56:06.241926 4755 scope.go:117] "RemoveContainer" containerID="962f4ba07936ca5ba9c2964c6d1610a2ee6074837e0d8367490d1be7505b1a99" Dec 10 15:56:06 crc kubenswrapper[4755]: I1210 15:56:06.286642 4755 scope.go:117] "RemoveContainer" containerID="104730d17cfed610864e030290e6b1c29573152fc2d8089806dab1b17ee67e86" Dec 10 15:56:06 crc kubenswrapper[4755]: I1210 15:56:06.343766 4755 scope.go:117] "RemoveContainer" containerID="c1f6de90b51c9c2a5bbc79b58a3477229d33b82c58f4df6c9ce192e2135ada2e" Dec 10 15:56:06 crc kubenswrapper[4755]: I1210 15:56:06.416680 4755 scope.go:117] "RemoveContainer" containerID="40ddfbb29e08e538d25bb254cd786fefa0268837e8f54f4706716b65feb6b363" Dec 10 15:56:06 crc kubenswrapper[4755]: I1210 15:56:06.465750 4755 scope.go:117] "RemoveContainer" containerID="45a0d33acde2d0dbe6f25ad4b92bff5ad31e1c3da9321d4ce3e27a3fa9322692" Dec 10 15:56:06 crc kubenswrapper[4755]: I1210 15:56:06.524904 4755 scope.go:117] "RemoveContainer" containerID="77a4bc7fae8602bacecc29cb7bf5070eaffc1797961e84e560aa1afcef358b4f" Dec 10 15:56:07 crc kubenswrapper[4755]: E1210 15:56:07.758965 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" 
podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 15:56:10 crc kubenswrapper[4755]: E1210 15:56:10.760142 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 15:56:22 crc kubenswrapper[4755]: E1210 15:56:22.760440 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 15:56:25 crc kubenswrapper[4755]: E1210 15:56:25.760673 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 15:56:31 crc kubenswrapper[4755]: I1210 15:56:31.426956 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-72qxv"] Dec 10 15:56:31 crc kubenswrapper[4755]: I1210 15:56:31.430382 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-72qxv" Dec 10 15:56:31 crc kubenswrapper[4755]: I1210 15:56:31.444697 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-72qxv"] Dec 10 15:56:31 crc kubenswrapper[4755]: I1210 15:56:31.512032 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f79d3fc6-b53b-4f7a-8c86-7700f564086d-utilities\") pod \"redhat-operators-72qxv\" (UID: \"f79d3fc6-b53b-4f7a-8c86-7700f564086d\") " pod="openshift-marketplace/redhat-operators-72qxv" Dec 10 15:56:31 crc kubenswrapper[4755]: I1210 15:56:31.512270 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f79d3fc6-b53b-4f7a-8c86-7700f564086d-catalog-content\") pod \"redhat-operators-72qxv\" (UID: \"f79d3fc6-b53b-4f7a-8c86-7700f564086d\") " pod="openshift-marketplace/redhat-operators-72qxv" Dec 10 15:56:31 crc kubenswrapper[4755]: I1210 15:56:31.512375 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45pnw\" (UniqueName: \"kubernetes.io/projected/f79d3fc6-b53b-4f7a-8c86-7700f564086d-kube-api-access-45pnw\") pod \"redhat-operators-72qxv\" (UID: \"f79d3fc6-b53b-4f7a-8c86-7700f564086d\") " pod="openshift-marketplace/redhat-operators-72qxv" Dec 10 15:56:31 crc kubenswrapper[4755]: I1210 15:56:31.614442 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f79d3fc6-b53b-4f7a-8c86-7700f564086d-utilities\") pod \"redhat-operators-72qxv\" (UID: \"f79d3fc6-b53b-4f7a-8c86-7700f564086d\") " pod="openshift-marketplace/redhat-operators-72qxv" Dec 10 15:56:31 crc kubenswrapper[4755]: I1210 15:56:31.614613 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f79d3fc6-b53b-4f7a-8c86-7700f564086d-catalog-content\") pod \"redhat-operators-72qxv\" (UID: \"f79d3fc6-b53b-4f7a-8c86-7700f564086d\") " pod="openshift-marketplace/redhat-operators-72qxv" Dec 10 15:56:31 crc kubenswrapper[4755]: I1210 15:56:31.614645 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45pnw\" (UniqueName: \"kubernetes.io/projected/f79d3fc6-b53b-4f7a-8c86-7700f564086d-kube-api-access-45pnw\") pod \"redhat-operators-72qxv\" (UID: \"f79d3fc6-b53b-4f7a-8c86-7700f564086d\") " pod="openshift-marketplace/redhat-operators-72qxv" Dec 10 15:56:31 crc kubenswrapper[4755]: I1210 15:56:31.614923 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f79d3fc6-b53b-4f7a-8c86-7700f564086d-utilities\") pod \"redhat-operators-72qxv\" (UID: \"f79d3fc6-b53b-4f7a-8c86-7700f564086d\") " pod="openshift-marketplace/redhat-operators-72qxv" Dec 10 15:56:31 crc kubenswrapper[4755]: I1210 15:56:31.615047 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f79d3fc6-b53b-4f7a-8c86-7700f564086d-catalog-content\") pod \"redhat-operators-72qxv\" (UID: \"f79d3fc6-b53b-4f7a-8c86-7700f564086d\") " pod="openshift-marketplace/redhat-operators-72qxv" Dec 10 15:56:31 crc kubenswrapper[4755]: I1210 15:56:31.638564 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45pnw\" (UniqueName: \"kubernetes.io/projected/f79d3fc6-b53b-4f7a-8c86-7700f564086d-kube-api-access-45pnw\") pod \"redhat-operators-72qxv\" (UID: \"f79d3fc6-b53b-4f7a-8c86-7700f564086d\") " pod="openshift-marketplace/redhat-operators-72qxv" Dec 10 15:56:31 crc kubenswrapper[4755]: I1210 15:56:31.757330 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-72qxv" Dec 10 15:56:32 crc kubenswrapper[4755]: I1210 15:56:32.316341 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-72qxv"] Dec 10 15:56:33 crc kubenswrapper[4755]: I1210 15:56:33.111850 4755 generic.go:334] "Generic (PLEG): container finished" podID="f79d3fc6-b53b-4f7a-8c86-7700f564086d" containerID="e6914af0bf21233b344cc61698b6c29b63d69d9d0c047844d144c208aeece558" exitCode=0 Dec 10 15:56:33 crc kubenswrapper[4755]: I1210 15:56:33.111916 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-72qxv" event={"ID":"f79d3fc6-b53b-4f7a-8c86-7700f564086d","Type":"ContainerDied","Data":"e6914af0bf21233b344cc61698b6c29b63d69d9d0c047844d144c208aeece558"} Dec 10 15:56:33 crc kubenswrapper[4755]: I1210 15:56:33.112211 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-72qxv" event={"ID":"f79d3fc6-b53b-4f7a-8c86-7700f564086d","Type":"ContainerStarted","Data":"1c4c36d8ecb29e22147fdeadfebf0904f7491ed1f6bd2f6ad1d48865c8e7abcd"} Dec 10 15:56:33 crc kubenswrapper[4755]: E1210 15:56:33.767291 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 15:56:35 crc kubenswrapper[4755]: I1210 15:56:35.137142 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-72qxv" event={"ID":"f79d3fc6-b53b-4f7a-8c86-7700f564086d","Type":"ContainerStarted","Data":"018e66f222ac929d0f8a2b907aded76236c39da36008a6455715b3783380729b"} Dec 10 15:56:37 crc kubenswrapper[4755]: I1210 15:56:37.056242 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-h7t25"] Dec 10 15:56:37 crc kubenswrapper[4755]: I1210 15:56:37.066851 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-h7t25"] Dec 10 15:56:37 crc kubenswrapper[4755]: I1210 15:56:37.779228 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c537dd47-5926-4c76-9a78-af49c9418027" path="/var/lib/kubelet/pods/c537dd47-5926-4c76-9a78-af49c9418027/volumes" Dec 10 15:56:38 crc kubenswrapper[4755]: E1210 15:56:38.766416 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 15:56:39 crc kubenswrapper[4755]: I1210 15:56:39.176410 4755 generic.go:334] "Generic (PLEG): container finished" podID="f79d3fc6-b53b-4f7a-8c86-7700f564086d" containerID="018e66f222ac929d0f8a2b907aded76236c39da36008a6455715b3783380729b" exitCode=0 Dec 10 15:56:39 crc kubenswrapper[4755]: I1210 15:56:39.176517 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-72qxv" event={"ID":"f79d3fc6-b53b-4f7a-8c86-7700f564086d","Type":"ContainerDied","Data":"018e66f222ac929d0f8a2b907aded76236c39da36008a6455715b3783380729b"} Dec 10 15:56:40 crc kubenswrapper[4755]: I1210 15:56:40.189654 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-72qxv" event={"ID":"f79d3fc6-b53b-4f7a-8c86-7700f564086d","Type":"ContainerStarted","Data":"633747405f6836fb420748537ff4de41a17af3e4df127dece294b8dd734e19c8"} Dec 10 15:56:40 crc kubenswrapper[4755]: I1210 15:56:40.259904 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-72qxv" podStartSLOduration=2.606239448 podStartE2EDuration="9.259881109s" podCreationTimestamp="2025-12-10 15:56:31 +0000 UTC" firstStartedPulling="2025-12-10 15:56:33.115643058 +0000 UTC m=+1989.716526730" lastFinishedPulling="2025-12-10 15:56:39.769284759 +0000 UTC m=+1996.370168391" observedRunningTime="2025-12-10 15:56:40.251761668 +0000 UTC m=+1996.852645300" watchObservedRunningTime="2025-12-10 15:56:40.259881109 +0000 UTC m=+1996.860764741" Dec 10 15:56:41 crc kubenswrapper[4755]: I1210 15:56:41.768927 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-72qxv" Dec 10 15:56:41 crc kubenswrapper[4755]: I1210 15:56:41.769248 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-72qxv" Dec 10 15:56:42 crc kubenswrapper[4755]: I1210 15:56:42.812060 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-72qxv" podUID="f79d3fc6-b53b-4f7a-8c86-7700f564086d" containerName="registry-server" probeResult="failure" output=< Dec 10 15:56:42 crc kubenswrapper[4755]: timeout: failed to connect service ":50051" within 1s Dec 10 15:56:42 crc kubenswrapper[4755]: > Dec 10 15:56:46 crc kubenswrapper[4755]: E1210 15:56:46.759303 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 15:56:49 crc kubenswrapper[4755]: I1210 15:56:49.034460 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-vx69q"] Dec 10 15:56:49 crc kubenswrapper[4755]: I1210 15:56:49.045500 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-vx69q"] Dec 10 15:56:49 crc kubenswrapper[4755]: I1210 15:56:49.772538 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="790c3b51-ebb0-4e09-83ed-ecc8cf5c7701" path="/var/lib/kubelet/pods/790c3b51-ebb0-4e09-83ed-ecc8cf5c7701/volumes" Dec 10 15:56:51 crc kubenswrapper[4755]: E1210 15:56:51.760242 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 15:56:51 crc kubenswrapper[4755]: I1210 15:56:51.806076 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-72qxv" Dec 10 15:56:51 crc kubenswrapper[4755]: I1210 15:56:51.854471 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-72qxv" Dec 10 15:56:52 crc kubenswrapper[4755]: I1210 15:56:52.042944 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-72qxv"] Dec 10 15:56:53 crc kubenswrapper[4755]: I1210 15:56:53.311900 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-72qxv" podUID="f79d3fc6-b53b-4f7a-8c86-7700f564086d" containerName="registry-server" containerID="cri-o://633747405f6836fb420748537ff4de41a17af3e4df127dece294b8dd734e19c8" gracePeriod=2 Dec 10 15:56:53 crc kubenswrapper[4755]: I1210 15:56:53.865341 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-72qxv" Dec 10 15:56:54 crc kubenswrapper[4755]: I1210 15:56:54.054758 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45pnw\" (UniqueName: \"kubernetes.io/projected/f79d3fc6-b53b-4f7a-8c86-7700f564086d-kube-api-access-45pnw\") pod \"f79d3fc6-b53b-4f7a-8c86-7700f564086d\" (UID: \"f79d3fc6-b53b-4f7a-8c86-7700f564086d\") " Dec 10 15:56:54 crc kubenswrapper[4755]: I1210 15:56:54.055043 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f79d3fc6-b53b-4f7a-8c86-7700f564086d-utilities\") pod \"f79d3fc6-b53b-4f7a-8c86-7700f564086d\" (UID: \"f79d3fc6-b53b-4f7a-8c86-7700f564086d\") " Dec 10 15:56:54 crc kubenswrapper[4755]: I1210 15:56:54.055085 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f79d3fc6-b53b-4f7a-8c86-7700f564086d-catalog-content\") pod \"f79d3fc6-b53b-4f7a-8c86-7700f564086d\" (UID: \"f79d3fc6-b53b-4f7a-8c86-7700f564086d\") " Dec 10 15:56:54 crc kubenswrapper[4755]: I1210 15:56:54.056138 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f79d3fc6-b53b-4f7a-8c86-7700f564086d-utilities" (OuterVolumeSpecName: "utilities") pod "f79d3fc6-b53b-4f7a-8c86-7700f564086d" (UID: "f79d3fc6-b53b-4f7a-8c86-7700f564086d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:56:54 crc kubenswrapper[4755]: I1210 15:56:54.061254 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f79d3fc6-b53b-4f7a-8c86-7700f564086d-kube-api-access-45pnw" (OuterVolumeSpecName: "kube-api-access-45pnw") pod "f79d3fc6-b53b-4f7a-8c86-7700f564086d" (UID: "f79d3fc6-b53b-4f7a-8c86-7700f564086d"). InnerVolumeSpecName "kube-api-access-45pnw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:56:54 crc kubenswrapper[4755]: I1210 15:56:54.158945 4755 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f79d3fc6-b53b-4f7a-8c86-7700f564086d-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 15:56:54 crc kubenswrapper[4755]: I1210 15:56:54.158985 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45pnw\" (UniqueName: \"kubernetes.io/projected/f79d3fc6-b53b-4f7a-8c86-7700f564086d-kube-api-access-45pnw\") on node \"crc\" DevicePath \"\"" Dec 10 15:56:54 crc kubenswrapper[4755]: I1210 15:56:54.173036 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f79d3fc6-b53b-4f7a-8c86-7700f564086d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f79d3fc6-b53b-4f7a-8c86-7700f564086d" (UID: "f79d3fc6-b53b-4f7a-8c86-7700f564086d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:56:54 crc kubenswrapper[4755]: I1210 15:56:54.262396 4755 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f79d3fc6-b53b-4f7a-8c86-7700f564086d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 15:56:54 crc kubenswrapper[4755]: I1210 15:56:54.324842 4755 generic.go:334] "Generic (PLEG): container finished" podID="f79d3fc6-b53b-4f7a-8c86-7700f564086d" containerID="633747405f6836fb420748537ff4de41a17af3e4df127dece294b8dd734e19c8" exitCode=0 Dec 10 15:56:54 crc kubenswrapper[4755]: I1210 15:56:54.324897 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-72qxv" event={"ID":"f79d3fc6-b53b-4f7a-8c86-7700f564086d","Type":"ContainerDied","Data":"633747405f6836fb420748537ff4de41a17af3e4df127dece294b8dd734e19c8"} Dec 10 15:56:54 crc kubenswrapper[4755]: I1210 15:56:54.324931 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-72qxv" event={"ID":"f79d3fc6-b53b-4f7a-8c86-7700f564086d","Type":"ContainerDied","Data":"1c4c36d8ecb29e22147fdeadfebf0904f7491ed1f6bd2f6ad1d48865c8e7abcd"} Dec 10 15:56:54 crc kubenswrapper[4755]: I1210 15:56:54.324931 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-72qxv" Dec 10 15:56:54 crc kubenswrapper[4755]: I1210 15:56:54.324989 4755 scope.go:117] "RemoveContainer" containerID="633747405f6836fb420748537ff4de41a17af3e4df127dece294b8dd734e19c8" Dec 10 15:56:54 crc kubenswrapper[4755]: I1210 15:56:54.360269 4755 scope.go:117] "RemoveContainer" containerID="018e66f222ac929d0f8a2b907aded76236c39da36008a6455715b3783380729b" Dec 10 15:56:54 crc kubenswrapper[4755]: I1210 15:56:54.379765 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-72qxv"] Dec 10 15:56:54 crc kubenswrapper[4755]: I1210 15:56:54.388006 4755 scope.go:117] "RemoveContainer" containerID="e6914af0bf21233b344cc61698b6c29b63d69d9d0c047844d144c208aeece558" Dec 10 15:56:54 crc kubenswrapper[4755]: I1210 15:56:54.392066 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-72qxv"] Dec 10 15:56:54 crc kubenswrapper[4755]: I1210 15:56:54.438634 4755 scope.go:117] "RemoveContainer" containerID="633747405f6836fb420748537ff4de41a17af3e4df127dece294b8dd734e19c8" Dec 10 15:56:54 crc kubenswrapper[4755]: E1210 15:56:54.439133 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"633747405f6836fb420748537ff4de41a17af3e4df127dece294b8dd734e19c8\": container with ID starting with 633747405f6836fb420748537ff4de41a17af3e4df127dece294b8dd734e19c8 not found: ID does not exist" containerID="633747405f6836fb420748537ff4de41a17af3e4df127dece294b8dd734e19c8" Dec 10 15:56:54 crc kubenswrapper[4755]: I1210 15:56:54.439161 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"633747405f6836fb420748537ff4de41a17af3e4df127dece294b8dd734e19c8"} err="failed to get container status \"633747405f6836fb420748537ff4de41a17af3e4df127dece294b8dd734e19c8\": rpc error: code = NotFound desc = could not find container \"633747405f6836fb420748537ff4de41a17af3e4df127dece294b8dd734e19c8\": container with ID starting with 633747405f6836fb420748537ff4de41a17af3e4df127dece294b8dd734e19c8 not found: ID does not exist" Dec 10 15:56:54 crc 
kubenswrapper[4755]: I1210 15:56:54.439181 4755 scope.go:117] "RemoveContainer" containerID="018e66f222ac929d0f8a2b907aded76236c39da36008a6455715b3783380729b" Dec 10 15:56:54 crc kubenswrapper[4755]: E1210 15:56:54.439595 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"018e66f222ac929d0f8a2b907aded76236c39da36008a6455715b3783380729b\": container with ID starting with 018e66f222ac929d0f8a2b907aded76236c39da36008a6455715b3783380729b not found: ID does not exist" containerID="018e66f222ac929d0f8a2b907aded76236c39da36008a6455715b3783380729b" Dec 10 15:56:54 crc kubenswrapper[4755]: I1210 15:56:54.439617 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"018e66f222ac929d0f8a2b907aded76236c39da36008a6455715b3783380729b"} err="failed to get container status \"018e66f222ac929d0f8a2b907aded76236c39da36008a6455715b3783380729b\": rpc error: code = NotFound desc = could not find container \"018e66f222ac929d0f8a2b907aded76236c39da36008a6455715b3783380729b\": container with ID starting with 018e66f222ac929d0f8a2b907aded76236c39da36008a6455715b3783380729b not found: ID does not exist" Dec 10 15:56:54 crc kubenswrapper[4755]: I1210 15:56:54.439635 4755 scope.go:117] "RemoveContainer" containerID="e6914af0bf21233b344cc61698b6c29b63d69d9d0c047844d144c208aeece558" Dec 10 15:56:54 crc kubenswrapper[4755]: E1210 15:56:54.439834 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6914af0bf21233b344cc61698b6c29b63d69d9d0c047844d144c208aeece558\": container with ID starting with e6914af0bf21233b344cc61698b6c29b63d69d9d0c047844d144c208aeece558 not found: ID does not exist" containerID="e6914af0bf21233b344cc61698b6c29b63d69d9d0c047844d144c208aeece558" Dec 10 15:56:54 crc kubenswrapper[4755]: I1210 15:56:54.439856 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6914af0bf21233b344cc61698b6c29b63d69d9d0c047844d144c208aeece558"} err="failed to get container status \"e6914af0bf21233b344cc61698b6c29b63d69d9d0c047844d144c208aeece558\": rpc error: code = NotFound desc = could not find container \"e6914af0bf21233b344cc61698b6c29b63d69d9d0c047844d144c208aeece558\": container with ID starting with e6914af0bf21233b344cc61698b6c29b63d69d9d0c047844d144c208aeece558 not found: ID does not exist" Dec 10 15:56:55 crc kubenswrapper[4755]: I1210 15:56:55.769537 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f79d3fc6-b53b-4f7a-8c86-7700f564086d" path="/var/lib/kubelet/pods/f79d3fc6-b53b-4f7a-8c86-7700f564086d/volumes" Dec 10 15:56:59 crc kubenswrapper[4755]: E1210 15:56:59.761359 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 15:57:05 crc kubenswrapper[4755]: E1210 15:57:05.760480 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 15:57:06 crc 
kubenswrapper[4755]: I1210 15:57:06.712853 4755 scope.go:117] "RemoveContainer" containerID="1c1d036baaea994e62ab2ed46c5a1bfeea6ecdd9d8722ff1fc7402c8729a6add" Dec 10 15:57:06 crc kubenswrapper[4755]: I1210 15:57:06.775628 4755 scope.go:117] "RemoveContainer" containerID="1bcf89598c6d7fd6ee49ea88b74b2bf5a037da185e4ec1f1c2ebad0b1f9487c9" Dec 10 15:57:13 crc kubenswrapper[4755]: E1210 15:57:13.766697 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 15:57:18 crc kubenswrapper[4755]: E1210 15:57:18.760099 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 15:57:24 crc kubenswrapper[4755]: I1210 15:57:24.044666 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-vmr2t"] Dec 10 15:57:24 crc kubenswrapper[4755]: I1210 15:57:24.056873 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-vmr2t"] Dec 10 15:57:25 crc kubenswrapper[4755]: E1210 15:57:25.760282 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 15:57:25 crc kubenswrapper[4755]: I1210 15:57:25.771640 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79ff0629-ddf7-481b-be4f-58d4023b0ae7" path="/var/lib/kubelet/pods/79ff0629-ddf7-481b-be4f-58d4023b0ae7/volumes" Dec 10 15:57:33 crc kubenswrapper[4755]: E1210 15:57:33.768216 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 15:57:38 crc kubenswrapper[4755]: E1210 15:57:38.760308 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 15:57:40 crc kubenswrapper[4755]: I1210 15:57:40.359219 4755 patch_prober.go:28] interesting pod/machine-config-daemon-ggt8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:57:40 crc kubenswrapper[4755]: I1210 15:57:40.359578 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" 
podUID="b132a8b9-1c99-414d-8773-229bf36b305d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:57:44 crc kubenswrapper[4755]: E1210 15:57:44.762641 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 15:57:49 crc kubenswrapper[4755]: E1210 15:57:49.760509 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 15:57:59 crc kubenswrapper[4755]: E1210 15:57:59.760446 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 15:58:03 crc kubenswrapper[4755]: E1210 15:58:03.772605 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 15:58:06 crc kubenswrapper[4755]: I1210 15:58:06.891797 4755 scope.go:117] "RemoveContainer" containerID="7b59ed82c9b544643082119788f0cfdaa32b6beaa9131b7ed7f4833049015779" Dec 10 15:58:10 crc kubenswrapper[4755]: I1210 15:58:10.359241 4755 patch_prober.go:28] interesting pod/machine-config-daemon-ggt8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:58:10 crc kubenswrapper[4755]: I1210 15:58:10.359862 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:58:14 crc kubenswrapper[4755]: E1210 15:58:14.760658 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 15:58:15 crc kubenswrapper[4755]: E1210 15:58:15.758677 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" 
podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 15:58:25 crc kubenswrapper[4755]: E1210 15:58:25.773368 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 15:58:26 crc kubenswrapper[4755]: E1210 15:58:26.758890 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 15:58:38 crc kubenswrapper[4755]: E1210 15:58:38.760601 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 15:58:39 crc kubenswrapper[4755]: E1210 15:58:39.760289 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 15:58:40 crc kubenswrapper[4755]: I1210 15:58:40.359615 4755 patch_prober.go:28] interesting pod/machine-config-daemon-ggt8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:58:40 crc kubenswrapper[4755]: I1210 15:58:40.359869 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:58:40 crc kubenswrapper[4755]: I1210 15:58:40.359911 4755 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" Dec 10 15:58:40 crc kubenswrapper[4755]: I1210 15:58:40.360669 4755 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"848464b372da64a2ff4b9b5d8f68e30f7b70ba91c0c9790e6358c2e46556c416"} pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 10 15:58:40 crc kubenswrapper[4755]: I1210 15:58:40.360733 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" containerName="machine-config-daemon" containerID="cri-o://848464b372da64a2ff4b9b5d8f68e30f7b70ba91c0c9790e6358c2e46556c416" gracePeriod=600 Dec 10 15:58:40 crc kubenswrapper[4755]: I1210 15:58:40.491243 4755 generic.go:334] 
"Generic (PLEG): container finished" podID="b132a8b9-1c99-414d-8773-229bf36b305d" containerID="848464b372da64a2ff4b9b5d8f68e30f7b70ba91c0c9790e6358c2e46556c416" exitCode=0 Dec 10 15:58:40 crc kubenswrapper[4755]: I1210 15:58:40.491296 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" event={"ID":"b132a8b9-1c99-414d-8773-229bf36b305d","Type":"ContainerDied","Data":"848464b372da64a2ff4b9b5d8f68e30f7b70ba91c0c9790e6358c2e46556c416"} Dec 10 15:58:40 crc kubenswrapper[4755]: I1210 15:58:40.491401 4755 scope.go:117] "RemoveContainer" containerID="008e8b27aa48ce8c618284ca4dccd38cf79c00478318f7aaaa34c326eeb5ea52" Dec 10 15:58:41 crc kubenswrapper[4755]: I1210 15:58:41.503023 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" event={"ID":"b132a8b9-1c99-414d-8773-229bf36b305d","Type":"ContainerStarted","Data":"069e74fb745b22d2e40409a9c21ff2e40e0bdf9359efe0e863492dacbe4fee35"} Dec 10 15:58:49 crc kubenswrapper[4755]: E1210 15:58:49.760708 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 15:58:51 crc kubenswrapper[4755]: E1210 15:58:51.759507 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 15:59:04 crc kubenswrapper[4755]: E1210 15:59:04.760136 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 15:59:05 crc kubenswrapper[4755]: E1210 15:59:05.884558 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 10 15:59:05 crc kubenswrapper[4755]: E1210 15:59:05.884650 4755 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 10 15:59:05 crc kubenswrapper[4755]: E1210 15:59:05.884798 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mz4t5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-jfc28_openstack(998863b6-4f48-4c8b-8011-a40377686b99): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 10 15:59:05 crc kubenswrapper[4755]: E1210 15:59:05.886207 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 15:59:18 crc kubenswrapper[4755]: E1210 15:59:18.760320 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 15:59:20 crc kubenswrapper[4755]: E1210 15:59:20.759725 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 15:59:31 crc kubenswrapper[4755]: I1210 15:59:31.960057 4755 generic.go:334] "Generic (PLEG): container finished" podID="cce50278-7a20-499b-bbe8-7304224cc6e4" containerID="d3c945a565aa341d0b3159e4e3549c4d853fb4b0e2e1f1f49a8a34da5d0c0e65" exitCode=2 Dec 10 15:59:31 crc kubenswrapper[4755]: I1210 15:59:31.960105 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7fj8w" event={"ID":"cce50278-7a20-499b-bbe8-7304224cc6e4","Type":"ContainerDied","Data":"d3c945a565aa341d0b3159e4e3549c4d853fb4b0e2e1f1f49a8a34da5d0c0e65"} Dec 10 15:59:32 crc kubenswrapper[4755]: E1210 15:59:32.759986 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 15:59:32 crc kubenswrapper[4755]: I1210 15:59:32.760022 4755 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 10 15:59:32 crc kubenswrapper[4755]: E1210 15:59:32.885335 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 10 15:59:32 crc kubenswrapper[4755]: E1210 15:59:32.885707 4755 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 10 15:59:32 crc kubenswrapper[4755]: E1210 15:59:32.885859 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5d4h5b7hfbh5ddh688h9ch55bh7chf6h5ddh68ch94h69h5c5h596h59bh569hfchc4h676hcbh64dhdbh57fh75h5c9h98h59ch679h566h77h9cq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hw9gj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(6d104bea-ecdc-4fe1-9861-fb1a19fce845): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 10 15:59:32 crc kubenswrapper[4755]: E1210 15:59:32.887209 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 15:59:33 crc kubenswrapper[4755]: I1210 15:59:33.508173 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7fj8w" Dec 10 15:59:33 crc kubenswrapper[4755]: I1210 15:59:33.656193 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cce50278-7a20-499b-bbe8-7304224cc6e4-inventory\") pod \"cce50278-7a20-499b-bbe8-7304224cc6e4\" (UID: \"cce50278-7a20-499b-bbe8-7304224cc6e4\") " Dec 10 15:59:33 crc kubenswrapper[4755]: I1210 15:59:33.656524 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cce50278-7a20-499b-bbe8-7304224cc6e4-ssh-key\") pod \"cce50278-7a20-499b-bbe8-7304224cc6e4\" (UID: \"cce50278-7a20-499b-bbe8-7304224cc6e4\") " Dec 10 15:59:33 crc kubenswrapper[4755]: I1210 15:59:33.656942 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8lfh5\" (UniqueName: \"kubernetes.io/projected/cce50278-7a20-499b-bbe8-7304224cc6e4-kube-api-access-8lfh5\") pod \"cce50278-7a20-499b-bbe8-7304224cc6e4\" (UID: \"cce50278-7a20-499b-bbe8-7304224cc6e4\") " Dec 10 15:59:33 crc kubenswrapper[4755]: I1210 15:59:33.662181 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cce50278-7a20-499b-bbe8-7304224cc6e4-kube-api-access-8lfh5" (OuterVolumeSpecName: "kube-api-access-8lfh5") pod "cce50278-7a20-499b-bbe8-7304224cc6e4" (UID: "cce50278-7a20-499b-bbe8-7304224cc6e4"). InnerVolumeSpecName "kube-api-access-8lfh5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:59:33 crc kubenswrapper[4755]: I1210 15:59:33.686317 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cce50278-7a20-499b-bbe8-7304224cc6e4-inventory" (OuterVolumeSpecName: "inventory") pod "cce50278-7a20-499b-bbe8-7304224cc6e4" (UID: "cce50278-7a20-499b-bbe8-7304224cc6e4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:59:33 crc kubenswrapper[4755]: I1210 15:59:33.704046 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cce50278-7a20-499b-bbe8-7304224cc6e4-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "cce50278-7a20-499b-bbe8-7304224cc6e4" (UID: "cce50278-7a20-499b-bbe8-7304224cc6e4"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:59:33 crc kubenswrapper[4755]: I1210 15:59:33.759353 4755 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cce50278-7a20-499b-bbe8-7304224cc6e4-inventory\") on node \"crc\" DevicePath \"\"" Dec 10 15:59:33 crc kubenswrapper[4755]: I1210 15:59:33.759397 4755 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cce50278-7a20-499b-bbe8-7304224cc6e4-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 10 15:59:33 crc kubenswrapper[4755]: I1210 15:59:33.759408 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8lfh5\" (UniqueName: \"kubernetes.io/projected/cce50278-7a20-499b-bbe8-7304224cc6e4-kube-api-access-8lfh5\") on node \"crc\" DevicePath \"\"" Dec 10 15:59:33 crc kubenswrapper[4755]: I1210 15:59:33.984835 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7fj8w" event={"ID":"cce50278-7a20-499b-bbe8-7304224cc6e4","Type":"ContainerDied","Data":"e197e3e243954e5d58a1d1ebf9aa4ca38db83a811d5a41bf40c09098d4329967"} Dec 10 15:59:33 crc kubenswrapper[4755]: I1210 15:59:33.984877 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e197e3e243954e5d58a1d1ebf9aa4ca38db83a811d5a41bf40c09098d4329967" Dec 10 15:59:33 crc kubenswrapper[4755]: I1210 15:59:33.984911 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7fj8w" Dec 10 15:59:41 crc kubenswrapper[4755]: I1210 15:59:41.032885 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-246lg"] Dec 10 15:59:41 crc kubenswrapper[4755]: E1210 15:59:41.042849 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f79d3fc6-b53b-4f7a-8c86-7700f564086d" containerName="extract-utilities" Dec 10 15:59:41 crc kubenswrapper[4755]: I1210 15:59:41.042909 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="f79d3fc6-b53b-4f7a-8c86-7700f564086d" containerName="extract-utilities" Dec 10 15:59:41 crc kubenswrapper[4755]: E1210 15:59:41.042959 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f79d3fc6-b53b-4f7a-8c86-7700f564086d" containerName="registry-server" Dec 10 15:59:41 crc kubenswrapper[4755]: I1210 15:59:41.042971 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="f79d3fc6-b53b-4f7a-8c86-7700f564086d" containerName="registry-server" Dec 10 15:59:41 crc kubenswrapper[4755]: E1210 15:59:41.043004 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cce50278-7a20-499b-bbe8-7304224cc6e4" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 10 15:59:41 crc kubenswrapper[4755]: I1210 15:59:41.043015 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="cce50278-7a20-499b-bbe8-7304224cc6e4" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 10 15:59:41 crc kubenswrapper[4755]: E1210 15:59:41.043041 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f79d3fc6-b53b-4f7a-8c86-7700f564086d" containerName="extract-content" Dec 10 15:59:41 crc kubenswrapper[4755]: I1210 15:59:41.043051 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="f79d3fc6-b53b-4f7a-8c86-7700f564086d" containerName="extract-content" Dec 10 15:59:41 crc kubenswrapper[4755]: I1210 15:59:41.043647 4755 
memory_manager.go:354] "RemoveStaleState removing state" podUID="cce50278-7a20-499b-bbe8-7304224cc6e4" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 10 15:59:41 crc kubenswrapper[4755]: I1210 15:59:41.043706 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="f79d3fc6-b53b-4f7a-8c86-7700f564086d" containerName="registry-server" Dec 10 15:59:41 crc kubenswrapper[4755]: I1210 15:59:41.045282 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-246lg" Dec 10 15:59:41 crc kubenswrapper[4755]: I1210 15:59:41.050300 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 10 15:59:41 crc kubenswrapper[4755]: I1210 15:59:41.053348 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 10 15:59:41 crc kubenswrapper[4755]: I1210 15:59:41.054858 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-74mg7" Dec 10 15:59:41 crc kubenswrapper[4755]: I1210 15:59:41.079870 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 10 15:59:41 crc kubenswrapper[4755]: I1210 15:59:41.100249 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-246lg"] Dec 10 15:59:41 crc kubenswrapper[4755]: I1210 15:59:41.119292 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e893969f-84c7-4d33-a977-13cdc1a9ef2e-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-246lg\" (UID: \"e893969f-84c7-4d33-a977-13cdc1a9ef2e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-246lg" Dec 10 15:59:41 crc kubenswrapper[4755]: I1210 15:59:41.119364 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e893969f-84c7-4d33-a977-13cdc1a9ef2e-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-246lg\" (UID: \"e893969f-84c7-4d33-a977-13cdc1a9ef2e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-246lg" Dec 10 15:59:41 crc kubenswrapper[4755]: I1210 15:59:41.119388 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klgpv\" (UniqueName: \"kubernetes.io/projected/e893969f-84c7-4d33-a977-13cdc1a9ef2e-kube-api-access-klgpv\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-246lg\" (UID: \"e893969f-84c7-4d33-a977-13cdc1a9ef2e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-246lg" Dec 10 15:59:41 crc kubenswrapper[4755]: I1210 15:59:41.221276 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e893969f-84c7-4d33-a977-13cdc1a9ef2e-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-246lg\" (UID: \"e893969f-84c7-4d33-a977-13cdc1a9ef2e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-246lg" Dec 10 15:59:41 crc kubenswrapper[4755]: I1210 15:59:41.221358 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e893969f-84c7-4d33-a977-13cdc1a9ef2e-ssh-key\") pod 
\"download-cache-edpm-deployment-openstack-edpm-ipam-246lg\" (UID: \"e893969f-84c7-4d33-a977-13cdc1a9ef2e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-246lg" Dec 10 15:59:41 crc kubenswrapper[4755]: I1210 15:59:41.221380 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klgpv\" (UniqueName: \"kubernetes.io/projected/e893969f-84c7-4d33-a977-13cdc1a9ef2e-kube-api-access-klgpv\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-246lg\" (UID: \"e893969f-84c7-4d33-a977-13cdc1a9ef2e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-246lg" Dec 10 15:59:41 crc kubenswrapper[4755]: I1210 15:59:41.228133 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e893969f-84c7-4d33-a977-13cdc1a9ef2e-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-246lg\" (UID: \"e893969f-84c7-4d33-a977-13cdc1a9ef2e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-246lg" Dec 10 15:59:41 crc kubenswrapper[4755]: I1210 15:59:41.234703 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e893969f-84c7-4d33-a977-13cdc1a9ef2e-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-246lg\" (UID: \"e893969f-84c7-4d33-a977-13cdc1a9ef2e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-246lg" Dec 10 15:59:41 crc kubenswrapper[4755]: I1210 15:59:41.237373 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klgpv\" (UniqueName: \"kubernetes.io/projected/e893969f-84c7-4d33-a977-13cdc1a9ef2e-kube-api-access-klgpv\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-246lg\" (UID: \"e893969f-84c7-4d33-a977-13cdc1a9ef2e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-246lg" Dec 10 15:59:41 crc kubenswrapper[4755]: I1210 15:59:41.386020 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-246lg" Dec 10 15:59:41 crc kubenswrapper[4755]: I1210 15:59:41.904167 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-246lg"] Dec 10 15:59:42 crc kubenswrapper[4755]: I1210 15:59:42.077315 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-246lg" event={"ID":"e893969f-84c7-4d33-a977-13cdc1a9ef2e","Type":"ContainerStarted","Data":"7233f413a7a8daafd32fcef01021c0fb5f1df7c6b3cd8562d2f07c77b0afa490"} Dec 10 15:59:43 crc kubenswrapper[4755]: I1210 15:59:43.091567 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-246lg" event={"ID":"e893969f-84c7-4d33-a977-13cdc1a9ef2e","Type":"ContainerStarted","Data":"9f3b3d3f9de2495b039ac5e8dfb6fe5bc106a9d5d04acc3fb71782e81ab24292"} Dec 10 15:59:43 crc kubenswrapper[4755]: I1210 15:59:43.129269 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-246lg" podStartSLOduration=1.336487219 podStartE2EDuration="2.129248694s" podCreationTimestamp="2025-12-10 15:59:41 +0000 UTC" firstStartedPulling="2025-12-10 15:59:41.908966585 +0000 UTC m=+2178.509850207" lastFinishedPulling="2025-12-10 15:59:42.70172805 +0000 UTC m=+2179.302611682" observedRunningTime="2025-12-10 15:59:43.120315401 +0000 UTC m=+2179.721199033" watchObservedRunningTime="2025-12-10 15:59:43.129248694 +0000 UTC m=+2179.730132326" Dec 10 15:59:43 crc kubenswrapper[4755]: E1210 15:59:43.786003 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 15:59:47 crc kubenswrapper[4755]: E1210 15:59:47.760394 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 15:59:57 crc kubenswrapper[4755]: E1210 15:59:57.760130 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:00:00 crc kubenswrapper[4755]: I1210 16:00:00.153345 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29423040-hf4vs"] Dec 10 16:00:00 crc kubenswrapper[4755]: I1210 16:00:00.155233 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29423040-hf4vs" Dec 10 16:00:00 crc kubenswrapper[4755]: I1210 16:00:00.158208 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 10 16:00:00 crc kubenswrapper[4755]: I1210 16:00:00.159134 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 10 16:00:00 crc kubenswrapper[4755]: I1210 16:00:00.169376 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29423040-hf4vs"] Dec 10 16:00:00 crc kubenswrapper[4755]: I1210 16:00:00.324689 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1680331d-d48e-4757-aee4-fab91fecff27-config-volume\") pod \"collect-profiles-29423040-hf4vs\" (UID: \"1680331d-d48e-4757-aee4-fab91fecff27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423040-hf4vs" Dec 10 16:00:00 crc kubenswrapper[4755]: I1210 16:00:00.324743 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1680331d-d48e-4757-aee4-fab91fecff27-secret-volume\") pod \"collect-profiles-29423040-hf4vs\" (UID: \"1680331d-d48e-4757-aee4-fab91fecff27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423040-hf4vs" Dec 10 16:00:00 crc kubenswrapper[4755]: I1210 16:00:00.325890 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkh6p\" (UniqueName: \"kubernetes.io/projected/1680331d-d48e-4757-aee4-fab91fecff27-kube-api-access-qkh6p\") pod \"collect-profiles-29423040-hf4vs\" (UID: \"1680331d-d48e-4757-aee4-fab91fecff27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423040-hf4vs" Dec 10 16:00:00 crc kubenswrapper[4755]: I1210 16:00:00.427813 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkh6p\" (UniqueName: \"kubernetes.io/projected/1680331d-d48e-4757-aee4-fab91fecff27-kube-api-access-qkh6p\") pod \"collect-profiles-29423040-hf4vs\" (UID: \"1680331d-d48e-4757-aee4-fab91fecff27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423040-hf4vs" Dec 10 16:00:00 crc kubenswrapper[4755]: I1210 16:00:00.428033 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1680331d-d48e-4757-aee4-fab91fecff27-config-volume\") pod \"collect-profiles-29423040-hf4vs\" (UID: \"1680331d-d48e-4757-aee4-fab91fecff27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423040-hf4vs" Dec 10 16:00:00 crc kubenswrapper[4755]: I1210 16:00:00.428067 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1680331d-d48e-4757-aee4-fab91fecff27-secret-volume\") pod \"collect-profiles-29423040-hf4vs\" (UID: \"1680331d-d48e-4757-aee4-fab91fecff27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423040-hf4vs" Dec 10 16:00:00 crc kubenswrapper[4755]: I1210 16:00:00.429122 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1680331d-d48e-4757-aee4-fab91fecff27-config-volume\") pod 
\"collect-profiles-29423040-hf4vs\" (UID: \"1680331d-d48e-4757-aee4-fab91fecff27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423040-hf4vs" Dec 10 16:00:00 crc kubenswrapper[4755]: I1210 16:00:00.434410 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1680331d-d48e-4757-aee4-fab91fecff27-secret-volume\") pod \"collect-profiles-29423040-hf4vs\" (UID: \"1680331d-d48e-4757-aee4-fab91fecff27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423040-hf4vs" Dec 10 16:00:00 crc kubenswrapper[4755]: I1210 16:00:00.444352 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkh6p\" (UniqueName: \"kubernetes.io/projected/1680331d-d48e-4757-aee4-fab91fecff27-kube-api-access-qkh6p\") pod \"collect-profiles-29423040-hf4vs\" (UID: \"1680331d-d48e-4757-aee4-fab91fecff27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423040-hf4vs" Dec 10 16:00:00 crc kubenswrapper[4755]: I1210 16:00:00.499545 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29423040-hf4vs" Dec 10 16:00:00 crc kubenswrapper[4755]: I1210 16:00:00.976570 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29423040-hf4vs"] Dec 10 16:00:00 crc kubenswrapper[4755]: W1210 16:00:00.983677 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1680331d_d48e_4757_aee4_fab91fecff27.slice/crio-d94781681cde882c70f6874348220cacaee7e806d365da098d8bd10ccd25d9c6 WatchSource:0}: Error finding container d94781681cde882c70f6874348220cacaee7e806d365da098d8bd10ccd25d9c6: Status 404 returned error can't find the container with id d94781681cde882c70f6874348220cacaee7e806d365da098d8bd10ccd25d9c6 Dec 10 16:00:01 crc kubenswrapper[4755]: I1210 16:00:01.224655 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tkjw9"] Dec 10 16:00:01 crc kubenswrapper[4755]: I1210 16:00:01.227741 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tkjw9" Dec 10 16:00:01 crc kubenswrapper[4755]: I1210 16:00:01.238670 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tkjw9"] Dec 10 16:00:01 crc kubenswrapper[4755]: I1210 16:00:01.277071 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29423040-hf4vs" event={"ID":"1680331d-d48e-4757-aee4-fab91fecff27","Type":"ContainerStarted","Data":"d94781681cde882c70f6874348220cacaee7e806d365da098d8bd10ccd25d9c6"} Dec 10 16:00:01 crc kubenswrapper[4755]: I1210 16:00:01.358220 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dpmp\" (UniqueName: \"kubernetes.io/projected/d63e3ee2-9e30-4e71-a323-2bc785ae5af5-kube-api-access-6dpmp\") pod \"community-operators-tkjw9\" (UID: \"d63e3ee2-9e30-4e71-a323-2bc785ae5af5\") " pod="openshift-marketplace/community-operators-tkjw9" Dec 10 16:00:01 crc kubenswrapper[4755]: I1210 16:00:01.358371 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d63e3ee2-9e30-4e71-a323-2bc785ae5af5-utilities\") pod \"community-operators-tkjw9\" (UID: \"d63e3ee2-9e30-4e71-a323-2bc785ae5af5\") " pod="openshift-marketplace/community-operators-tkjw9" Dec 10 16:00:01 crc kubenswrapper[4755]: I1210 16:00:01.359148 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d63e3ee2-9e30-4e71-a323-2bc785ae5af5-catalog-content\") pod \"community-operators-tkjw9\" (UID: \"d63e3ee2-9e30-4e71-a323-2bc785ae5af5\") " pod="openshift-marketplace/community-operators-tkjw9" Dec 10 16:00:01 crc kubenswrapper[4755]: I1210 16:00:01.461532 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dpmp\" (UniqueName: \"kubernetes.io/projected/d63e3ee2-9e30-4e71-a323-2bc785ae5af5-kube-api-access-6dpmp\") pod \"community-operators-tkjw9\" (UID: \"d63e3ee2-9e30-4e71-a323-2bc785ae5af5\") " pod="openshift-marketplace/community-operators-tkjw9" Dec 10 16:00:01 crc kubenswrapper[4755]: I1210 16:00:01.461602 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d63e3ee2-9e30-4e71-a323-2bc785ae5af5-utilities\") pod \"community-operators-tkjw9\" (UID: \"d63e3ee2-9e30-4e71-a323-2bc785ae5af5\") " pod="openshift-marketplace/community-operators-tkjw9" Dec 10 16:00:01 crc kubenswrapper[4755]: I1210 16:00:01.461722 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d63e3ee2-9e30-4e71-a323-2bc785ae5af5-catalog-content\") pod \"community-operators-tkjw9\" (UID: \"d63e3ee2-9e30-4e71-a323-2bc785ae5af5\") " pod="openshift-marketplace/community-operators-tkjw9" Dec 10 16:00:01 crc kubenswrapper[4755]: I1210 16:00:01.462152 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d63e3ee2-9e30-4e71-a323-2bc785ae5af5-catalog-content\") pod \"community-operators-tkjw9\" (UID: \"d63e3ee2-9e30-4e71-a323-2bc785ae5af5\") " pod="openshift-marketplace/community-operators-tkjw9" Dec 10 16:00:01 crc kubenswrapper[4755]: I1210 16:00:01.462696 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d63e3ee2-9e30-4e71-a323-2bc785ae5af5-utilities\") pod \"community-operators-tkjw9\" (UID: \"d63e3ee2-9e30-4e71-a323-2bc785ae5af5\") " pod="openshift-marketplace/community-operators-tkjw9" Dec 10 16:00:01 crc kubenswrapper[4755]: I1210 16:00:01.492128 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dpmp\" (UniqueName: \"kubernetes.io/projected/d63e3ee2-9e30-4e71-a323-2bc785ae5af5-kube-api-access-6dpmp\") pod \"community-operators-tkjw9\" (UID: \"d63e3ee2-9e30-4e71-a323-2bc785ae5af5\") " pod="openshift-marketplace/community-operators-tkjw9" Dec 10 16:00:01 crc kubenswrapper[4755]: I1210 16:00:01.567096 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tkjw9" Dec 10 16:00:02 crc kubenswrapper[4755]: I1210 16:00:02.203338 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tkjw9"] Dec 10 16:00:02 crc kubenswrapper[4755]: I1210 16:00:02.287773 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tkjw9" event={"ID":"d63e3ee2-9e30-4e71-a323-2bc785ae5af5","Type":"ContainerStarted","Data":"eb1a24035cf57bf833f59af2240afe7cbcab51d46022909589baffe1c10acda1"} Dec 10 16:00:02 crc kubenswrapper[4755]: I1210 16:00:02.290568 4755 generic.go:334] "Generic (PLEG): container finished" podID="1680331d-d48e-4757-aee4-fab91fecff27" containerID="ef23a65e007a36f295d36364d5cbaa01c03adff2ce69f056f5aa6c2e3316a04c" exitCode=0 Dec 10 16:00:02 crc kubenswrapper[4755]: I1210 16:00:02.290647 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29423040-hf4vs" event={"ID":"1680331d-d48e-4757-aee4-fab91fecff27","Type":"ContainerDied","Data":"ef23a65e007a36f295d36364d5cbaa01c03adff2ce69f056f5aa6c2e3316a04c"} Dec 10 16:00:02 crc kubenswrapper[4755]: E1210 16:00:02.758629 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:00:03 crc kubenswrapper[4755]: I1210 16:00:03.301519 4755 generic.go:334] "Generic (PLEG): container finished" podID="d63e3ee2-9e30-4e71-a323-2bc785ae5af5" containerID="89cf97fd34591d3652382cd919b7e2ab93407ebc6992ca80ddce63d05cb4faf7" exitCode=0 Dec 10 16:00:03 crc kubenswrapper[4755]: I1210 16:00:03.301606 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tkjw9" event={"ID":"d63e3ee2-9e30-4e71-a323-2bc785ae5af5","Type":"ContainerDied","Data":"89cf97fd34591d3652382cd919b7e2ab93407ebc6992ca80ddce63d05cb4faf7"} Dec 10 16:00:03 crc kubenswrapper[4755]: I1210 16:00:03.699845 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29423040-hf4vs" Dec 10 16:00:03 crc kubenswrapper[4755]: I1210 16:00:03.833227 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkh6p\" (UniqueName: \"kubernetes.io/projected/1680331d-d48e-4757-aee4-fab91fecff27-kube-api-access-qkh6p\") pod \"1680331d-d48e-4757-aee4-fab91fecff27\" (UID: \"1680331d-d48e-4757-aee4-fab91fecff27\") " Dec 10 16:00:03 crc kubenswrapper[4755]: I1210 16:00:03.833280 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1680331d-d48e-4757-aee4-fab91fecff27-secret-volume\") pod \"1680331d-d48e-4757-aee4-fab91fecff27\" (UID: \"1680331d-d48e-4757-aee4-fab91fecff27\") " Dec 10 16:00:03 crc kubenswrapper[4755]: I1210 16:00:03.833493 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1680331d-d48e-4757-aee4-fab91fecff27-config-volume\") pod \"1680331d-d48e-4757-aee4-fab91fecff27\" (UID: \"1680331d-d48e-4757-aee4-fab91fecff27\") " Dec 10 16:00:03 crc kubenswrapper[4755]: I1210 16:00:03.834593 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1680331d-d48e-4757-aee4-fab91fecff27-config-volume" (OuterVolumeSpecName: "config-volume") pod "1680331d-d48e-4757-aee4-fab91fecff27" (UID: "1680331d-d48e-4757-aee4-fab91fecff27"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 16:00:03 crc kubenswrapper[4755]: I1210 16:00:03.838800 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1680331d-d48e-4757-aee4-fab91fecff27-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1680331d-d48e-4757-aee4-fab91fecff27" (UID: "1680331d-d48e-4757-aee4-fab91fecff27"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 16:00:03 crc kubenswrapper[4755]: I1210 16:00:03.839111 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1680331d-d48e-4757-aee4-fab91fecff27-kube-api-access-qkh6p" (OuterVolumeSpecName: "kube-api-access-qkh6p") pod "1680331d-d48e-4757-aee4-fab91fecff27" (UID: "1680331d-d48e-4757-aee4-fab91fecff27"). InnerVolumeSpecName "kube-api-access-qkh6p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 16:00:03 crc kubenswrapper[4755]: I1210 16:00:03.935903 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qkh6p\" (UniqueName: \"kubernetes.io/projected/1680331d-d48e-4757-aee4-fab91fecff27-kube-api-access-qkh6p\") on node \"crc\" DevicePath \"\"" Dec 10 16:00:03 crc kubenswrapper[4755]: I1210 16:00:03.936125 4755 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1680331d-d48e-4757-aee4-fab91fecff27-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 10 16:00:03 crc kubenswrapper[4755]: I1210 16:00:03.936208 4755 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1680331d-d48e-4757-aee4-fab91fecff27-config-volume\") on node \"crc\" DevicePath \"\"" Dec 10 16:00:04 crc kubenswrapper[4755]: I1210 16:00:04.312112 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29423040-hf4vs" event={"ID":"1680331d-d48e-4757-aee4-fab91fecff27","Type":"ContainerDied","Data":"d94781681cde882c70f6874348220cacaee7e806d365da098d8bd10ccd25d9c6"} Dec 10 16:00:04 crc kubenswrapper[4755]: I1210 16:00:04.313376 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d94781681cde882c70f6874348220cacaee7e806d365da098d8bd10ccd25d9c6" Dec 10 16:00:04 crc kubenswrapper[4755]: I1210 16:00:04.312458 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29423040-hf4vs" Dec 10 16:00:04 crc kubenswrapper[4755]: I1210 16:00:04.784987 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29422995-knlb5"] Dec 10 16:00:04 crc kubenswrapper[4755]: I1210 16:00:04.794199 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29422995-knlb5"] Dec 10 16:00:05 crc kubenswrapper[4755]: I1210 16:00:05.330667 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tkjw9" event={"ID":"d63e3ee2-9e30-4e71-a323-2bc785ae5af5","Type":"ContainerStarted","Data":"3ac47199e37a1984946537e6fdff09cc4936b09d77a1b1b40c680389e7acc277"} Dec 10 16:00:05 crc kubenswrapper[4755]: I1210 16:00:05.416843 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-267zk"] Dec 10 16:00:05 crc kubenswrapper[4755]: E1210 16:00:05.417319 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1680331d-d48e-4757-aee4-fab91fecff27" containerName="collect-profiles" Dec 10 16:00:05 crc kubenswrapper[4755]: I1210 16:00:05.417338 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="1680331d-d48e-4757-aee4-fab91fecff27" containerName="collect-profiles" Dec 10 16:00:05 crc kubenswrapper[4755]: I1210 16:00:05.417546 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="1680331d-d48e-4757-aee4-fab91fecff27" containerName="collect-profiles" Dec 10 16:00:05 crc kubenswrapper[4755]: I1210 16:00:05.423492 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-267zk" Dec 10 16:00:05 crc kubenswrapper[4755]: I1210 16:00:05.452085 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-267zk"] Dec 10 16:00:05 crc kubenswrapper[4755]: I1210 16:00:05.472102 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tzrc\" (UniqueName: \"kubernetes.io/projected/123e37d4-eb82-4a56-8f62-1c430a740ad0-kube-api-access-6tzrc\") pod \"redhat-marketplace-267zk\" (UID: \"123e37d4-eb82-4a56-8f62-1c430a740ad0\") " pod="openshift-marketplace/redhat-marketplace-267zk" Dec 10 16:00:05 crc kubenswrapper[4755]: I1210 16:00:05.472412 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/123e37d4-eb82-4a56-8f62-1c430a740ad0-catalog-content\") pod \"redhat-marketplace-267zk\" (UID: \"123e37d4-eb82-4a56-8f62-1c430a740ad0\") " pod="openshift-marketplace/redhat-marketplace-267zk" Dec 10 16:00:05 crc kubenswrapper[4755]: I1210 16:00:05.472921 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/123e37d4-eb82-4a56-8f62-1c430a740ad0-utilities\") pod \"redhat-marketplace-267zk\" (UID: \"123e37d4-eb82-4a56-8f62-1c430a740ad0\") " pod="openshift-marketplace/redhat-marketplace-267zk" Dec 10 16:00:05 crc kubenswrapper[4755]: I1210 16:00:05.573812 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tzrc\" (UniqueName: \"kubernetes.io/projected/123e37d4-eb82-4a56-8f62-1c430a740ad0-kube-api-access-6tzrc\") pod \"redhat-marketplace-267zk\" (UID: \"123e37d4-eb82-4a56-8f62-1c430a740ad0\") " pod="openshift-marketplace/redhat-marketplace-267zk" Dec 10 16:00:05 crc kubenswrapper[4755]: I1210 16:00:05.573904 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/123e37d4-eb82-4a56-8f62-1c430a740ad0-catalog-content\") pod \"redhat-marketplace-267zk\" (UID: \"123e37d4-eb82-4a56-8f62-1c430a740ad0\") " pod="openshift-marketplace/redhat-marketplace-267zk" Dec 10 16:00:05 crc kubenswrapper[4755]: I1210 16:00:05.574038 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/123e37d4-eb82-4a56-8f62-1c430a740ad0-utilities\") pod \"redhat-marketplace-267zk\" (UID: \"123e37d4-eb82-4a56-8f62-1c430a740ad0\") " pod="openshift-marketplace/redhat-marketplace-267zk" Dec 10 16:00:05 crc kubenswrapper[4755]: I1210 16:00:05.574430 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/123e37d4-eb82-4a56-8f62-1c430a740ad0-catalog-content\") pod \"redhat-marketplace-267zk\" (UID: \"123e37d4-eb82-4a56-8f62-1c430a740ad0\") " pod="openshift-marketplace/redhat-marketplace-267zk" Dec 10 16:00:05 crc kubenswrapper[4755]: I1210 16:00:05.574441 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/123e37d4-eb82-4a56-8f62-1c430a740ad0-utilities\") pod \"redhat-marketplace-267zk\" (UID: \"123e37d4-eb82-4a56-8f62-1c430a740ad0\") " pod="openshift-marketplace/redhat-marketplace-267zk" Dec 10 16:00:05 crc kubenswrapper[4755]: I1210 16:00:05.595305 4755 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-6tzrc\" (UniqueName: \"kubernetes.io/projected/123e37d4-eb82-4a56-8f62-1c430a740ad0-kube-api-access-6tzrc\") pod \"redhat-marketplace-267zk\" (UID: \"123e37d4-eb82-4a56-8f62-1c430a740ad0\") " pod="openshift-marketplace/redhat-marketplace-267zk" Dec 10 16:00:05 crc kubenswrapper[4755]: I1210 16:00:05.753964 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-267zk" Dec 10 16:00:05 crc kubenswrapper[4755]: I1210 16:00:05.793857 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d212083-766e-4b88-ba08-8570f05f6c94" path="/var/lib/kubelet/pods/8d212083-766e-4b88-ba08-8570f05f6c94/volumes" Dec 10 16:00:06 crc kubenswrapper[4755]: W1210 16:00:06.268338 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod123e37d4_eb82_4a56_8f62_1c430a740ad0.slice/crio-de7a50f92a2f90439df578aeb59c7461c75809b787c3e7c7136741ce215fbbbd WatchSource:0}: Error finding container de7a50f92a2f90439df578aeb59c7461c75809b787c3e7c7136741ce215fbbbd: Status 404 returned error can't find the container with id de7a50f92a2f90439df578aeb59c7461c75809b787c3e7c7136741ce215fbbbd Dec 10 16:00:06 crc kubenswrapper[4755]: I1210 16:00:06.270379 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-267zk"] Dec 10 16:00:06 crc kubenswrapper[4755]: I1210 16:00:06.341865 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-267zk" event={"ID":"123e37d4-eb82-4a56-8f62-1c430a740ad0","Type":"ContainerStarted","Data":"de7a50f92a2f90439df578aeb59c7461c75809b787c3e7c7136741ce215fbbbd"} Dec 10 16:00:06 crc kubenswrapper[4755]: I1210 16:00:06.345135 4755 generic.go:334] "Generic (PLEG): container finished" podID="d63e3ee2-9e30-4e71-a323-2bc785ae5af5" containerID="3ac47199e37a1984946537e6fdff09cc4936b09d77a1b1b40c680389e7acc277" exitCode=0 Dec 10 16:00:06 crc kubenswrapper[4755]: I1210 16:00:06.345174 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tkjw9" event={"ID":"d63e3ee2-9e30-4e71-a323-2bc785ae5af5","Type":"ContainerDied","Data":"3ac47199e37a1984946537e6fdff09cc4936b09d77a1b1b40c680389e7acc277"} Dec 10 16:00:06 crc kubenswrapper[4755]: I1210 16:00:06.990155 4755 scope.go:117] "RemoveContainer" containerID="5a8f3ef990acd86ea0a2ed264f1039a0a313e1e53fa3e09e93fb45cde45f47fd" Dec 10 16:00:07 crc kubenswrapper[4755]: I1210 16:00:07.358567 4755 generic.go:334] "Generic (PLEG): container finished" podID="123e37d4-eb82-4a56-8f62-1c430a740ad0" containerID="9cb96d6218b0861b67f66ff94f7e0eed91a4177d8cea7f88323768ba98bbe4ed" exitCode=0 Dec 10 16:00:07 crc kubenswrapper[4755]: I1210 16:00:07.358803 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-267zk" event={"ID":"123e37d4-eb82-4a56-8f62-1c430a740ad0","Type":"ContainerDied","Data":"9cb96d6218b0861b67f66ff94f7e0eed91a4177d8cea7f88323768ba98bbe4ed"} Dec 10 16:00:08 crc kubenswrapper[4755]: E1210 16:00:08.761448 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:00:09 crc kubenswrapper[4755]: I1210 
16:00:09.380184 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-267zk" event={"ID":"123e37d4-eb82-4a56-8f62-1c430a740ad0","Type":"ContainerStarted","Data":"e3b4c523aab95383f1a567726d672663ac883ccfac7d8b7b08a21c6718d49c70"} Dec 10 16:00:10 crc kubenswrapper[4755]: I1210 16:00:10.395561 4755 generic.go:334] "Generic (PLEG): container finished" podID="123e37d4-eb82-4a56-8f62-1c430a740ad0" containerID="e3b4c523aab95383f1a567726d672663ac883ccfac7d8b7b08a21c6718d49c70" exitCode=0 Dec 10 16:00:10 crc kubenswrapper[4755]: I1210 16:00:10.395865 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-267zk" event={"ID":"123e37d4-eb82-4a56-8f62-1c430a740ad0","Type":"ContainerDied","Data":"e3b4c523aab95383f1a567726d672663ac883ccfac7d8b7b08a21c6718d49c70"} Dec 10 16:00:11 crc kubenswrapper[4755]: I1210 16:00:11.407400 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-267zk" event={"ID":"123e37d4-eb82-4a56-8f62-1c430a740ad0","Type":"ContainerStarted","Data":"cf2bb49164375b93bbcd93366bf919c820ca7420e03c405fef58656d1fa3a3c3"} Dec 10 16:00:11 crc kubenswrapper[4755]: I1210 16:00:11.416038 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tkjw9" event={"ID":"d63e3ee2-9e30-4e71-a323-2bc785ae5af5","Type":"ContainerStarted","Data":"0c72c5ced6f6cb8bda8d1cd28d457185957013e182bed0526a89f2dd1c836535"} Dec 10 16:00:11 crc kubenswrapper[4755]: I1210 16:00:11.431411 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-267zk" podStartSLOduration=2.647304354 podStartE2EDuration="6.431383602s" podCreationTimestamp="2025-12-10 16:00:05 +0000 UTC" firstStartedPulling="2025-12-10 16:00:07.362216033 +0000 UTC m=+2203.963099685" lastFinishedPulling="2025-12-10 16:00:11.146295301 +0000 UTC m=+2207.747178933" observedRunningTime="2025-12-10 16:00:11.426017016 +0000 UTC m=+2208.026900658" watchObservedRunningTime="2025-12-10 16:00:11.431383602 +0000 UTC m=+2208.032267234" Dec 10 16:00:11 crc kubenswrapper[4755]: I1210 16:00:11.452929 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tkjw9" podStartSLOduration=3.283923585 podStartE2EDuration="10.452912088s" podCreationTimestamp="2025-12-10 16:00:01 +0000 UTC" firstStartedPulling="2025-12-10 16:00:03.303271461 +0000 UTC m=+2199.904155093" lastFinishedPulling="2025-12-10 16:00:10.472259954 +0000 UTC m=+2207.073143596" observedRunningTime="2025-12-10 16:00:11.449335261 +0000 UTC m=+2208.050218893" watchObservedRunningTime="2025-12-10 16:00:11.452912088 +0000 UTC m=+2208.053795720" Dec 10 16:00:11 crc kubenswrapper[4755]: I1210 16:00:11.567233 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tkjw9" Dec 10 16:00:11 crc kubenswrapper[4755]: I1210 16:00:11.567689 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tkjw9" Dec 10 16:00:12 crc kubenswrapper[4755]: I1210 16:00:12.610888 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-tkjw9" podUID="d63e3ee2-9e30-4e71-a323-2bc785ae5af5" containerName="registry-server" probeResult="failure" output=< Dec 10 16:00:12 crc kubenswrapper[4755]: timeout: failed to connect service ":50051" within 1s Dec 10 16:00:12 crc 
kubenswrapper[4755]: > Dec 10 16:00:15 crc kubenswrapper[4755]: I1210 16:00:15.754904 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-267zk" Dec 10 16:00:15 crc kubenswrapper[4755]: I1210 16:00:15.755489 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-267zk" Dec 10 16:00:15 crc kubenswrapper[4755]: E1210 16:00:15.759393 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:00:15 crc kubenswrapper[4755]: I1210 16:00:15.824857 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-267zk" Dec 10 16:00:16 crc kubenswrapper[4755]: I1210 16:00:16.524599 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-267zk" Dec 10 16:00:17 crc kubenswrapper[4755]: I1210 16:00:17.017570 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-267zk"] Dec 10 16:00:18 crc kubenswrapper[4755]: I1210 16:00:18.493765 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-267zk" podUID="123e37d4-eb82-4a56-8f62-1c430a740ad0" containerName="registry-server" containerID="cri-o://cf2bb49164375b93bbcd93366bf919c820ca7420e03c405fef58656d1fa3a3c3" gracePeriod=2 Dec 10 16:00:19 crc kubenswrapper[4755]: I1210 16:00:19.020233 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-267zk" Dec 10 16:00:19 crc kubenswrapper[4755]: I1210 16:00:19.176636 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/123e37d4-eb82-4a56-8f62-1c430a740ad0-utilities\") pod \"123e37d4-eb82-4a56-8f62-1c430a740ad0\" (UID: \"123e37d4-eb82-4a56-8f62-1c430a740ad0\") " Dec 10 16:00:19 crc kubenswrapper[4755]: I1210 16:00:19.176738 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/123e37d4-eb82-4a56-8f62-1c430a740ad0-catalog-content\") pod \"123e37d4-eb82-4a56-8f62-1c430a740ad0\" (UID: \"123e37d4-eb82-4a56-8f62-1c430a740ad0\") " Dec 10 16:00:19 crc kubenswrapper[4755]: I1210 16:00:19.176874 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6tzrc\" (UniqueName: \"kubernetes.io/projected/123e37d4-eb82-4a56-8f62-1c430a740ad0-kube-api-access-6tzrc\") pod \"123e37d4-eb82-4a56-8f62-1c430a740ad0\" (UID: \"123e37d4-eb82-4a56-8f62-1c430a740ad0\") " Dec 10 16:00:19 crc kubenswrapper[4755]: I1210 16:00:19.178355 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/123e37d4-eb82-4a56-8f62-1c430a740ad0-utilities" (OuterVolumeSpecName: "utilities") pod "123e37d4-eb82-4a56-8f62-1c430a740ad0" (UID: "123e37d4-eb82-4a56-8f62-1c430a740ad0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 16:00:19 crc kubenswrapper[4755]: I1210 16:00:19.184029 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/123e37d4-eb82-4a56-8f62-1c430a740ad0-kube-api-access-6tzrc" (OuterVolumeSpecName: "kube-api-access-6tzrc") pod "123e37d4-eb82-4a56-8f62-1c430a740ad0" (UID: "123e37d4-eb82-4a56-8f62-1c430a740ad0"). InnerVolumeSpecName "kube-api-access-6tzrc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 16:00:19 crc kubenswrapper[4755]: I1210 16:00:19.207132 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/123e37d4-eb82-4a56-8f62-1c430a740ad0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "123e37d4-eb82-4a56-8f62-1c430a740ad0" (UID: "123e37d4-eb82-4a56-8f62-1c430a740ad0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 16:00:19 crc kubenswrapper[4755]: I1210 16:00:19.280235 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6tzrc\" (UniqueName: \"kubernetes.io/projected/123e37d4-eb82-4a56-8f62-1c430a740ad0-kube-api-access-6tzrc\") on node \"crc\" DevicePath \"\"" Dec 10 16:00:19 crc kubenswrapper[4755]: I1210 16:00:19.280295 4755 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/123e37d4-eb82-4a56-8f62-1c430a740ad0-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 16:00:19 crc kubenswrapper[4755]: I1210 16:00:19.280313 4755 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/123e37d4-eb82-4a56-8f62-1c430a740ad0-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 16:00:19 crc kubenswrapper[4755]: I1210 16:00:19.505862 4755 generic.go:334] "Generic (PLEG): container finished" podID="123e37d4-eb82-4a56-8f62-1c430a740ad0" containerID="cf2bb49164375b93bbcd93366bf919c820ca7420e03c405fef58656d1fa3a3c3" exitCode=0 Dec 10 16:00:19 crc kubenswrapper[4755]: I1210 16:00:19.505895 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-267zk" Dec 10 16:00:19 crc kubenswrapper[4755]: I1210 16:00:19.505916 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-267zk" event={"ID":"123e37d4-eb82-4a56-8f62-1c430a740ad0","Type":"ContainerDied","Data":"cf2bb49164375b93bbcd93366bf919c820ca7420e03c405fef58656d1fa3a3c3"} Dec 10 16:00:19 crc kubenswrapper[4755]: I1210 16:00:19.505956 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-267zk" event={"ID":"123e37d4-eb82-4a56-8f62-1c430a740ad0","Type":"ContainerDied","Data":"de7a50f92a2f90439df578aeb59c7461c75809b787c3e7c7136741ce215fbbbd"} Dec 10 16:00:19 crc kubenswrapper[4755]: I1210 16:00:19.505990 4755 scope.go:117] "RemoveContainer" containerID="cf2bb49164375b93bbcd93366bf919c820ca7420e03c405fef58656d1fa3a3c3" Dec 10 16:00:19 crc kubenswrapper[4755]: I1210 16:00:19.528443 4755 scope.go:117] "RemoveContainer" containerID="e3b4c523aab95383f1a567726d672663ac883ccfac7d8b7b08a21c6718d49c70" Dec 10 16:00:19 crc kubenswrapper[4755]: I1210 16:00:19.547267 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-267zk"] Dec 10 16:00:19 crc kubenswrapper[4755]: I1210 16:00:19.559369 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-267zk"] Dec 10 16:00:19 crc kubenswrapper[4755]: I1210 16:00:19.563287 4755 scope.go:117] "RemoveContainer" containerID="9cb96d6218b0861b67f66ff94f7e0eed91a4177d8cea7f88323768ba98bbe4ed" Dec 10 16:00:19 crc kubenswrapper[4755]: I1210 16:00:19.607629 4755 scope.go:117] "RemoveContainer" containerID="cf2bb49164375b93bbcd93366bf919c820ca7420e03c405fef58656d1fa3a3c3" Dec 10 16:00:19 crc kubenswrapper[4755]: E1210 16:00:19.608205 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf2bb49164375b93bbcd93366bf919c820ca7420e03c405fef58656d1fa3a3c3\": container with ID starting with cf2bb49164375b93bbcd93366bf919c820ca7420e03c405fef58656d1fa3a3c3 not found: ID does not exist" containerID="cf2bb49164375b93bbcd93366bf919c820ca7420e03c405fef58656d1fa3a3c3" Dec 10 16:00:19 crc kubenswrapper[4755]: I1210 16:00:19.608254 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf2bb49164375b93bbcd93366bf919c820ca7420e03c405fef58656d1fa3a3c3"} err="failed to get container status \"cf2bb49164375b93bbcd93366bf919c820ca7420e03c405fef58656d1fa3a3c3\": rpc error: code = NotFound desc = could not find container \"cf2bb49164375b93bbcd93366bf919c820ca7420e03c405fef58656d1fa3a3c3\": container with ID starting with cf2bb49164375b93bbcd93366bf919c820ca7420e03c405fef58656d1fa3a3c3 not found: ID does not exist" Dec 10 16:00:19 crc kubenswrapper[4755]: I1210 16:00:19.608283 4755 scope.go:117] "RemoveContainer" containerID="e3b4c523aab95383f1a567726d672663ac883ccfac7d8b7b08a21c6718d49c70" Dec 10 16:00:19 crc kubenswrapper[4755]: E1210 16:00:19.608589 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3b4c523aab95383f1a567726d672663ac883ccfac7d8b7b08a21c6718d49c70\": container with ID starting with e3b4c523aab95383f1a567726d672663ac883ccfac7d8b7b08a21c6718d49c70 not found: ID does not exist" containerID="e3b4c523aab95383f1a567726d672663ac883ccfac7d8b7b08a21c6718d49c70" Dec 10 16:00:19 crc kubenswrapper[4755]: I1210 16:00:19.608619 4755 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3b4c523aab95383f1a567726d672663ac883ccfac7d8b7b08a21c6718d49c70"} err="failed to get container status \"e3b4c523aab95383f1a567726d672663ac883ccfac7d8b7b08a21c6718d49c70\": rpc error: code = NotFound desc = could not find container \"e3b4c523aab95383f1a567726d672663ac883ccfac7d8b7b08a21c6718d49c70\": container with ID starting with e3b4c523aab95383f1a567726d672663ac883ccfac7d8b7b08a21c6718d49c70 not found: ID does not exist" Dec 10 16:00:19 crc kubenswrapper[4755]: I1210 16:00:19.608634 4755 scope.go:117] "RemoveContainer" containerID="9cb96d6218b0861b67f66ff94f7e0eed91a4177d8cea7f88323768ba98bbe4ed" Dec 10 16:00:19 crc kubenswrapper[4755]: E1210 16:00:19.609119 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cb96d6218b0861b67f66ff94f7e0eed91a4177d8cea7f88323768ba98bbe4ed\": container with ID starting with 9cb96d6218b0861b67f66ff94f7e0eed91a4177d8cea7f88323768ba98bbe4ed not found: ID does not exist" containerID="9cb96d6218b0861b67f66ff94f7e0eed91a4177d8cea7f88323768ba98bbe4ed" Dec 10 16:00:19 crc kubenswrapper[4755]: I1210 16:00:19.609142 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cb96d6218b0861b67f66ff94f7e0eed91a4177d8cea7f88323768ba98bbe4ed"} err="failed to get container status \"9cb96d6218b0861b67f66ff94f7e0eed91a4177d8cea7f88323768ba98bbe4ed\": rpc error: code = NotFound desc = could not find container \"9cb96d6218b0861b67f66ff94f7e0eed91a4177d8cea7f88323768ba98bbe4ed\": container with ID starting with 9cb96d6218b0861b67f66ff94f7e0eed91a4177d8cea7f88323768ba98bbe4ed not found: ID does not exist" Dec 10 16:00:19 crc kubenswrapper[4755]: I1210 16:00:19.769934 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="123e37d4-eb82-4a56-8f62-1c430a740ad0" path="/var/lib/kubelet/pods/123e37d4-eb82-4a56-8f62-1c430a740ad0/volumes" Dec 10 16:00:21 crc kubenswrapper[4755]: I1210 16:00:21.627881 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tkjw9" Dec 10 16:00:21 crc kubenswrapper[4755]: I1210 16:00:21.721292 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tkjw9" Dec 10 16:00:21 crc kubenswrapper[4755]: E1210 16:00:21.760067 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:00:22 crc kubenswrapper[4755]: I1210 16:00:22.209884 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tkjw9"] Dec 10 16:00:23 crc kubenswrapper[4755]: I1210 16:00:23.540223 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tkjw9" podUID="d63e3ee2-9e30-4e71-a323-2bc785ae5af5" containerName="registry-server" containerID="cri-o://0c72c5ced6f6cb8bda8d1cd28d457185957013e182bed0526a89f2dd1c836535" gracePeriod=2 Dec 10 16:00:24 crc kubenswrapper[4755]: I1210 16:00:24.101299 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tkjw9" Dec 10 16:00:24 crc kubenswrapper[4755]: I1210 16:00:24.285013 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d63e3ee2-9e30-4e71-a323-2bc785ae5af5-catalog-content\") pod \"d63e3ee2-9e30-4e71-a323-2bc785ae5af5\" (UID: \"d63e3ee2-9e30-4e71-a323-2bc785ae5af5\") " Dec 10 16:00:24 crc kubenswrapper[4755]: I1210 16:00:24.285134 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d63e3ee2-9e30-4e71-a323-2bc785ae5af5-utilities\") pod \"d63e3ee2-9e30-4e71-a323-2bc785ae5af5\" (UID: \"d63e3ee2-9e30-4e71-a323-2bc785ae5af5\") " Dec 10 16:00:24 crc kubenswrapper[4755]: I1210 16:00:24.285204 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dpmp\" (UniqueName: \"kubernetes.io/projected/d63e3ee2-9e30-4e71-a323-2bc785ae5af5-kube-api-access-6dpmp\") pod \"d63e3ee2-9e30-4e71-a323-2bc785ae5af5\" (UID: \"d63e3ee2-9e30-4e71-a323-2bc785ae5af5\") " Dec 10 16:00:24 crc kubenswrapper[4755]: I1210 16:00:24.287129 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d63e3ee2-9e30-4e71-a323-2bc785ae5af5-utilities" (OuterVolumeSpecName: "utilities") pod "d63e3ee2-9e30-4e71-a323-2bc785ae5af5" (UID: "d63e3ee2-9e30-4e71-a323-2bc785ae5af5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 16:00:24 crc kubenswrapper[4755]: I1210 16:00:24.292038 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d63e3ee2-9e30-4e71-a323-2bc785ae5af5-kube-api-access-6dpmp" (OuterVolumeSpecName: "kube-api-access-6dpmp") pod "d63e3ee2-9e30-4e71-a323-2bc785ae5af5" (UID: "d63e3ee2-9e30-4e71-a323-2bc785ae5af5"). InnerVolumeSpecName "kube-api-access-6dpmp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 16:00:24 crc kubenswrapper[4755]: I1210 16:00:24.357531 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d63e3ee2-9e30-4e71-a323-2bc785ae5af5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d63e3ee2-9e30-4e71-a323-2bc785ae5af5" (UID: "d63e3ee2-9e30-4e71-a323-2bc785ae5af5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 16:00:24 crc kubenswrapper[4755]: I1210 16:00:24.387435 4755 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d63e3ee2-9e30-4e71-a323-2bc785ae5af5-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 16:00:24 crc kubenswrapper[4755]: I1210 16:00:24.387476 4755 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d63e3ee2-9e30-4e71-a323-2bc785ae5af5-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 16:00:24 crc kubenswrapper[4755]: I1210 16:00:24.387489 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6dpmp\" (UniqueName: \"kubernetes.io/projected/d63e3ee2-9e30-4e71-a323-2bc785ae5af5-kube-api-access-6dpmp\") on node \"crc\" DevicePath \"\"" Dec 10 16:00:24 crc kubenswrapper[4755]: I1210 16:00:24.551220 4755 generic.go:334] "Generic (PLEG): container finished" podID="d63e3ee2-9e30-4e71-a323-2bc785ae5af5" containerID="0c72c5ced6f6cb8bda8d1cd28d457185957013e182bed0526a89f2dd1c836535" exitCode=0 Dec 10 16:00:24 crc kubenswrapper[4755]: I1210 16:00:24.551309 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tkjw9" event={"ID":"d63e3ee2-9e30-4e71-a323-2bc785ae5af5","Type":"ContainerDied","Data":"0c72c5ced6f6cb8bda8d1cd28d457185957013e182bed0526a89f2dd1c836535"} Dec 10 16:00:24 crc kubenswrapper[4755]: I1210 16:00:24.551360 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tkjw9" Dec 10 16:00:24 crc kubenswrapper[4755]: I1210 16:00:24.552289 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tkjw9" event={"ID":"d63e3ee2-9e30-4e71-a323-2bc785ae5af5","Type":"ContainerDied","Data":"eb1a24035cf57bf833f59af2240afe7cbcab51d46022909589baffe1c10acda1"} Dec 10 16:00:24 crc kubenswrapper[4755]: I1210 16:00:24.552386 4755 scope.go:117] "RemoveContainer" containerID="0c72c5ced6f6cb8bda8d1cd28d457185957013e182bed0526a89f2dd1c836535" Dec 10 16:00:24 crc kubenswrapper[4755]: I1210 16:00:24.575266 4755 scope.go:117] "RemoveContainer" containerID="3ac47199e37a1984946537e6fdff09cc4936b09d77a1b1b40c680389e7acc277" Dec 10 16:00:24 crc kubenswrapper[4755]: I1210 16:00:24.610411 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tkjw9"] Dec 10 16:00:24 crc kubenswrapper[4755]: I1210 16:00:24.629713 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tkjw9"] Dec 10 16:00:24 crc kubenswrapper[4755]: I1210 16:00:24.634776 4755 scope.go:117] "RemoveContainer" containerID="89cf97fd34591d3652382cd919b7e2ab93407ebc6992ca80ddce63d05cb4faf7" Dec 10 16:00:24 crc kubenswrapper[4755]: I1210 16:00:24.677151 4755 scope.go:117] "RemoveContainer" containerID="0c72c5ced6f6cb8bda8d1cd28d457185957013e182bed0526a89f2dd1c836535" Dec 10 16:00:24 crc kubenswrapper[4755]: E1210 16:00:24.677636 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c72c5ced6f6cb8bda8d1cd28d457185957013e182bed0526a89f2dd1c836535\": container with ID starting with 0c72c5ced6f6cb8bda8d1cd28d457185957013e182bed0526a89f2dd1c836535 not found: ID does not exist" containerID="0c72c5ced6f6cb8bda8d1cd28d457185957013e182bed0526a89f2dd1c836535" Dec 10 16:00:24 crc kubenswrapper[4755]: I1210 16:00:24.677691 
4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c72c5ced6f6cb8bda8d1cd28d457185957013e182bed0526a89f2dd1c836535"} err="failed to get container status \"0c72c5ced6f6cb8bda8d1cd28d457185957013e182bed0526a89f2dd1c836535\": rpc error: code = NotFound desc = could not find container \"0c72c5ced6f6cb8bda8d1cd28d457185957013e182bed0526a89f2dd1c836535\": container with ID starting with 0c72c5ced6f6cb8bda8d1cd28d457185957013e182bed0526a89f2dd1c836535 not found: ID does not exist" Dec 10 16:00:24 crc kubenswrapper[4755]: I1210 16:00:24.677725 4755 scope.go:117] "RemoveContainer" containerID="3ac47199e37a1984946537e6fdff09cc4936b09d77a1b1b40c680389e7acc277" Dec 10 16:00:24 crc kubenswrapper[4755]: E1210 16:00:24.678009 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ac47199e37a1984946537e6fdff09cc4936b09d77a1b1b40c680389e7acc277\": container with ID starting with 3ac47199e37a1984946537e6fdff09cc4936b09d77a1b1b40c680389e7acc277 not found: ID does not exist" containerID="3ac47199e37a1984946537e6fdff09cc4936b09d77a1b1b40c680389e7acc277" Dec 10 16:00:24 crc kubenswrapper[4755]: I1210 16:00:24.678036 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ac47199e37a1984946537e6fdff09cc4936b09d77a1b1b40c680389e7acc277"} err="failed to get container status \"3ac47199e37a1984946537e6fdff09cc4936b09d77a1b1b40c680389e7acc277\": rpc error: code = NotFound desc = could not find container \"3ac47199e37a1984946537e6fdff09cc4936b09d77a1b1b40c680389e7acc277\": container with ID starting with 3ac47199e37a1984946537e6fdff09cc4936b09d77a1b1b40c680389e7acc277 not found: ID does not exist" Dec 10 16:00:24 crc kubenswrapper[4755]: I1210 16:00:24.678052 4755 scope.go:117] "RemoveContainer" containerID="89cf97fd34591d3652382cd919b7e2ab93407ebc6992ca80ddce63d05cb4faf7" Dec 10 16:00:24 crc kubenswrapper[4755]: E1210 16:00:24.678323 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89cf97fd34591d3652382cd919b7e2ab93407ebc6992ca80ddce63d05cb4faf7\": container with ID starting with 89cf97fd34591d3652382cd919b7e2ab93407ebc6992ca80ddce63d05cb4faf7 not found: ID does not exist" containerID="89cf97fd34591d3652382cd919b7e2ab93407ebc6992ca80ddce63d05cb4faf7" Dec 10 16:00:24 crc kubenswrapper[4755]: I1210 16:00:24.678353 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89cf97fd34591d3652382cd919b7e2ab93407ebc6992ca80ddce63d05cb4faf7"} err="failed to get container status \"89cf97fd34591d3652382cd919b7e2ab93407ebc6992ca80ddce63d05cb4faf7\": rpc error: code = NotFound desc = could not find container \"89cf97fd34591d3652382cd919b7e2ab93407ebc6992ca80ddce63d05cb4faf7\": container with ID starting with 89cf97fd34591d3652382cd919b7e2ab93407ebc6992ca80ddce63d05cb4faf7 not found: ID does not exist" Dec 10 16:00:25 crc kubenswrapper[4755]: I1210 16:00:25.024799 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jl54z"] Dec 10 16:00:25 crc kubenswrapper[4755]: E1210 16:00:25.025488 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="123e37d4-eb82-4a56-8f62-1c430a740ad0" containerName="extract-utilities" Dec 10 16:00:25 crc kubenswrapper[4755]: I1210 16:00:25.025529 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="123e37d4-eb82-4a56-8f62-1c430a740ad0" 
containerName="extract-utilities" Dec 10 16:00:25 crc kubenswrapper[4755]: E1210 16:00:25.025570 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d63e3ee2-9e30-4e71-a323-2bc785ae5af5" containerName="registry-server" Dec 10 16:00:25 crc kubenswrapper[4755]: I1210 16:00:25.025584 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="d63e3ee2-9e30-4e71-a323-2bc785ae5af5" containerName="registry-server" Dec 10 16:00:25 crc kubenswrapper[4755]: E1210 16:00:25.025608 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="123e37d4-eb82-4a56-8f62-1c430a740ad0" containerName="registry-server" Dec 10 16:00:25 crc kubenswrapper[4755]: I1210 16:00:25.025619 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="123e37d4-eb82-4a56-8f62-1c430a740ad0" containerName="registry-server" Dec 10 16:00:25 crc kubenswrapper[4755]: E1210 16:00:25.025649 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d63e3ee2-9e30-4e71-a323-2bc785ae5af5" containerName="extract-utilities" Dec 10 16:00:25 crc kubenswrapper[4755]: I1210 16:00:25.025661 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="d63e3ee2-9e30-4e71-a323-2bc785ae5af5" containerName="extract-utilities" Dec 10 16:00:25 crc kubenswrapper[4755]: E1210 16:00:25.025690 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="123e37d4-eb82-4a56-8f62-1c430a740ad0" containerName="extract-content" Dec 10 16:00:25 crc kubenswrapper[4755]: I1210 16:00:25.025701 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="123e37d4-eb82-4a56-8f62-1c430a740ad0" containerName="extract-content" Dec 10 16:00:25 crc kubenswrapper[4755]: E1210 16:00:25.025718 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d63e3ee2-9e30-4e71-a323-2bc785ae5af5" containerName="extract-content" Dec 10 16:00:25 crc kubenswrapper[4755]: I1210 16:00:25.025729 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="d63e3ee2-9e30-4e71-a323-2bc785ae5af5" containerName="extract-content" Dec 10 16:00:25 crc kubenswrapper[4755]: I1210 16:00:25.026126 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="d63e3ee2-9e30-4e71-a323-2bc785ae5af5" containerName="registry-server" Dec 10 16:00:25 crc kubenswrapper[4755]: I1210 16:00:25.026159 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="123e37d4-eb82-4a56-8f62-1c430a740ad0" containerName="registry-server" Dec 10 16:00:25 crc kubenswrapper[4755]: I1210 16:00:25.029058 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jl54z" Dec 10 16:00:25 crc kubenswrapper[4755]: I1210 16:00:25.040725 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jl54z"] Dec 10 16:00:25 crc kubenswrapper[4755]: I1210 16:00:25.212487 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d5494b1-ea23-48b5-8115-8eddd2ebfc6b-catalog-content\") pod \"certified-operators-jl54z\" (UID: \"6d5494b1-ea23-48b5-8115-8eddd2ebfc6b\") " pod="openshift-marketplace/certified-operators-jl54z" Dec 10 16:00:25 crc kubenswrapper[4755]: I1210 16:00:25.212616 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76szw\" (UniqueName: \"kubernetes.io/projected/6d5494b1-ea23-48b5-8115-8eddd2ebfc6b-kube-api-access-76szw\") pod \"certified-operators-jl54z\" (UID: \"6d5494b1-ea23-48b5-8115-8eddd2ebfc6b\") " pod="openshift-marketplace/certified-operators-jl54z" Dec 10 16:00:25 crc kubenswrapper[4755]: I1210 16:00:25.212728 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d5494b1-ea23-48b5-8115-8eddd2ebfc6b-utilities\") pod \"certified-operators-jl54z\" (UID: \"6d5494b1-ea23-48b5-8115-8eddd2ebfc6b\") " pod="openshift-marketplace/certified-operators-jl54z" Dec 10 16:00:25 crc kubenswrapper[4755]: I1210 16:00:25.314201 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d5494b1-ea23-48b5-8115-8eddd2ebfc6b-catalog-content\") pod \"certified-operators-jl54z\" (UID: \"6d5494b1-ea23-48b5-8115-8eddd2ebfc6b\") " pod="openshift-marketplace/certified-operators-jl54z" Dec 10 16:00:25 crc kubenswrapper[4755]: I1210 16:00:25.314295 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76szw\" (UniqueName: \"kubernetes.io/projected/6d5494b1-ea23-48b5-8115-8eddd2ebfc6b-kube-api-access-76szw\") pod \"certified-operators-jl54z\" (UID: \"6d5494b1-ea23-48b5-8115-8eddd2ebfc6b\") " pod="openshift-marketplace/certified-operators-jl54z" Dec 10 16:00:25 crc kubenswrapper[4755]: I1210 16:00:25.314379 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d5494b1-ea23-48b5-8115-8eddd2ebfc6b-utilities\") pod \"certified-operators-jl54z\" (UID: \"6d5494b1-ea23-48b5-8115-8eddd2ebfc6b\") " pod="openshift-marketplace/certified-operators-jl54z" Dec 10 16:00:25 crc kubenswrapper[4755]: I1210 16:00:25.314728 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d5494b1-ea23-48b5-8115-8eddd2ebfc6b-catalog-content\") pod \"certified-operators-jl54z\" (UID: \"6d5494b1-ea23-48b5-8115-8eddd2ebfc6b\") " pod="openshift-marketplace/certified-operators-jl54z" Dec 10 16:00:25 crc kubenswrapper[4755]: I1210 16:00:25.314806 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d5494b1-ea23-48b5-8115-8eddd2ebfc6b-utilities\") pod \"certified-operators-jl54z\" (UID: \"6d5494b1-ea23-48b5-8115-8eddd2ebfc6b\") " pod="openshift-marketplace/certified-operators-jl54z" Dec 10 16:00:25 crc kubenswrapper[4755]: I1210 16:00:25.340456 4755 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-76szw\" (UniqueName: \"kubernetes.io/projected/6d5494b1-ea23-48b5-8115-8eddd2ebfc6b-kube-api-access-76szw\") pod \"certified-operators-jl54z\" (UID: \"6d5494b1-ea23-48b5-8115-8eddd2ebfc6b\") " pod="openshift-marketplace/certified-operators-jl54z" Dec 10 16:00:25 crc kubenswrapper[4755]: I1210 16:00:25.357401 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jl54z" Dec 10 16:00:25 crc kubenswrapper[4755]: I1210 16:00:25.773687 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d63e3ee2-9e30-4e71-a323-2bc785ae5af5" path="/var/lib/kubelet/pods/d63e3ee2-9e30-4e71-a323-2bc785ae5af5/volumes" Dec 10 16:00:26 crc kubenswrapper[4755]: I1210 16:00:26.010568 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jl54z"] Dec 10 16:00:26 crc kubenswrapper[4755]: I1210 16:00:26.619715 4755 generic.go:334] "Generic (PLEG): container finished" podID="6d5494b1-ea23-48b5-8115-8eddd2ebfc6b" containerID="77941bca15527411aadfa9bd0396c5372e0e88558aab3f65f729766b1114be38" exitCode=0 Dec 10 16:00:26 crc kubenswrapper[4755]: I1210 16:00:26.619835 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jl54z" event={"ID":"6d5494b1-ea23-48b5-8115-8eddd2ebfc6b","Type":"ContainerDied","Data":"77941bca15527411aadfa9bd0396c5372e0e88558aab3f65f729766b1114be38"} Dec 10 16:00:26 crc kubenswrapper[4755]: I1210 16:00:26.620069 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jl54z" event={"ID":"6d5494b1-ea23-48b5-8115-8eddd2ebfc6b","Type":"ContainerStarted","Data":"8d6cbb22c3dbf843695252cc18ebc5bd40f91ab888bf4731368c8fa24cffdfed"} Dec 10 16:00:28 crc kubenswrapper[4755]: I1210 16:00:28.665076 4755 generic.go:334] "Generic (PLEG): container finished" podID="6d5494b1-ea23-48b5-8115-8eddd2ebfc6b" containerID="74961ed64725c944785c1aea9f2bab6df316c1c72d1acbc71fddbf56625f5f49" exitCode=0 Dec 10 16:00:28 crc kubenswrapper[4755]: I1210 16:00:28.665199 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jl54z" event={"ID":"6d5494b1-ea23-48b5-8115-8eddd2ebfc6b","Type":"ContainerDied","Data":"74961ed64725c944785c1aea9f2bab6df316c1c72d1acbc71fddbf56625f5f49"} Dec 10 16:00:29 crc kubenswrapper[4755]: I1210 16:00:29.676620 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jl54z" event={"ID":"6d5494b1-ea23-48b5-8115-8eddd2ebfc6b","Type":"ContainerStarted","Data":"a6f57c39d88337e094f91d5830d7c5d903e64449f25d98a83e74c797d518f4e8"} Dec 10 16:00:29 crc kubenswrapper[4755]: I1210 16:00:29.695711 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jl54z" podStartSLOduration=3.040681698 podStartE2EDuration="5.695694716s" podCreationTimestamp="2025-12-10 16:00:24 +0000 UTC" firstStartedPulling="2025-12-10 16:00:26.62190234 +0000 UTC m=+2223.222785972" lastFinishedPulling="2025-12-10 16:00:29.276915358 +0000 UTC m=+2225.877798990" observedRunningTime="2025-12-10 16:00:29.691275815 +0000 UTC m=+2226.292159447" watchObservedRunningTime="2025-12-10 16:00:29.695694716 +0000 UTC m=+2226.296578348" Dec 10 16:00:30 crc kubenswrapper[4755]: E1210 16:00:30.760172 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:00:33 crc kubenswrapper[4755]: E1210 16:00:33.768154 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:00:35 crc kubenswrapper[4755]: I1210 16:00:35.358108 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jl54z" Dec 10 16:00:35 crc kubenswrapper[4755]: I1210 16:00:35.358502 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jl54z" Dec 10 16:00:35 crc kubenswrapper[4755]: I1210 16:00:35.407785 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jl54z" Dec 10 16:00:35 crc kubenswrapper[4755]: I1210 16:00:35.783898 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jl54z" Dec 10 16:00:36 crc kubenswrapper[4755]: I1210 16:00:36.628545 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jl54z"] Dec 10 16:00:37 crc kubenswrapper[4755]: I1210 16:00:37.745514 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jl54z" podUID="6d5494b1-ea23-48b5-8115-8eddd2ebfc6b" containerName="registry-server" containerID="cri-o://a6f57c39d88337e094f91d5830d7c5d903e64449f25d98a83e74c797d518f4e8" gracePeriod=2 Dec 10 16:00:38 crc kubenswrapper[4755]: I1210 16:00:38.360579 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jl54z" Dec 10 16:00:38 crc kubenswrapper[4755]: I1210 16:00:38.425582 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76szw\" (UniqueName: \"kubernetes.io/projected/6d5494b1-ea23-48b5-8115-8eddd2ebfc6b-kube-api-access-76szw\") pod \"6d5494b1-ea23-48b5-8115-8eddd2ebfc6b\" (UID: \"6d5494b1-ea23-48b5-8115-8eddd2ebfc6b\") " Dec 10 16:00:38 crc kubenswrapper[4755]: I1210 16:00:38.425942 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d5494b1-ea23-48b5-8115-8eddd2ebfc6b-catalog-content\") pod \"6d5494b1-ea23-48b5-8115-8eddd2ebfc6b\" (UID: \"6d5494b1-ea23-48b5-8115-8eddd2ebfc6b\") " Dec 10 16:00:38 crc kubenswrapper[4755]: I1210 16:00:38.426184 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d5494b1-ea23-48b5-8115-8eddd2ebfc6b-utilities\") pod \"6d5494b1-ea23-48b5-8115-8eddd2ebfc6b\" (UID: \"6d5494b1-ea23-48b5-8115-8eddd2ebfc6b\") " Dec 10 16:00:38 crc kubenswrapper[4755]: I1210 16:00:38.427954 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d5494b1-ea23-48b5-8115-8eddd2ebfc6b-utilities" (OuterVolumeSpecName: "utilities") pod "6d5494b1-ea23-48b5-8115-8eddd2ebfc6b" (UID: "6d5494b1-ea23-48b5-8115-8eddd2ebfc6b"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 16:00:38 crc kubenswrapper[4755]: I1210 16:00:38.428981 4755 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d5494b1-ea23-48b5-8115-8eddd2ebfc6b-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 16:00:38 crc kubenswrapper[4755]: I1210 16:00:38.442620 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d5494b1-ea23-48b5-8115-8eddd2ebfc6b-kube-api-access-76szw" (OuterVolumeSpecName: "kube-api-access-76szw") pod "6d5494b1-ea23-48b5-8115-8eddd2ebfc6b" (UID: "6d5494b1-ea23-48b5-8115-8eddd2ebfc6b"). InnerVolumeSpecName "kube-api-access-76szw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 16:00:38 crc kubenswrapper[4755]: I1210 16:00:38.531294 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76szw\" (UniqueName: \"kubernetes.io/projected/6d5494b1-ea23-48b5-8115-8eddd2ebfc6b-kube-api-access-76szw\") on node \"crc\" DevicePath \"\"" Dec 10 16:00:38 crc kubenswrapper[4755]: I1210 16:00:38.764787 4755 generic.go:334] "Generic (PLEG): container finished" podID="6d5494b1-ea23-48b5-8115-8eddd2ebfc6b" containerID="a6f57c39d88337e094f91d5830d7c5d903e64449f25d98a83e74c797d518f4e8" exitCode=0 Dec 10 16:00:38 crc kubenswrapper[4755]: I1210 16:00:38.764842 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jl54z" event={"ID":"6d5494b1-ea23-48b5-8115-8eddd2ebfc6b","Type":"ContainerDied","Data":"a6f57c39d88337e094f91d5830d7c5d903e64449f25d98a83e74c797d518f4e8"} Dec 10 16:00:38 crc kubenswrapper[4755]: I1210 16:00:38.764913 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jl54z" event={"ID":"6d5494b1-ea23-48b5-8115-8eddd2ebfc6b","Type":"ContainerDied","Data":"8d6cbb22c3dbf843695252cc18ebc5bd40f91ab888bf4731368c8fa24cffdfed"} Dec 10 16:00:38 crc kubenswrapper[4755]: I1210 16:00:38.764940 4755 scope.go:117] "RemoveContainer" containerID="a6f57c39d88337e094f91d5830d7c5d903e64449f25d98a83e74c797d518f4e8" Dec 10 16:00:38 crc kubenswrapper[4755]: I1210 16:00:38.764859 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jl54z" Dec 10 16:00:38 crc kubenswrapper[4755]: I1210 16:00:38.796609 4755 scope.go:117] "RemoveContainer" containerID="74961ed64725c944785c1aea9f2bab6df316c1c72d1acbc71fddbf56625f5f49" Dec 10 16:00:38 crc kubenswrapper[4755]: I1210 16:00:38.827685 4755 scope.go:117] "RemoveContainer" containerID="77941bca15527411aadfa9bd0396c5372e0e88558aab3f65f729766b1114be38" Dec 10 16:00:38 crc kubenswrapper[4755]: I1210 16:00:38.872324 4755 scope.go:117] "RemoveContainer" containerID="a6f57c39d88337e094f91d5830d7c5d903e64449f25d98a83e74c797d518f4e8" Dec 10 16:00:38 crc kubenswrapper[4755]: E1210 16:00:38.872862 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6f57c39d88337e094f91d5830d7c5d903e64449f25d98a83e74c797d518f4e8\": container with ID starting with a6f57c39d88337e094f91d5830d7c5d903e64449f25d98a83e74c797d518f4e8 not found: ID does not exist" containerID="a6f57c39d88337e094f91d5830d7c5d903e64449f25d98a83e74c797d518f4e8" Dec 10 16:00:38 crc kubenswrapper[4755]: I1210 16:00:38.872913 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6f57c39d88337e094f91d5830d7c5d903e64449f25d98a83e74c797d518f4e8"} err="failed to get container status \"a6f57c39d88337e094f91d5830d7c5d903e64449f25d98a83e74c797d518f4e8\": rpc error: code = NotFound desc = could not find container \"a6f57c39d88337e094f91d5830d7c5d903e64449f25d98a83e74c797d518f4e8\": container with ID starting with a6f57c39d88337e094f91d5830d7c5d903e64449f25d98a83e74c797d518f4e8 not found: ID does not exist" Dec 10 16:00:38 crc kubenswrapper[4755]: I1210 16:00:38.872945 4755 scope.go:117] "RemoveContainer" containerID="74961ed64725c944785c1aea9f2bab6df316c1c72d1acbc71fddbf56625f5f49" Dec 10 16:00:38 crc kubenswrapper[4755]: E1210 16:00:38.873244 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74961ed64725c944785c1aea9f2bab6df316c1c72d1acbc71fddbf56625f5f49\": container with ID starting with 74961ed64725c944785c1aea9f2bab6df316c1c72d1acbc71fddbf56625f5f49 not found: ID does not exist" containerID="74961ed64725c944785c1aea9f2bab6df316c1c72d1acbc71fddbf56625f5f49" Dec 10 16:00:38 crc kubenswrapper[4755]: I1210 16:00:38.873271 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74961ed64725c944785c1aea9f2bab6df316c1c72d1acbc71fddbf56625f5f49"} err="failed to get container status \"74961ed64725c944785c1aea9f2bab6df316c1c72d1acbc71fddbf56625f5f49\": rpc error: code = NotFound desc = could not find container \"74961ed64725c944785c1aea9f2bab6df316c1c72d1acbc71fddbf56625f5f49\": container with ID starting with 74961ed64725c944785c1aea9f2bab6df316c1c72d1acbc71fddbf56625f5f49 not found: ID does not exist" Dec 10 16:00:38 crc kubenswrapper[4755]: I1210 16:00:38.873290 4755 scope.go:117] "RemoveContainer" containerID="77941bca15527411aadfa9bd0396c5372e0e88558aab3f65f729766b1114be38" Dec 10 16:00:38 crc kubenswrapper[4755]: E1210 16:00:38.873871 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77941bca15527411aadfa9bd0396c5372e0e88558aab3f65f729766b1114be38\": container with ID starting with 77941bca15527411aadfa9bd0396c5372e0e88558aab3f65f729766b1114be38 not found: ID does not exist" containerID="77941bca15527411aadfa9bd0396c5372e0e88558aab3f65f729766b1114be38" 
Dec 10 16:00:38 crc kubenswrapper[4755]: I1210 16:00:38.873893 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77941bca15527411aadfa9bd0396c5372e0e88558aab3f65f729766b1114be38"} err="failed to get container status \"77941bca15527411aadfa9bd0396c5372e0e88558aab3f65f729766b1114be38\": rpc error: code = NotFound desc = could not find container \"77941bca15527411aadfa9bd0396c5372e0e88558aab3f65f729766b1114be38\": container with ID starting with 77941bca15527411aadfa9bd0396c5372e0e88558aab3f65f729766b1114be38 not found: ID does not exist" Dec 10 16:00:39 crc kubenswrapper[4755]: I1210 16:00:39.132132 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d5494b1-ea23-48b5-8115-8eddd2ebfc6b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6d5494b1-ea23-48b5-8115-8eddd2ebfc6b" (UID: "6d5494b1-ea23-48b5-8115-8eddd2ebfc6b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 16:00:39 crc kubenswrapper[4755]: I1210 16:00:39.167342 4755 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d5494b1-ea23-48b5-8115-8eddd2ebfc6b-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 16:00:39 crc kubenswrapper[4755]: I1210 16:00:39.395694 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jl54z"] Dec 10 16:00:39 crc kubenswrapper[4755]: I1210 16:00:39.405553 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jl54z"] Dec 10 16:00:39 crc kubenswrapper[4755]: I1210 16:00:39.774092 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d5494b1-ea23-48b5-8115-8eddd2ebfc6b" path="/var/lib/kubelet/pods/6d5494b1-ea23-48b5-8115-8eddd2ebfc6b/volumes" Dec 10 16:00:40 crc kubenswrapper[4755]: I1210 16:00:40.358891 4755 patch_prober.go:28] interesting pod/machine-config-daemon-ggt8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 16:00:40 crc kubenswrapper[4755]: I1210 16:00:40.359254 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 16:00:45 crc kubenswrapper[4755]: E1210 16:00:45.762092 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:00:47 crc kubenswrapper[4755]: E1210 16:00:47.761442 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:00:59 crc kubenswrapper[4755]: E1210 16:00:59.760401 4755 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:01:00 crc kubenswrapper[4755]: I1210 16:01:00.148642 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29423041-9vf8s"] Dec 10 16:01:00 crc kubenswrapper[4755]: E1210 16:01:00.149052 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d5494b1-ea23-48b5-8115-8eddd2ebfc6b" containerName="extract-content" Dec 10 16:01:00 crc kubenswrapper[4755]: I1210 16:01:00.149069 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d5494b1-ea23-48b5-8115-8eddd2ebfc6b" containerName="extract-content" Dec 10 16:01:00 crc kubenswrapper[4755]: E1210 16:01:00.149084 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d5494b1-ea23-48b5-8115-8eddd2ebfc6b" containerName="registry-server" Dec 10 16:01:00 crc kubenswrapper[4755]: I1210 16:01:00.149090 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d5494b1-ea23-48b5-8115-8eddd2ebfc6b" containerName="registry-server" Dec 10 16:01:00 crc kubenswrapper[4755]: E1210 16:01:00.149104 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d5494b1-ea23-48b5-8115-8eddd2ebfc6b" containerName="extract-utilities" Dec 10 16:01:00 crc kubenswrapper[4755]: I1210 16:01:00.149110 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d5494b1-ea23-48b5-8115-8eddd2ebfc6b" containerName="extract-utilities" Dec 10 16:01:00 crc kubenswrapper[4755]: I1210 16:01:00.149306 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d5494b1-ea23-48b5-8115-8eddd2ebfc6b" containerName="registry-server" Dec 10 16:01:00 crc kubenswrapper[4755]: I1210 16:01:00.150130 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29423041-9vf8s" Dec 10 16:01:00 crc kubenswrapper[4755]: I1210 16:01:00.169493 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29423041-9vf8s"] Dec 10 16:01:00 crc kubenswrapper[4755]: I1210 16:01:00.301045 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53fbaf86-07f0-41db-b467-1b101d16fc8d-config-data\") pod \"keystone-cron-29423041-9vf8s\" (UID: \"53fbaf86-07f0-41db-b467-1b101d16fc8d\") " pod="openstack/keystone-cron-29423041-9vf8s" Dec 10 16:01:00 crc kubenswrapper[4755]: I1210 16:01:00.301407 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5w7x\" (UniqueName: \"kubernetes.io/projected/53fbaf86-07f0-41db-b467-1b101d16fc8d-kube-api-access-b5w7x\") pod \"keystone-cron-29423041-9vf8s\" (UID: \"53fbaf86-07f0-41db-b467-1b101d16fc8d\") " pod="openstack/keystone-cron-29423041-9vf8s" Dec 10 16:01:00 crc kubenswrapper[4755]: I1210 16:01:00.301597 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/53fbaf86-07f0-41db-b467-1b101d16fc8d-fernet-keys\") pod \"keystone-cron-29423041-9vf8s\" (UID: \"53fbaf86-07f0-41db-b467-1b101d16fc8d\") " pod="openstack/keystone-cron-29423041-9vf8s" Dec 10 16:01:00 crc kubenswrapper[4755]: I1210 16:01:00.301893 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53fbaf86-07f0-41db-b467-1b101d16fc8d-combined-ca-bundle\") pod \"keystone-cron-29423041-9vf8s\" (UID: \"53fbaf86-07f0-41db-b467-1b101d16fc8d\") " pod="openstack/keystone-cron-29423041-9vf8s" Dec 10 16:01:00 crc kubenswrapper[4755]: I1210 16:01:00.404433 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53fbaf86-07f0-41db-b467-1b101d16fc8d-config-data\") pod \"keystone-cron-29423041-9vf8s\" (UID: \"53fbaf86-07f0-41db-b467-1b101d16fc8d\") " pod="openstack/keystone-cron-29423041-9vf8s" Dec 10 16:01:00 crc kubenswrapper[4755]: I1210 16:01:00.404519 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5w7x\" (UniqueName: \"kubernetes.io/projected/53fbaf86-07f0-41db-b467-1b101d16fc8d-kube-api-access-b5w7x\") pod \"keystone-cron-29423041-9vf8s\" (UID: \"53fbaf86-07f0-41db-b467-1b101d16fc8d\") " pod="openstack/keystone-cron-29423041-9vf8s" Dec 10 16:01:00 crc kubenswrapper[4755]: I1210 16:01:00.404553 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/53fbaf86-07f0-41db-b467-1b101d16fc8d-fernet-keys\") pod \"keystone-cron-29423041-9vf8s\" (UID: \"53fbaf86-07f0-41db-b467-1b101d16fc8d\") " pod="openstack/keystone-cron-29423041-9vf8s" Dec 10 16:01:00 crc kubenswrapper[4755]: I1210 16:01:00.404671 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53fbaf86-07f0-41db-b467-1b101d16fc8d-combined-ca-bundle\") pod \"keystone-cron-29423041-9vf8s\" (UID: \"53fbaf86-07f0-41db-b467-1b101d16fc8d\") " pod="openstack/keystone-cron-29423041-9vf8s" Dec 10 16:01:00 crc kubenswrapper[4755]: I1210 16:01:00.410766 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53fbaf86-07f0-41db-b467-1b101d16fc8d-combined-ca-bundle\") pod \"keystone-cron-29423041-9vf8s\" (UID: \"53fbaf86-07f0-41db-b467-1b101d16fc8d\") " pod="openstack/keystone-cron-29423041-9vf8s" Dec 10 16:01:00 crc kubenswrapper[4755]: I1210 16:01:00.410918 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53fbaf86-07f0-41db-b467-1b101d16fc8d-config-data\") pod \"keystone-cron-29423041-9vf8s\" (UID: \"53fbaf86-07f0-41db-b467-1b101d16fc8d\") " pod="openstack/keystone-cron-29423041-9vf8s" Dec 10 16:01:00 crc kubenswrapper[4755]: I1210 16:01:00.418868 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/53fbaf86-07f0-41db-b467-1b101d16fc8d-fernet-keys\") pod \"keystone-cron-29423041-9vf8s\" (UID: \"53fbaf86-07f0-41db-b467-1b101d16fc8d\") " pod="openstack/keystone-cron-29423041-9vf8s" Dec 10 16:01:00 crc kubenswrapper[4755]: I1210 16:01:00.422514 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5w7x\" (UniqueName: \"kubernetes.io/projected/53fbaf86-07f0-41db-b467-1b101d16fc8d-kube-api-access-b5w7x\") pod \"keystone-cron-29423041-9vf8s\" (UID: \"53fbaf86-07f0-41db-b467-1b101d16fc8d\") " pod="openstack/keystone-cron-29423041-9vf8s" Dec 10 16:01:00 crc kubenswrapper[4755]: I1210 16:01:00.469007 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29423041-9vf8s" Dec 10 16:01:00 crc kubenswrapper[4755]: E1210 16:01:00.759717 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:01:00 crc kubenswrapper[4755]: W1210 16:01:00.947138 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod53fbaf86_07f0_41db_b467_1b101d16fc8d.slice/crio-1b0865a84e18ca3f8e3ff9f6cff9a61b013be73b10d309175b051a705b324a15 WatchSource:0}: Error finding container 1b0865a84e18ca3f8e3ff9f6cff9a61b013be73b10d309175b051a705b324a15: Status 404 returned error can't find the container with id 1b0865a84e18ca3f8e3ff9f6cff9a61b013be73b10d309175b051a705b324a15 Dec 10 16:01:00 crc kubenswrapper[4755]: I1210 16:01:00.961236 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29423041-9vf8s"] Dec 10 16:01:01 crc kubenswrapper[4755]: I1210 16:01:01.032680 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29423041-9vf8s" event={"ID":"53fbaf86-07f0-41db-b467-1b101d16fc8d","Type":"ContainerStarted","Data":"1b0865a84e18ca3f8e3ff9f6cff9a61b013be73b10d309175b051a705b324a15"} Dec 10 16:01:02 crc kubenswrapper[4755]: I1210 16:01:02.046720 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29423041-9vf8s" event={"ID":"53fbaf86-07f0-41db-b467-1b101d16fc8d","Type":"ContainerStarted","Data":"0a7e4c44f92803174207d1249b17a09b8438d4a291eb819ec100b25124ca50c7"} Dec 10 16:01:02 crc kubenswrapper[4755]: I1210 16:01:02.067346 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29423041-9vf8s" podStartSLOduration=2.067329113 
podStartE2EDuration="2.067329113s" podCreationTimestamp="2025-12-10 16:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 16:01:02.066023558 +0000 UTC m=+2258.666907220" watchObservedRunningTime="2025-12-10 16:01:02.067329113 +0000 UTC m=+2258.668212745" Dec 10 16:01:04 crc kubenswrapper[4755]: I1210 16:01:04.070931 4755 generic.go:334] "Generic (PLEG): container finished" podID="53fbaf86-07f0-41db-b467-1b101d16fc8d" containerID="0a7e4c44f92803174207d1249b17a09b8438d4a291eb819ec100b25124ca50c7" exitCode=0 Dec 10 16:01:04 crc kubenswrapper[4755]: I1210 16:01:04.071071 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29423041-9vf8s" event={"ID":"53fbaf86-07f0-41db-b467-1b101d16fc8d","Type":"ContainerDied","Data":"0a7e4c44f92803174207d1249b17a09b8438d4a291eb819ec100b25124ca50c7"} Dec 10 16:01:05 crc kubenswrapper[4755]: I1210 16:01:05.558181 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29423041-9vf8s" Dec 10 16:01:05 crc kubenswrapper[4755]: I1210 16:01:05.638750 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/53fbaf86-07f0-41db-b467-1b101d16fc8d-fernet-keys\") pod \"53fbaf86-07f0-41db-b467-1b101d16fc8d\" (UID: \"53fbaf86-07f0-41db-b467-1b101d16fc8d\") " Dec 10 16:01:05 crc kubenswrapper[4755]: I1210 16:01:05.638908 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53fbaf86-07f0-41db-b467-1b101d16fc8d-combined-ca-bundle\") pod \"53fbaf86-07f0-41db-b467-1b101d16fc8d\" (UID: \"53fbaf86-07f0-41db-b467-1b101d16fc8d\") " Dec 10 16:01:05 crc kubenswrapper[4755]: I1210 16:01:05.638977 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53fbaf86-07f0-41db-b467-1b101d16fc8d-config-data\") pod \"53fbaf86-07f0-41db-b467-1b101d16fc8d\" (UID: \"53fbaf86-07f0-41db-b467-1b101d16fc8d\") " Dec 10 16:01:05 crc kubenswrapper[4755]: I1210 16:01:05.638997 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5w7x\" (UniqueName: \"kubernetes.io/projected/53fbaf86-07f0-41db-b467-1b101d16fc8d-kube-api-access-b5w7x\") pod \"53fbaf86-07f0-41db-b467-1b101d16fc8d\" (UID: \"53fbaf86-07f0-41db-b467-1b101d16fc8d\") " Dec 10 16:01:05 crc kubenswrapper[4755]: I1210 16:01:05.644409 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53fbaf86-07f0-41db-b467-1b101d16fc8d-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "53fbaf86-07f0-41db-b467-1b101d16fc8d" (UID: "53fbaf86-07f0-41db-b467-1b101d16fc8d"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 16:01:05 crc kubenswrapper[4755]: I1210 16:01:05.650827 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53fbaf86-07f0-41db-b467-1b101d16fc8d-kube-api-access-b5w7x" (OuterVolumeSpecName: "kube-api-access-b5w7x") pod "53fbaf86-07f0-41db-b467-1b101d16fc8d" (UID: "53fbaf86-07f0-41db-b467-1b101d16fc8d"). InnerVolumeSpecName "kube-api-access-b5w7x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 16:01:05 crc kubenswrapper[4755]: I1210 16:01:05.679459 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53fbaf86-07f0-41db-b467-1b101d16fc8d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "53fbaf86-07f0-41db-b467-1b101d16fc8d" (UID: "53fbaf86-07f0-41db-b467-1b101d16fc8d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 16:01:05 crc kubenswrapper[4755]: I1210 16:01:05.694005 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53fbaf86-07f0-41db-b467-1b101d16fc8d-config-data" (OuterVolumeSpecName: "config-data") pod "53fbaf86-07f0-41db-b467-1b101d16fc8d" (UID: "53fbaf86-07f0-41db-b467-1b101d16fc8d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 16:01:05 crc kubenswrapper[4755]: I1210 16:01:05.741605 4755 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53fbaf86-07f0-41db-b467-1b101d16fc8d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 16:01:05 crc kubenswrapper[4755]: I1210 16:01:05.741640 4755 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53fbaf86-07f0-41db-b467-1b101d16fc8d-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 16:01:05 crc kubenswrapper[4755]: I1210 16:01:05.741649 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5w7x\" (UniqueName: \"kubernetes.io/projected/53fbaf86-07f0-41db-b467-1b101d16fc8d-kube-api-access-b5w7x\") on node \"crc\" DevicePath \"\"" Dec 10 16:01:05 crc kubenswrapper[4755]: I1210 16:01:05.741658 4755 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/53fbaf86-07f0-41db-b467-1b101d16fc8d-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 10 16:01:06 crc kubenswrapper[4755]: I1210 16:01:06.090866 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29423041-9vf8s" event={"ID":"53fbaf86-07f0-41db-b467-1b101d16fc8d","Type":"ContainerDied","Data":"1b0865a84e18ca3f8e3ff9f6cff9a61b013be73b10d309175b051a705b324a15"} Dec 10 16:01:06 crc kubenswrapper[4755]: I1210 16:01:06.090921 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29423041-9vf8s" Dec 10 16:01:06 crc kubenswrapper[4755]: I1210 16:01:06.090911 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b0865a84e18ca3f8e3ff9f6cff9a61b013be73b10d309175b051a705b324a15" Dec 10 16:01:10 crc kubenswrapper[4755]: I1210 16:01:10.359411 4755 patch_prober.go:28] interesting pod/machine-config-daemon-ggt8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 16:01:10 crc kubenswrapper[4755]: I1210 16:01:10.360016 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 16:01:12 crc kubenswrapper[4755]: E1210 16:01:12.760764 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:01:14 crc kubenswrapper[4755]: E1210 16:01:14.759799 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:01:25 crc kubenswrapper[4755]: E1210 16:01:25.760151 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:01:25 crc kubenswrapper[4755]: E1210 16:01:25.760211 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:01:39 crc kubenswrapper[4755]: E1210 16:01:39.759589 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:01:39 crc kubenswrapper[4755]: E1210 16:01:39.759665 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:01:40 crc kubenswrapper[4755]: I1210 
16:01:40.359406 4755 patch_prober.go:28] interesting pod/machine-config-daemon-ggt8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 16:01:40 crc kubenswrapper[4755]: I1210 16:01:40.359477 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 16:01:40 crc kubenswrapper[4755]: I1210 16:01:40.359520 4755 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" Dec 10 16:01:40 crc kubenswrapper[4755]: I1210 16:01:40.360072 4755 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"069e74fb745b22d2e40409a9c21ff2e40e0bdf9359efe0e863492dacbe4fee35"} pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 10 16:01:40 crc kubenswrapper[4755]: I1210 16:01:40.360132 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" containerName="machine-config-daemon" containerID="cri-o://069e74fb745b22d2e40409a9c21ff2e40e0bdf9359efe0e863492dacbe4fee35" gracePeriod=600 Dec 10 16:01:40 crc kubenswrapper[4755]: E1210 16:01:40.489418 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:01:41 crc kubenswrapper[4755]: I1210 16:01:41.434405 4755 generic.go:334] "Generic (PLEG): container finished" podID="b132a8b9-1c99-414d-8773-229bf36b305d" containerID="069e74fb745b22d2e40409a9c21ff2e40e0bdf9359efe0e863492dacbe4fee35" exitCode=0 Dec 10 16:01:41 crc kubenswrapper[4755]: I1210 16:01:41.434500 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" event={"ID":"b132a8b9-1c99-414d-8773-229bf36b305d","Type":"ContainerDied","Data":"069e74fb745b22d2e40409a9c21ff2e40e0bdf9359efe0e863492dacbe4fee35"} Dec 10 16:01:41 crc kubenswrapper[4755]: I1210 16:01:41.434846 4755 scope.go:117] "RemoveContainer" containerID="848464b372da64a2ff4b9b5d8f68e30f7b70ba91c0c9790e6358c2e46556c416" Dec 10 16:01:41 crc kubenswrapper[4755]: I1210 16:01:41.435507 4755 scope.go:117] "RemoveContainer" containerID="069e74fb745b22d2e40409a9c21ff2e40e0bdf9359efe0e863492dacbe4fee35" Dec 10 16:01:41 crc kubenswrapper[4755]: E1210 16:01:41.435801 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:01:52 crc kubenswrapper[4755]: E1210 16:01:52.759958 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:01:54 crc kubenswrapper[4755]: E1210 16:01:54.760522 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:01:55 crc kubenswrapper[4755]: I1210 16:01:55.757804 4755 scope.go:117] "RemoveContainer" containerID="069e74fb745b22d2e40409a9c21ff2e40e0bdf9359efe0e863492dacbe4fee35" Dec 10 16:01:55 crc kubenswrapper[4755]: E1210 16:01:55.758789 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:02:04 crc kubenswrapper[4755]: E1210 16:02:04.760252 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:02:06 crc kubenswrapper[4755]: E1210 16:02:06.760029 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:02:10 crc kubenswrapper[4755]: I1210 16:02:10.757775 4755 scope.go:117] "RemoveContainer" containerID="069e74fb745b22d2e40409a9c21ff2e40e0bdf9359efe0e863492dacbe4fee35" Dec 10 16:02:10 crc kubenswrapper[4755]: E1210 16:02:10.758611 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:02:16 crc kubenswrapper[4755]: E1210 16:02:16.760683 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:02:19 crc kubenswrapper[4755]: 
E1210 16:02:19.759644 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:02:25 crc kubenswrapper[4755]: I1210 16:02:25.758771 4755 scope.go:117] "RemoveContainer" containerID="069e74fb745b22d2e40409a9c21ff2e40e0bdf9359efe0e863492dacbe4fee35" Dec 10 16:02:25 crc kubenswrapper[4755]: E1210 16:02:25.759687 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:02:29 crc kubenswrapper[4755]: E1210 16:02:29.760423 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:02:31 crc kubenswrapper[4755]: E1210 16:02:31.759641 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:02:37 crc kubenswrapper[4755]: I1210 16:02:37.758002 4755 scope.go:117] "RemoveContainer" containerID="069e74fb745b22d2e40409a9c21ff2e40e0bdf9359efe0e863492dacbe4fee35" Dec 10 16:02:37 crc kubenswrapper[4755]: E1210 16:02:37.758970 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:02:42 crc kubenswrapper[4755]: E1210 16:02:42.760425 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:02:46 crc kubenswrapper[4755]: E1210 16:02:46.759320 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:02:50 crc kubenswrapper[4755]: I1210 16:02:50.759250 4755 scope.go:117] "RemoveContainer" containerID="069e74fb745b22d2e40409a9c21ff2e40e0bdf9359efe0e863492dacbe4fee35" Dec 10 16:02:50 crc 
kubenswrapper[4755]: E1210 16:02:50.760054 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:02:56 crc kubenswrapper[4755]: E1210 16:02:56.759224 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:02:59 crc kubenswrapper[4755]: E1210 16:02:59.760010 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:03:01 crc kubenswrapper[4755]: I1210 16:03:01.758676 4755 scope.go:117] "RemoveContainer" containerID="069e74fb745b22d2e40409a9c21ff2e40e0bdf9359efe0e863492dacbe4fee35" Dec 10 16:03:01 crc kubenswrapper[4755]: E1210 16:03:01.759567 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:03:07 crc kubenswrapper[4755]: E1210 16:03:07.760530 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:03:12 crc kubenswrapper[4755]: I1210 16:03:12.758612 4755 scope.go:117] "RemoveContainer" containerID="069e74fb745b22d2e40409a9c21ff2e40e0bdf9359efe0e863492dacbe4fee35" Dec 10 16:03:12 crc kubenswrapper[4755]: E1210 16:03:12.759411 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:03:13 crc kubenswrapper[4755]: E1210 16:03:13.773057 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:03:21 crc kubenswrapper[4755]: E1210 16:03:21.760020 4755 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:03:23 crc kubenswrapper[4755]: I1210 16:03:23.777819 4755 scope.go:117] "RemoveContainer" containerID="069e74fb745b22d2e40409a9c21ff2e40e0bdf9359efe0e863492dacbe4fee35" Dec 10 16:03:23 crc kubenswrapper[4755]: E1210 16:03:23.779166 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:03:24 crc kubenswrapper[4755]: E1210 16:03:24.760993 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:03:32 crc kubenswrapper[4755]: E1210 16:03:32.760989 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:03:37 crc kubenswrapper[4755]: I1210 16:03:37.758324 4755 scope.go:117] "RemoveContainer" containerID="069e74fb745b22d2e40409a9c21ff2e40e0bdf9359efe0e863492dacbe4fee35" Dec 10 16:03:37 crc kubenswrapper[4755]: E1210 16:03:37.758966 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:03:38 crc kubenswrapper[4755]: E1210 16:03:38.761088 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:03:45 crc kubenswrapper[4755]: E1210 16:03:45.760835 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:03:49 crc kubenswrapper[4755]: I1210 16:03:49.758478 4755 scope.go:117] "RemoveContainer" containerID="069e74fb745b22d2e40409a9c21ff2e40e0bdf9359efe0e863492dacbe4fee35" Dec 10 16:03:49 crc kubenswrapper[4755]: E1210 16:03:49.759076 4755 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:03:53 crc kubenswrapper[4755]: E1210 16:03:53.761929 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:03:56 crc kubenswrapper[4755]: E1210 16:03:56.759943 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:04:01 crc kubenswrapper[4755]: I1210 16:04:01.758825 4755 scope.go:117] "RemoveContainer" containerID="069e74fb745b22d2e40409a9c21ff2e40e0bdf9359efe0e863492dacbe4fee35" Dec 10 16:04:01 crc kubenswrapper[4755]: E1210 16:04:01.759902 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:04:05 crc kubenswrapper[4755]: E1210 16:04:05.760444 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:04:07 crc kubenswrapper[4755]: E1210 16:04:07.891974 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 10 16:04:07 crc kubenswrapper[4755]: E1210 16:04:07.892711 4755 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 10 16:04:07 crc kubenswrapper[4755]: E1210 16:04:07.892906 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mz4t5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-jfc28_openstack(998863b6-4f48-4c8b-8011-a40377686b99): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 10 16:04:07 crc kubenswrapper[4755]: E1210 16:04:07.894178 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:04:16 crc kubenswrapper[4755]: I1210 16:04:16.757185 4755 scope.go:117] "RemoveContainer" containerID="069e74fb745b22d2e40409a9c21ff2e40e0bdf9359efe0e863492dacbe4fee35" Dec 10 16:04:16 crc kubenswrapper[4755]: E1210 16:04:16.757739 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:04:17 crc kubenswrapper[4755]: E1210 16:04:17.761221 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:04:21 crc kubenswrapper[4755]: E1210 16:04:21.763053 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:04:28 crc kubenswrapper[4755]: E1210 16:04:28.759176 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:04:31 crc kubenswrapper[4755]: I1210 16:04:31.758526 4755 scope.go:117] "RemoveContainer" containerID="069e74fb745b22d2e40409a9c21ff2e40e0bdf9359efe0e863492dacbe4fee35" Dec 10 16:04:31 crc kubenswrapper[4755]: E1210 16:04:31.758915 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:04:35 crc kubenswrapper[4755]: E1210 16:04:35.759681 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:04:40 crc kubenswrapper[4755]: I1210 16:04:40.760574 4755 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 10 16:04:40 crc kubenswrapper[4755]: E1210 16:04:40.890807 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading 
manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 10 16:04:40 crc kubenswrapper[4755]: E1210 16:04:40.890862 4755 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 10 16:04:40 crc kubenswrapper[4755]: E1210 16:04:40.890978 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5d4h5b7hfbh5ddh688h9ch55bh7chf6h5ddh68ch94h69h5c5h596h59bh569hfchc4h676hcbh64dhdbh57fh75h5c9h98h59ch679h566h77h9cq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hw9gj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(6d104bea-ecdc-4fe1-9861-fb1a19fce845): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in 
quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 10 16:04:40 crc kubenswrapper[4755]: E1210 16:04:40.892122 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:04:45 crc kubenswrapper[4755]: I1210 16:04:45.758172 4755 scope.go:117] "RemoveContainer" containerID="069e74fb745b22d2e40409a9c21ff2e40e0bdf9359efe0e863492dacbe4fee35" Dec 10 16:04:45 crc kubenswrapper[4755]: E1210 16:04:45.759529 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:04:46 crc kubenswrapper[4755]: E1210 16:04:46.760228 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:04:55 crc kubenswrapper[4755]: E1210 16:04:55.759500 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:04:57 crc kubenswrapper[4755]: E1210 16:04:57.768318 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:04:59 crc kubenswrapper[4755]: I1210 16:04:59.758333 4755 scope.go:117] "RemoveContainer" containerID="069e74fb745b22d2e40409a9c21ff2e40e0bdf9359efe0e863492dacbe4fee35" Dec 10 16:04:59 crc kubenswrapper[4755]: E1210 16:04:59.759126 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:05:09 crc kubenswrapper[4755]: E1210 16:05:09.759653 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" 
with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:05:11 crc kubenswrapper[4755]: I1210 16:05:11.758004 4755 scope.go:117] "RemoveContainer" containerID="069e74fb745b22d2e40409a9c21ff2e40e0bdf9359efe0e863492dacbe4fee35" Dec 10 16:05:11 crc kubenswrapper[4755]: E1210 16:05:11.758703 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:05:12 crc kubenswrapper[4755]: E1210 16:05:12.760662 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:05:23 crc kubenswrapper[4755]: I1210 16:05:23.759774 4755 scope.go:117] "RemoveContainer" containerID="069e74fb745b22d2e40409a9c21ff2e40e0bdf9359efe0e863492dacbe4fee35" Dec 10 16:05:23 crc kubenswrapper[4755]: E1210 16:05:23.761329 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:05:23 crc kubenswrapper[4755]: E1210 16:05:23.793846 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:05:24 crc kubenswrapper[4755]: E1210 16:05:24.759384 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:05:35 crc kubenswrapper[4755]: I1210 16:05:35.758502 4755 scope.go:117] "RemoveContainer" containerID="069e74fb745b22d2e40409a9c21ff2e40e0bdf9359efe0e863492dacbe4fee35" Dec 10 16:05:35 crc kubenswrapper[4755]: E1210 16:05:35.759656 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:05:37 crc kubenswrapper[4755]: E1210 16:05:37.761228 4755 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:05:37 crc kubenswrapper[4755]: E1210 16:05:37.761237 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:05:48 crc kubenswrapper[4755]: I1210 16:05:48.758276 4755 scope.go:117] "RemoveContainer" containerID="069e74fb745b22d2e40409a9c21ff2e40e0bdf9359efe0e863492dacbe4fee35" Dec 10 16:05:48 crc kubenswrapper[4755]: E1210 16:05:48.759043 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:05:50 crc kubenswrapper[4755]: E1210 16:05:50.003322 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:05:50 crc kubenswrapper[4755]: E1210 16:05:50.762331 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:05:58 crc kubenswrapper[4755]: I1210 16:05:58.111530 4755 generic.go:334] "Generic (PLEG): container finished" podID="e893969f-84c7-4d33-a977-13cdc1a9ef2e" containerID="9f3b3d3f9de2495b039ac5e8dfb6fe5bc106a9d5d04acc3fb71782e81ab24292" exitCode=2 Dec 10 16:05:58 crc kubenswrapper[4755]: I1210 16:05:58.112552 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-246lg" event={"ID":"e893969f-84c7-4d33-a977-13cdc1a9ef2e","Type":"ContainerDied","Data":"9f3b3d3f9de2495b039ac5e8dfb6fe5bc106a9d5d04acc3fb71782e81ab24292"} Dec 10 16:05:59 crc kubenswrapper[4755]: I1210 16:05:59.613037 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-246lg" Dec 10 16:05:59 crc kubenswrapper[4755]: I1210 16:05:59.703707 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-klgpv\" (UniqueName: \"kubernetes.io/projected/e893969f-84c7-4d33-a977-13cdc1a9ef2e-kube-api-access-klgpv\") pod \"e893969f-84c7-4d33-a977-13cdc1a9ef2e\" (UID: \"e893969f-84c7-4d33-a977-13cdc1a9ef2e\") " Dec 10 16:05:59 crc kubenswrapper[4755]: I1210 16:05:59.704149 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e893969f-84c7-4d33-a977-13cdc1a9ef2e-ssh-key\") pod \"e893969f-84c7-4d33-a977-13cdc1a9ef2e\" (UID: \"e893969f-84c7-4d33-a977-13cdc1a9ef2e\") " Dec 10 16:05:59 crc kubenswrapper[4755]: I1210 16:05:59.704234 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e893969f-84c7-4d33-a977-13cdc1a9ef2e-inventory\") pod \"e893969f-84c7-4d33-a977-13cdc1a9ef2e\" (UID: \"e893969f-84c7-4d33-a977-13cdc1a9ef2e\") " Dec 10 16:05:59 crc kubenswrapper[4755]: I1210 16:05:59.711447 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e893969f-84c7-4d33-a977-13cdc1a9ef2e-kube-api-access-klgpv" (OuterVolumeSpecName: "kube-api-access-klgpv") pod "e893969f-84c7-4d33-a977-13cdc1a9ef2e" (UID: "e893969f-84c7-4d33-a977-13cdc1a9ef2e"). InnerVolumeSpecName "kube-api-access-klgpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 16:05:59 crc kubenswrapper[4755]: I1210 16:05:59.734812 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e893969f-84c7-4d33-a977-13cdc1a9ef2e-inventory" (OuterVolumeSpecName: "inventory") pod "e893969f-84c7-4d33-a977-13cdc1a9ef2e" (UID: "e893969f-84c7-4d33-a977-13cdc1a9ef2e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 16:05:59 crc kubenswrapper[4755]: I1210 16:05:59.758852 4755 scope.go:117] "RemoveContainer" containerID="069e74fb745b22d2e40409a9c21ff2e40e0bdf9359efe0e863492dacbe4fee35" Dec 10 16:05:59 crc kubenswrapper[4755]: E1210 16:05:59.759237 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:05:59 crc kubenswrapper[4755]: I1210 16:05:59.763099 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e893969f-84c7-4d33-a977-13cdc1a9ef2e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e893969f-84c7-4d33-a977-13cdc1a9ef2e" (UID: "e893969f-84c7-4d33-a977-13cdc1a9ef2e"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 16:05:59 crc kubenswrapper[4755]: I1210 16:05:59.806841 4755 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e893969f-84c7-4d33-a977-13cdc1a9ef2e-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 10 16:05:59 crc kubenswrapper[4755]: I1210 16:05:59.806881 4755 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e893969f-84c7-4d33-a977-13cdc1a9ef2e-inventory\") on node \"crc\" DevicePath \"\"" Dec 10 16:05:59 crc kubenswrapper[4755]: I1210 16:05:59.806910 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-klgpv\" (UniqueName: \"kubernetes.io/projected/e893969f-84c7-4d33-a977-13cdc1a9ef2e-kube-api-access-klgpv\") on node \"crc\" DevicePath \"\"" Dec 10 16:06:00 crc kubenswrapper[4755]: I1210 16:06:00.131719 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-246lg" event={"ID":"e893969f-84c7-4d33-a977-13cdc1a9ef2e","Type":"ContainerDied","Data":"7233f413a7a8daafd32fcef01021c0fb5f1df7c6b3cd8562d2f07c77b0afa490"} Dec 10 16:06:00 crc kubenswrapper[4755]: I1210 16:06:00.131758 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7233f413a7a8daafd32fcef01021c0fb5f1df7c6b3cd8562d2f07c77b0afa490" Dec 10 16:06:00 crc kubenswrapper[4755]: I1210 16:06:00.131798 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-246lg" Dec 10 16:06:01 crc kubenswrapper[4755]: E1210 16:06:01.759732 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:06:01 crc kubenswrapper[4755]: E1210 16:06:01.759732 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:06:13 crc kubenswrapper[4755]: E1210 16:06:13.766122 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:06:14 crc kubenswrapper[4755]: I1210 16:06:14.758553 4755 scope.go:117] "RemoveContainer" containerID="069e74fb745b22d2e40409a9c21ff2e40e0bdf9359efe0e863492dacbe4fee35" Dec 10 16:06:14 crc kubenswrapper[4755]: E1210 16:06:14.759117 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:06:15 crc 
kubenswrapper[4755]: E1210 16:06:15.759287 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:06:17 crc kubenswrapper[4755]: I1210 16:06:17.038610 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bdgt7"] Dec 10 16:06:17 crc kubenswrapper[4755]: E1210 16:06:17.039655 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53fbaf86-07f0-41db-b467-1b101d16fc8d" containerName="keystone-cron" Dec 10 16:06:17 crc kubenswrapper[4755]: I1210 16:06:17.039677 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="53fbaf86-07f0-41db-b467-1b101d16fc8d" containerName="keystone-cron" Dec 10 16:06:17 crc kubenswrapper[4755]: E1210 16:06:17.039710 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e893969f-84c7-4d33-a977-13cdc1a9ef2e" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 10 16:06:17 crc kubenswrapper[4755]: I1210 16:06:17.039719 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="e893969f-84c7-4d33-a977-13cdc1a9ef2e" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 10 16:06:17 crc kubenswrapper[4755]: I1210 16:06:17.040019 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="e893969f-84c7-4d33-a977-13cdc1a9ef2e" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 10 16:06:17 crc kubenswrapper[4755]: I1210 16:06:17.040048 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="53fbaf86-07f0-41db-b467-1b101d16fc8d" containerName="keystone-cron" Dec 10 16:06:17 crc kubenswrapper[4755]: I1210 16:06:17.041136 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bdgt7" Dec 10 16:06:17 crc kubenswrapper[4755]: I1210 16:06:17.045444 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-74mg7" Dec 10 16:06:17 crc kubenswrapper[4755]: I1210 16:06:17.045486 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 10 16:06:17 crc kubenswrapper[4755]: I1210 16:06:17.045613 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 10 16:06:17 crc kubenswrapper[4755]: I1210 16:06:17.046058 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 10 16:06:17 crc kubenswrapper[4755]: I1210 16:06:17.070401 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bdgt7"] Dec 10 16:06:17 crc kubenswrapper[4755]: I1210 16:06:17.159310 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/40fb2154-25cc-4263-beb4-f375fce600d1-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bdgt7\" (UID: \"40fb2154-25cc-4263-beb4-f375fce600d1\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bdgt7" Dec 10 16:06:17 crc kubenswrapper[4755]: I1210 16:06:17.159517 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/40fb2154-25cc-4263-beb4-f375fce600d1-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bdgt7\" (UID: \"40fb2154-25cc-4263-beb4-f375fce600d1\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bdgt7" Dec 10 16:06:17 crc kubenswrapper[4755]: I1210 16:06:17.159691 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-287rz\" (UniqueName: \"kubernetes.io/projected/40fb2154-25cc-4263-beb4-f375fce600d1-kube-api-access-287rz\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bdgt7\" (UID: \"40fb2154-25cc-4263-beb4-f375fce600d1\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bdgt7" Dec 10 16:06:17 crc kubenswrapper[4755]: I1210 16:06:17.261070 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/40fb2154-25cc-4263-beb4-f375fce600d1-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bdgt7\" (UID: \"40fb2154-25cc-4263-beb4-f375fce600d1\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bdgt7" Dec 10 16:06:17 crc kubenswrapper[4755]: I1210 16:06:17.261211 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-287rz\" (UniqueName: \"kubernetes.io/projected/40fb2154-25cc-4263-beb4-f375fce600d1-kube-api-access-287rz\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bdgt7\" (UID: \"40fb2154-25cc-4263-beb4-f375fce600d1\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bdgt7" Dec 10 16:06:17 crc kubenswrapper[4755]: I1210 16:06:17.261264 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/40fb2154-25cc-4263-beb4-f375fce600d1-inventory\") pod 
\"download-cache-edpm-deployment-openstack-edpm-ipam-bdgt7\" (UID: \"40fb2154-25cc-4263-beb4-f375fce600d1\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bdgt7" Dec 10 16:06:17 crc kubenswrapper[4755]: I1210 16:06:17.268010 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/40fb2154-25cc-4263-beb4-f375fce600d1-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bdgt7\" (UID: \"40fb2154-25cc-4263-beb4-f375fce600d1\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bdgt7" Dec 10 16:06:17 crc kubenswrapper[4755]: I1210 16:06:17.268649 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/40fb2154-25cc-4263-beb4-f375fce600d1-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bdgt7\" (UID: \"40fb2154-25cc-4263-beb4-f375fce600d1\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bdgt7" Dec 10 16:06:17 crc kubenswrapper[4755]: I1210 16:06:17.282925 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-287rz\" (UniqueName: \"kubernetes.io/projected/40fb2154-25cc-4263-beb4-f375fce600d1-kube-api-access-287rz\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bdgt7\" (UID: \"40fb2154-25cc-4263-beb4-f375fce600d1\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bdgt7" Dec 10 16:06:17 crc kubenswrapper[4755]: I1210 16:06:17.366248 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bdgt7" Dec 10 16:06:18 crc kubenswrapper[4755]: W1210 16:06:18.079408 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod40fb2154_25cc_4263_beb4_f375fce600d1.slice/crio-a1d5c2458963c00a5319e9464d2e5d1037c47c26cd5900c9326d01b7fec00193 WatchSource:0}: Error finding container a1d5c2458963c00a5319e9464d2e5d1037c47c26cd5900c9326d01b7fec00193: Status 404 returned error can't find the container with id a1d5c2458963c00a5319e9464d2e5d1037c47c26cd5900c9326d01b7fec00193 Dec 10 16:06:18 crc kubenswrapper[4755]: I1210 16:06:18.079487 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bdgt7"] Dec 10 16:06:18 crc kubenswrapper[4755]: I1210 16:06:18.336079 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bdgt7" event={"ID":"40fb2154-25cc-4263-beb4-f375fce600d1","Type":"ContainerStarted","Data":"a1d5c2458963c00a5319e9464d2e5d1037c47c26cd5900c9326d01b7fec00193"} Dec 10 16:06:20 crc kubenswrapper[4755]: I1210 16:06:20.355683 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bdgt7" event={"ID":"40fb2154-25cc-4263-beb4-f375fce600d1","Type":"ContainerStarted","Data":"086ebb389c462796575817fa1576187cfc9fdbf785cc9025ed968918d9df7b95"} Dec 10 16:06:20 crc kubenswrapper[4755]: I1210 16:06:20.374692 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bdgt7" podStartSLOduration=1.728517206 podStartE2EDuration="3.374672209s" podCreationTimestamp="2025-12-10 16:06:17 +0000 UTC" firstStartedPulling="2025-12-10 16:06:18.08157161 +0000 UTC m=+2574.682455242" lastFinishedPulling="2025-12-10 
16:06:19.727726613 +0000 UTC m=+2576.328610245" observedRunningTime="2025-12-10 16:06:20.367795392 +0000 UTC m=+2576.968679014" watchObservedRunningTime="2025-12-10 16:06:20.374672209 +0000 UTC m=+2576.975555841" Dec 10 16:06:24 crc kubenswrapper[4755]: E1210 16:06:24.760497 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:06:29 crc kubenswrapper[4755]: I1210 16:06:29.757618 4755 scope.go:117] "RemoveContainer" containerID="069e74fb745b22d2e40409a9c21ff2e40e0bdf9359efe0e863492dacbe4fee35" Dec 10 16:06:29 crc kubenswrapper[4755]: E1210 16:06:29.759032 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:06:30 crc kubenswrapper[4755]: E1210 16:06:30.759683 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:06:39 crc kubenswrapper[4755]: E1210 16:06:39.762354 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:06:41 crc kubenswrapper[4755]: I1210 16:06:41.757782 4755 scope.go:117] "RemoveContainer" containerID="069e74fb745b22d2e40409a9c21ff2e40e0bdf9359efe0e863492dacbe4fee35" Dec 10 16:06:42 crc kubenswrapper[4755]: I1210 16:06:42.586150 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" event={"ID":"b132a8b9-1c99-414d-8773-229bf36b305d","Type":"ContainerStarted","Data":"8ecc5fc17cefa81deedcf3bc2f08dbc197a77ef30c47fc063b577484619f3b8b"} Dec 10 16:06:45 crc kubenswrapper[4755]: E1210 16:06:45.761281 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:06:50 crc kubenswrapper[4755]: E1210 16:06:50.760152 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:06:59 crc kubenswrapper[4755]: E1210 16:06:59.761571 4755 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:07:05 crc kubenswrapper[4755]: I1210 16:07:05.753859 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xq4bx"] Dec 10 16:07:05 crc kubenswrapper[4755]: I1210 16:07:05.756482 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xq4bx" Dec 10 16:07:05 crc kubenswrapper[4755]: E1210 16:07:05.760658 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:07:05 crc kubenswrapper[4755]: I1210 16:07:05.780656 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pg4h8\" (UniqueName: \"kubernetes.io/projected/779c7b86-e8f4-47a5-9475-867aa4dd9827-kube-api-access-pg4h8\") pod \"redhat-operators-xq4bx\" (UID: \"779c7b86-e8f4-47a5-9475-867aa4dd9827\") " pod="openshift-marketplace/redhat-operators-xq4bx" Dec 10 16:07:05 crc kubenswrapper[4755]: I1210 16:07:05.781141 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/779c7b86-e8f4-47a5-9475-867aa4dd9827-utilities\") pod \"redhat-operators-xq4bx\" (UID: \"779c7b86-e8f4-47a5-9475-867aa4dd9827\") " pod="openshift-marketplace/redhat-operators-xq4bx" Dec 10 16:07:05 crc kubenswrapper[4755]: I1210 16:07:05.781295 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/779c7b86-e8f4-47a5-9475-867aa4dd9827-catalog-content\") pod \"redhat-operators-xq4bx\" (UID: \"779c7b86-e8f4-47a5-9475-867aa4dd9827\") " pod="openshift-marketplace/redhat-operators-xq4bx" Dec 10 16:07:05 crc kubenswrapper[4755]: I1210 16:07:05.800485 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xq4bx"] Dec 10 16:07:05 crc kubenswrapper[4755]: I1210 16:07:05.882991 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/779c7b86-e8f4-47a5-9475-867aa4dd9827-utilities\") pod \"redhat-operators-xq4bx\" (UID: \"779c7b86-e8f4-47a5-9475-867aa4dd9827\") " pod="openshift-marketplace/redhat-operators-xq4bx" Dec 10 16:07:05 crc kubenswrapper[4755]: I1210 16:07:05.883128 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/779c7b86-e8f4-47a5-9475-867aa4dd9827-catalog-content\") pod \"redhat-operators-xq4bx\" (UID: \"779c7b86-e8f4-47a5-9475-867aa4dd9827\") " pod="openshift-marketplace/redhat-operators-xq4bx" Dec 10 16:07:05 crc kubenswrapper[4755]: I1210 16:07:05.883259 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pg4h8\" (UniqueName: \"kubernetes.io/projected/779c7b86-e8f4-47a5-9475-867aa4dd9827-kube-api-access-pg4h8\") pod \"redhat-operators-xq4bx\" (UID: 
\"779c7b86-e8f4-47a5-9475-867aa4dd9827\") " pod="openshift-marketplace/redhat-operators-xq4bx" Dec 10 16:07:05 crc kubenswrapper[4755]: I1210 16:07:05.884706 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/779c7b86-e8f4-47a5-9475-867aa4dd9827-utilities\") pod \"redhat-operators-xq4bx\" (UID: \"779c7b86-e8f4-47a5-9475-867aa4dd9827\") " pod="openshift-marketplace/redhat-operators-xq4bx" Dec 10 16:07:05 crc kubenswrapper[4755]: I1210 16:07:05.884969 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/779c7b86-e8f4-47a5-9475-867aa4dd9827-catalog-content\") pod \"redhat-operators-xq4bx\" (UID: \"779c7b86-e8f4-47a5-9475-867aa4dd9827\") " pod="openshift-marketplace/redhat-operators-xq4bx" Dec 10 16:07:05 crc kubenswrapper[4755]: I1210 16:07:05.903618 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pg4h8\" (UniqueName: \"kubernetes.io/projected/779c7b86-e8f4-47a5-9475-867aa4dd9827-kube-api-access-pg4h8\") pod \"redhat-operators-xq4bx\" (UID: \"779c7b86-e8f4-47a5-9475-867aa4dd9827\") " pod="openshift-marketplace/redhat-operators-xq4bx" Dec 10 16:07:06 crc kubenswrapper[4755]: I1210 16:07:06.076606 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xq4bx" Dec 10 16:07:06 crc kubenswrapper[4755]: I1210 16:07:06.562153 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xq4bx"] Dec 10 16:07:06 crc kubenswrapper[4755]: W1210 16:07:06.563131 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod779c7b86_e8f4_47a5_9475_867aa4dd9827.slice/crio-ccf0c2ead7c5174afac6e816e86bd0baadc0cb54c378cb8c7b21ebc1205d2993 WatchSource:0}: Error finding container ccf0c2ead7c5174afac6e816e86bd0baadc0cb54c378cb8c7b21ebc1205d2993: Status 404 returned error can't find the container with id ccf0c2ead7c5174afac6e816e86bd0baadc0cb54c378cb8c7b21ebc1205d2993 Dec 10 16:07:06 crc kubenswrapper[4755]: I1210 16:07:06.840672 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xq4bx" event={"ID":"779c7b86-e8f4-47a5-9475-867aa4dd9827","Type":"ContainerStarted","Data":"ccf0c2ead7c5174afac6e816e86bd0baadc0cb54c378cb8c7b21ebc1205d2993"} Dec 10 16:07:07 crc kubenswrapper[4755]: I1210 16:07:07.850866 4755 generic.go:334] "Generic (PLEG): container finished" podID="779c7b86-e8f4-47a5-9475-867aa4dd9827" containerID="fe7898794653ee1a4be203423e031234583490951f904662c09d09e25fa61d14" exitCode=0 Dec 10 16:07:07 crc kubenswrapper[4755]: I1210 16:07:07.850918 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xq4bx" event={"ID":"779c7b86-e8f4-47a5-9475-867aa4dd9827","Type":"ContainerDied","Data":"fe7898794653ee1a4be203423e031234583490951f904662c09d09e25fa61d14"} Dec 10 16:07:09 crc kubenswrapper[4755]: I1210 16:07:09.881564 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xq4bx" event={"ID":"779c7b86-e8f4-47a5-9475-867aa4dd9827","Type":"ContainerStarted","Data":"92d7f279145280d1dce5fd76a9fc2c62af6deedaa9f36f894c030f6bdd01884c"} Dec 10 16:07:12 crc kubenswrapper[4755]: I1210 16:07:12.917750 4755 generic.go:334] "Generic (PLEG): container finished" podID="779c7b86-e8f4-47a5-9475-867aa4dd9827" 
containerID="92d7f279145280d1dce5fd76a9fc2c62af6deedaa9f36f894c030f6bdd01884c" exitCode=0 Dec 10 16:07:12 crc kubenswrapper[4755]: I1210 16:07:12.917782 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xq4bx" event={"ID":"779c7b86-e8f4-47a5-9475-867aa4dd9827","Type":"ContainerDied","Data":"92d7f279145280d1dce5fd76a9fc2c62af6deedaa9f36f894c030f6bdd01884c"} Dec 10 16:07:13 crc kubenswrapper[4755]: E1210 16:07:13.766318 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:07:13 crc kubenswrapper[4755]: I1210 16:07:13.929325 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xq4bx" event={"ID":"779c7b86-e8f4-47a5-9475-867aa4dd9827","Type":"ContainerStarted","Data":"ed8270ac592c6fbe3eebde4984ffb0c58a6ac63278a93ee5dbffc9b74ee393fe"} Dec 10 16:07:13 crc kubenswrapper[4755]: I1210 16:07:13.951736 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xq4bx" podStartSLOduration=3.221268818 podStartE2EDuration="8.951705524s" podCreationTimestamp="2025-12-10 16:07:05 +0000 UTC" firstStartedPulling="2025-12-10 16:07:07.852844089 +0000 UTC m=+2624.453727731" lastFinishedPulling="2025-12-10 16:07:13.583280805 +0000 UTC m=+2630.184164437" observedRunningTime="2025-12-10 16:07:13.946838051 +0000 UTC m=+2630.547721693" watchObservedRunningTime="2025-12-10 16:07:13.951705524 +0000 UTC m=+2630.552589156" Dec 10 16:07:16 crc kubenswrapper[4755]: I1210 16:07:16.077773 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xq4bx" Dec 10 16:07:16 crc kubenswrapper[4755]: I1210 16:07:16.078085 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xq4bx" Dec 10 16:07:17 crc kubenswrapper[4755]: I1210 16:07:17.132736 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xq4bx" podUID="779c7b86-e8f4-47a5-9475-867aa4dd9827" containerName="registry-server" probeResult="failure" output=< Dec 10 16:07:17 crc kubenswrapper[4755]: timeout: failed to connect service ":50051" within 1s Dec 10 16:07:17 crc kubenswrapper[4755]: > Dec 10 16:07:18 crc kubenswrapper[4755]: E1210 16:07:18.759817 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:07:24 crc kubenswrapper[4755]: E1210 16:07:24.761179 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:07:26 crc kubenswrapper[4755]: I1210 16:07:26.126489 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-xq4bx" Dec 10 16:07:26 crc kubenswrapper[4755]: I1210 16:07:26.175387 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xq4bx" Dec 10 16:07:26 crc kubenswrapper[4755]: I1210 16:07:26.365768 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xq4bx"] Dec 10 16:07:28 crc kubenswrapper[4755]: I1210 16:07:28.066565 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xq4bx" podUID="779c7b86-e8f4-47a5-9475-867aa4dd9827" containerName="registry-server" containerID="cri-o://ed8270ac592c6fbe3eebde4984ffb0c58a6ac63278a93ee5dbffc9b74ee393fe" gracePeriod=2 Dec 10 16:07:28 crc kubenswrapper[4755]: I1210 16:07:28.982399 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xq4bx" Dec 10 16:07:29 crc kubenswrapper[4755]: I1210 16:07:29.078356 4755 generic.go:334] "Generic (PLEG): container finished" podID="779c7b86-e8f4-47a5-9475-867aa4dd9827" containerID="ed8270ac592c6fbe3eebde4984ffb0c58a6ac63278a93ee5dbffc9b74ee393fe" exitCode=0 Dec 10 16:07:29 crc kubenswrapper[4755]: I1210 16:07:29.078409 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xq4bx" event={"ID":"779c7b86-e8f4-47a5-9475-867aa4dd9827","Type":"ContainerDied","Data":"ed8270ac592c6fbe3eebde4984ffb0c58a6ac63278a93ee5dbffc9b74ee393fe"} Dec 10 16:07:29 crc kubenswrapper[4755]: I1210 16:07:29.078440 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xq4bx" event={"ID":"779c7b86-e8f4-47a5-9475-867aa4dd9827","Type":"ContainerDied","Data":"ccf0c2ead7c5174afac6e816e86bd0baadc0cb54c378cb8c7b21ebc1205d2993"} Dec 10 16:07:29 crc kubenswrapper[4755]: I1210 16:07:29.078508 4755 scope.go:117] "RemoveContainer" containerID="ed8270ac592c6fbe3eebde4984ffb0c58a6ac63278a93ee5dbffc9b74ee393fe" Dec 10 16:07:29 crc kubenswrapper[4755]: I1210 16:07:29.078709 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xq4bx" Dec 10 16:07:29 crc kubenswrapper[4755]: I1210 16:07:29.098514 4755 scope.go:117] "RemoveContainer" containerID="92d7f279145280d1dce5fd76a9fc2c62af6deedaa9f36f894c030f6bdd01884c" Dec 10 16:07:29 crc kubenswrapper[4755]: I1210 16:07:29.112955 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/779c7b86-e8f4-47a5-9475-867aa4dd9827-catalog-content\") pod \"779c7b86-e8f4-47a5-9475-867aa4dd9827\" (UID: \"779c7b86-e8f4-47a5-9475-867aa4dd9827\") " Dec 10 16:07:29 crc kubenswrapper[4755]: I1210 16:07:29.113025 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pg4h8\" (UniqueName: \"kubernetes.io/projected/779c7b86-e8f4-47a5-9475-867aa4dd9827-kube-api-access-pg4h8\") pod \"779c7b86-e8f4-47a5-9475-867aa4dd9827\" (UID: \"779c7b86-e8f4-47a5-9475-867aa4dd9827\") " Dec 10 16:07:29 crc kubenswrapper[4755]: I1210 16:07:29.113260 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/779c7b86-e8f4-47a5-9475-867aa4dd9827-utilities\") pod \"779c7b86-e8f4-47a5-9475-867aa4dd9827\" (UID: \"779c7b86-e8f4-47a5-9475-867aa4dd9827\") " Dec 10 16:07:29 crc kubenswrapper[4755]: I1210 16:07:29.114725 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/779c7b86-e8f4-47a5-9475-867aa4dd9827-utilities" (OuterVolumeSpecName: "utilities") pod "779c7b86-e8f4-47a5-9475-867aa4dd9827" (UID: "779c7b86-e8f4-47a5-9475-867aa4dd9827"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 16:07:29 crc kubenswrapper[4755]: I1210 16:07:29.120010 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/779c7b86-e8f4-47a5-9475-867aa4dd9827-kube-api-access-pg4h8" (OuterVolumeSpecName: "kube-api-access-pg4h8") pod "779c7b86-e8f4-47a5-9475-867aa4dd9827" (UID: "779c7b86-e8f4-47a5-9475-867aa4dd9827"). InnerVolumeSpecName "kube-api-access-pg4h8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 16:07:29 crc kubenswrapper[4755]: I1210 16:07:29.123826 4755 scope.go:117] "RemoveContainer" containerID="fe7898794653ee1a4be203423e031234583490951f904662c09d09e25fa61d14" Dec 10 16:07:29 crc kubenswrapper[4755]: I1210 16:07:29.214601 4755 scope.go:117] "RemoveContainer" containerID="ed8270ac592c6fbe3eebde4984ffb0c58a6ac63278a93ee5dbffc9b74ee393fe" Dec 10 16:07:29 crc kubenswrapper[4755]: E1210 16:07:29.215306 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed8270ac592c6fbe3eebde4984ffb0c58a6ac63278a93ee5dbffc9b74ee393fe\": container with ID starting with ed8270ac592c6fbe3eebde4984ffb0c58a6ac63278a93ee5dbffc9b74ee393fe not found: ID does not exist" containerID="ed8270ac592c6fbe3eebde4984ffb0c58a6ac63278a93ee5dbffc9b74ee393fe" Dec 10 16:07:29 crc kubenswrapper[4755]: I1210 16:07:29.215345 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed8270ac592c6fbe3eebde4984ffb0c58a6ac63278a93ee5dbffc9b74ee393fe"} err="failed to get container status \"ed8270ac592c6fbe3eebde4984ffb0c58a6ac63278a93ee5dbffc9b74ee393fe\": rpc error: code = NotFound desc = could not find container \"ed8270ac592c6fbe3eebde4984ffb0c58a6ac63278a93ee5dbffc9b74ee393fe\": container with ID starting with ed8270ac592c6fbe3eebde4984ffb0c58a6ac63278a93ee5dbffc9b74ee393fe not found: ID does not exist" Dec 10 16:07:29 crc kubenswrapper[4755]: I1210 16:07:29.215370 4755 scope.go:117] "RemoveContainer" containerID="92d7f279145280d1dce5fd76a9fc2c62af6deedaa9f36f894c030f6bdd01884c" Dec 10 16:07:29 crc kubenswrapper[4755]: E1210 16:07:29.215624 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92d7f279145280d1dce5fd76a9fc2c62af6deedaa9f36f894c030f6bdd01884c\": container with ID starting with 92d7f279145280d1dce5fd76a9fc2c62af6deedaa9f36f894c030f6bdd01884c not found: ID does not exist" containerID="92d7f279145280d1dce5fd76a9fc2c62af6deedaa9f36f894c030f6bdd01884c" Dec 10 16:07:29 crc kubenswrapper[4755]: I1210 16:07:29.215654 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92d7f279145280d1dce5fd76a9fc2c62af6deedaa9f36f894c030f6bdd01884c"} err="failed to get container status \"92d7f279145280d1dce5fd76a9fc2c62af6deedaa9f36f894c030f6bdd01884c\": rpc error: code = NotFound desc = could not find container \"92d7f279145280d1dce5fd76a9fc2c62af6deedaa9f36f894c030f6bdd01884c\": container with ID starting with 92d7f279145280d1dce5fd76a9fc2c62af6deedaa9f36f894c030f6bdd01884c not found: ID does not exist" Dec 10 16:07:29 crc kubenswrapper[4755]: I1210 16:07:29.215672 4755 scope.go:117] "RemoveContainer" containerID="fe7898794653ee1a4be203423e031234583490951f904662c09d09e25fa61d14" Dec 10 16:07:29 crc kubenswrapper[4755]: E1210 16:07:29.216092 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe7898794653ee1a4be203423e031234583490951f904662c09d09e25fa61d14\": container with ID starting with fe7898794653ee1a4be203423e031234583490951f904662c09d09e25fa61d14 not found: ID does not exist" containerID="fe7898794653ee1a4be203423e031234583490951f904662c09d09e25fa61d14" Dec 10 16:07:29 crc kubenswrapper[4755]: I1210 16:07:29.216118 4755 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"fe7898794653ee1a4be203423e031234583490951f904662c09d09e25fa61d14"} err="failed to get container status \"fe7898794653ee1a4be203423e031234583490951f904662c09d09e25fa61d14\": rpc error: code = NotFound desc = could not find container \"fe7898794653ee1a4be203423e031234583490951f904662c09d09e25fa61d14\": container with ID starting with fe7898794653ee1a4be203423e031234583490951f904662c09d09e25fa61d14 not found: ID does not exist" Dec 10 16:07:29 crc kubenswrapper[4755]: I1210 16:07:29.216521 4755 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/779c7b86-e8f4-47a5-9475-867aa4dd9827-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 16:07:29 crc kubenswrapper[4755]: I1210 16:07:29.216626 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pg4h8\" (UniqueName: \"kubernetes.io/projected/779c7b86-e8f4-47a5-9475-867aa4dd9827-kube-api-access-pg4h8\") on node \"crc\" DevicePath \"\"" Dec 10 16:07:29 crc kubenswrapper[4755]: I1210 16:07:29.238687 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/779c7b86-e8f4-47a5-9475-867aa4dd9827-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "779c7b86-e8f4-47a5-9475-867aa4dd9827" (UID: "779c7b86-e8f4-47a5-9475-867aa4dd9827"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 16:07:29 crc kubenswrapper[4755]: I1210 16:07:29.318701 4755 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/779c7b86-e8f4-47a5-9475-867aa4dd9827-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 16:07:29 crc kubenswrapper[4755]: I1210 16:07:29.413145 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xq4bx"] Dec 10 16:07:29 crc kubenswrapper[4755]: I1210 16:07:29.422620 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xq4bx"] Dec 10 16:07:29 crc kubenswrapper[4755]: I1210 16:07:29.769540 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="779c7b86-e8f4-47a5-9475-867aa4dd9827" path="/var/lib/kubelet/pods/779c7b86-e8f4-47a5-9475-867aa4dd9827/volumes" Dec 10 16:07:30 crc kubenswrapper[4755]: E1210 16:07:30.759605 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:07:35 crc kubenswrapper[4755]: E1210 16:07:35.760687 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:07:42 crc kubenswrapper[4755]: E1210 16:07:42.762442 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:07:48 
crc kubenswrapper[4755]: E1210 16:07:48.759401 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:07:56 crc kubenswrapper[4755]: E1210 16:07:56.760291 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:08:02 crc kubenswrapper[4755]: E1210 16:08:02.759590 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:08:11 crc kubenswrapper[4755]: E1210 16:08:11.760262 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:08:16 crc kubenswrapper[4755]: E1210 16:08:16.758778 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:08:23 crc kubenswrapper[4755]: E1210 16:08:23.768157 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:08:30 crc kubenswrapper[4755]: E1210 16:08:30.760501 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:08:37 crc kubenswrapper[4755]: E1210 16:08:37.759755 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:08:41 crc kubenswrapper[4755]: E1210 16:08:41.760139 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:08:48 crc kubenswrapper[4755]: E1210 16:08:48.760263 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:08:55 crc kubenswrapper[4755]: E1210 16:08:55.760341 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:09:02 crc kubenswrapper[4755]: E1210 16:09:02.760013 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:09:08 crc kubenswrapper[4755]: E1210 16:09:08.884608 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 10 16:09:08 crc kubenswrapper[4755]: E1210 16:09:08.885198 4755 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 10 16:09:08 crc kubenswrapper[4755]: E1210 16:09:08.885377 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mz4t5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-jfc28_openstack(998863b6-4f48-4c8b-8011-a40377686b99): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 10 16:09:08 crc kubenswrapper[4755]: E1210 16:09:08.887392 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:09:10 crc kubenswrapper[4755]: I1210 16:09:10.359898 4755 patch_prober.go:28] interesting pod/machine-config-daemon-ggt8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 16:09:10 crc kubenswrapper[4755]: I1210 16:09:10.360242 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 16:09:16 crc kubenswrapper[4755]: E1210 16:09:16.760710 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:09:22 crc kubenswrapper[4755]: E1210 16:09:22.760254 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:09:28 crc kubenswrapper[4755]: E1210 16:09:28.760719 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:09:33 crc kubenswrapper[4755]: E1210 16:09:33.767245 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:09:40 crc kubenswrapper[4755]: I1210 16:09:40.359274 4755 patch_prober.go:28] interesting pod/machine-config-daemon-ggt8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 16:09:40 crc kubenswrapper[4755]: I1210 16:09:40.360329 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 16:09:41 crc kubenswrapper[4755]: I1210 16:09:41.762218 4755 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 10 16:09:41 crc kubenswrapper[4755]: E1210 16:09:41.887354 4755 log.go:32] "PullImage from image service failed" err="rpc error: 
code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 10 16:09:41 crc kubenswrapper[4755]: E1210 16:09:41.887784 4755 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 10 16:09:41 crc kubenswrapper[4755]: E1210 16:09:41.888048 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5d4h5b7hfbh5ddh688h9ch55bh7chf6h5ddh68ch94h69h5c5h596h59bh569hfchc4h676hcbh64dhdbh57fh75h5c9h98h59ch679h566h77h9cq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hw9gj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(6d104bea-ecdc-4fe1-9861-fb1a19fce845): ErrImagePull: 
initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 10 16:09:41 crc kubenswrapper[4755]: E1210 16:09:41.889571 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:09:44 crc kubenswrapper[4755]: E1210 16:09:44.763070 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:09:53 crc kubenswrapper[4755]: E1210 16:09:53.766400 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:09:55 crc kubenswrapper[4755]: E1210 16:09:55.761740 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:10:07 crc kubenswrapper[4755]: E1210 16:10:07.760070 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:10:08 crc kubenswrapper[4755]: E1210 16:10:08.758781 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:10:09 crc kubenswrapper[4755]: I1210 16:10:09.932900 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-t52zx"] Dec 10 16:10:09 crc kubenswrapper[4755]: E1210 16:10:09.933739 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="779c7b86-e8f4-47a5-9475-867aa4dd9827" containerName="extract-content" Dec 10 16:10:09 crc kubenswrapper[4755]: I1210 16:10:09.933756 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="779c7b86-e8f4-47a5-9475-867aa4dd9827" 
containerName="extract-content" Dec 10 16:10:09 crc kubenswrapper[4755]: E1210 16:10:09.933808 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="779c7b86-e8f4-47a5-9475-867aa4dd9827" containerName="registry-server" Dec 10 16:10:09 crc kubenswrapper[4755]: I1210 16:10:09.933814 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="779c7b86-e8f4-47a5-9475-867aa4dd9827" containerName="registry-server" Dec 10 16:10:09 crc kubenswrapper[4755]: E1210 16:10:09.933828 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="779c7b86-e8f4-47a5-9475-867aa4dd9827" containerName="extract-utilities" Dec 10 16:10:09 crc kubenswrapper[4755]: I1210 16:10:09.933836 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="779c7b86-e8f4-47a5-9475-867aa4dd9827" containerName="extract-utilities" Dec 10 16:10:09 crc kubenswrapper[4755]: I1210 16:10:09.934035 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="779c7b86-e8f4-47a5-9475-867aa4dd9827" containerName="registry-server" Dec 10 16:10:09 crc kubenswrapper[4755]: I1210 16:10:09.935659 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t52zx" Dec 10 16:10:09 crc kubenswrapper[4755]: I1210 16:10:09.941003 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t52zx"] Dec 10 16:10:10 crc kubenswrapper[4755]: I1210 16:10:10.042582 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9aba7980-53b3-40c4-9549-3cfd5ee68274-catalog-content\") pod \"redhat-marketplace-t52zx\" (UID: \"9aba7980-53b3-40c4-9549-3cfd5ee68274\") " pod="openshift-marketplace/redhat-marketplace-t52zx" Dec 10 16:10:10 crc kubenswrapper[4755]: I1210 16:10:10.042686 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9aba7980-53b3-40c4-9549-3cfd5ee68274-utilities\") pod \"redhat-marketplace-t52zx\" (UID: \"9aba7980-53b3-40c4-9549-3cfd5ee68274\") " pod="openshift-marketplace/redhat-marketplace-t52zx" Dec 10 16:10:10 crc kubenswrapper[4755]: I1210 16:10:10.042749 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vrzp\" (UniqueName: \"kubernetes.io/projected/9aba7980-53b3-40c4-9549-3cfd5ee68274-kube-api-access-4vrzp\") pod \"redhat-marketplace-t52zx\" (UID: \"9aba7980-53b3-40c4-9549-3cfd5ee68274\") " pod="openshift-marketplace/redhat-marketplace-t52zx" Dec 10 16:10:10 crc kubenswrapper[4755]: I1210 16:10:10.144413 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9aba7980-53b3-40c4-9549-3cfd5ee68274-catalog-content\") pod \"redhat-marketplace-t52zx\" (UID: \"9aba7980-53b3-40c4-9549-3cfd5ee68274\") " pod="openshift-marketplace/redhat-marketplace-t52zx" Dec 10 16:10:10 crc kubenswrapper[4755]: I1210 16:10:10.144495 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9aba7980-53b3-40c4-9549-3cfd5ee68274-utilities\") pod \"redhat-marketplace-t52zx\" (UID: \"9aba7980-53b3-40c4-9549-3cfd5ee68274\") " pod="openshift-marketplace/redhat-marketplace-t52zx" Dec 10 16:10:10 crc kubenswrapper[4755]: I1210 16:10:10.144534 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-4vrzp\" (UniqueName: \"kubernetes.io/projected/9aba7980-53b3-40c4-9549-3cfd5ee68274-kube-api-access-4vrzp\") pod \"redhat-marketplace-t52zx\" (UID: \"9aba7980-53b3-40c4-9549-3cfd5ee68274\") " pod="openshift-marketplace/redhat-marketplace-t52zx" Dec 10 16:10:10 crc kubenswrapper[4755]: I1210 16:10:10.145172 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9aba7980-53b3-40c4-9549-3cfd5ee68274-catalog-content\") pod \"redhat-marketplace-t52zx\" (UID: \"9aba7980-53b3-40c4-9549-3cfd5ee68274\") " pod="openshift-marketplace/redhat-marketplace-t52zx" Dec 10 16:10:10 crc kubenswrapper[4755]: I1210 16:10:10.145204 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9aba7980-53b3-40c4-9549-3cfd5ee68274-utilities\") pod \"redhat-marketplace-t52zx\" (UID: \"9aba7980-53b3-40c4-9549-3cfd5ee68274\") " pod="openshift-marketplace/redhat-marketplace-t52zx" Dec 10 16:10:10 crc kubenswrapper[4755]: I1210 16:10:10.163888 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vrzp\" (UniqueName: \"kubernetes.io/projected/9aba7980-53b3-40c4-9549-3cfd5ee68274-kube-api-access-4vrzp\") pod \"redhat-marketplace-t52zx\" (UID: \"9aba7980-53b3-40c4-9549-3cfd5ee68274\") " pod="openshift-marketplace/redhat-marketplace-t52zx" Dec 10 16:10:10 crc kubenswrapper[4755]: I1210 16:10:10.284963 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t52zx" Dec 10 16:10:10 crc kubenswrapper[4755]: I1210 16:10:10.358866 4755 patch_prober.go:28] interesting pod/machine-config-daemon-ggt8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 16:10:10 crc kubenswrapper[4755]: I1210 16:10:10.359109 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 16:10:10 crc kubenswrapper[4755]: I1210 16:10:10.359155 4755 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" Dec 10 16:10:10 crc kubenswrapper[4755]: I1210 16:10:10.360037 4755 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8ecc5fc17cefa81deedcf3bc2f08dbc197a77ef30c47fc063b577484619f3b8b"} pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 10 16:10:10 crc kubenswrapper[4755]: I1210 16:10:10.360124 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" containerName="machine-config-daemon" containerID="cri-o://8ecc5fc17cefa81deedcf3bc2f08dbc197a77ef30c47fc063b577484619f3b8b" gracePeriod=600 Dec 10 16:10:10 crc kubenswrapper[4755]: I1210 16:10:10.626074 4755 generic.go:334] "Generic (PLEG): container finished" 
podID="b132a8b9-1c99-414d-8773-229bf36b305d" containerID="8ecc5fc17cefa81deedcf3bc2f08dbc197a77ef30c47fc063b577484619f3b8b" exitCode=0 Dec 10 16:10:10 crc kubenswrapper[4755]: I1210 16:10:10.626182 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" event={"ID":"b132a8b9-1c99-414d-8773-229bf36b305d","Type":"ContainerDied","Data":"8ecc5fc17cefa81deedcf3bc2f08dbc197a77ef30c47fc063b577484619f3b8b"} Dec 10 16:10:10 crc kubenswrapper[4755]: I1210 16:10:10.626542 4755 scope.go:117] "RemoveContainer" containerID="069e74fb745b22d2e40409a9c21ff2e40e0bdf9359efe0e863492dacbe4fee35" Dec 10 16:10:10 crc kubenswrapper[4755]: W1210 16:10:10.817360 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9aba7980_53b3_40c4_9549_3cfd5ee68274.slice/crio-841245ad92bc8a48040f2b8e6a21bcaefedafb5dc9a32f7b62c023566d7390b4 WatchSource:0}: Error finding container 841245ad92bc8a48040f2b8e6a21bcaefedafb5dc9a32f7b62c023566d7390b4: Status 404 returned error can't find the container with id 841245ad92bc8a48040f2b8e6a21bcaefedafb5dc9a32f7b62c023566d7390b4 Dec 10 16:10:10 crc kubenswrapper[4755]: I1210 16:10:10.817507 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t52zx"] Dec 10 16:10:11 crc kubenswrapper[4755]: I1210 16:10:11.635405 4755 generic.go:334] "Generic (PLEG): container finished" podID="9aba7980-53b3-40c4-9549-3cfd5ee68274" containerID="8fcd14fcaa78cf095ddf281e916c242129287f56586b3936dd2c34e2e5cbbff2" exitCode=0 Dec 10 16:10:11 crc kubenswrapper[4755]: I1210 16:10:11.635480 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t52zx" event={"ID":"9aba7980-53b3-40c4-9549-3cfd5ee68274","Type":"ContainerDied","Data":"8fcd14fcaa78cf095ddf281e916c242129287f56586b3936dd2c34e2e5cbbff2"} Dec 10 16:10:11 crc kubenswrapper[4755]: I1210 16:10:11.635954 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t52zx" event={"ID":"9aba7980-53b3-40c4-9549-3cfd5ee68274","Type":"ContainerStarted","Data":"841245ad92bc8a48040f2b8e6a21bcaefedafb5dc9a32f7b62c023566d7390b4"} Dec 10 16:10:11 crc kubenswrapper[4755]: I1210 16:10:11.638512 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" event={"ID":"b132a8b9-1c99-414d-8773-229bf36b305d","Type":"ContainerStarted","Data":"30d53fb1dd018f11a561066e37ed5aa32e4d43a5cd91c13627ef984f5187f3fd"} Dec 10 16:10:14 crc kubenswrapper[4755]: I1210 16:10:14.671730 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t52zx" event={"ID":"9aba7980-53b3-40c4-9549-3cfd5ee68274","Type":"ContainerStarted","Data":"fbbff494fde6a4a5b3bc80c8b9cff7890c824e78a6e876f8d5787e839d5f803b"} Dec 10 16:10:15 crc kubenswrapper[4755]: I1210 16:10:15.682175 4755 generic.go:334] "Generic (PLEG): container finished" podID="9aba7980-53b3-40c4-9549-3cfd5ee68274" containerID="fbbff494fde6a4a5b3bc80c8b9cff7890c824e78a6e876f8d5787e839d5f803b" exitCode=0 Dec 10 16:10:15 crc kubenswrapper[4755]: I1210 16:10:15.682524 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t52zx" event={"ID":"9aba7980-53b3-40c4-9549-3cfd5ee68274","Type":"ContainerDied","Data":"fbbff494fde6a4a5b3bc80c8b9cff7890c824e78a6e876f8d5787e839d5f803b"} Dec 10 16:10:17 crc kubenswrapper[4755]: I1210 
16:10:17.711499 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t52zx" event={"ID":"9aba7980-53b3-40c4-9549-3cfd5ee68274","Type":"ContainerStarted","Data":"1856dbc3e4f96f85a61b855ee086bb72a8e8f870f014f9398de4a87d6e5a05f0"} Dec 10 16:10:17 crc kubenswrapper[4755]: I1210 16:10:17.733675 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-t52zx" podStartSLOduration=3.698645998 podStartE2EDuration="8.733657237s" podCreationTimestamp="2025-12-10 16:10:09 +0000 UTC" firstStartedPulling="2025-12-10 16:10:11.641151848 +0000 UTC m=+2808.242035480" lastFinishedPulling="2025-12-10 16:10:16.676163087 +0000 UTC m=+2813.277046719" observedRunningTime="2025-12-10 16:10:17.728273819 +0000 UTC m=+2814.329157461" watchObservedRunningTime="2025-12-10 16:10:17.733657237 +0000 UTC m=+2814.334540869" Dec 10 16:10:20 crc kubenswrapper[4755]: I1210 16:10:20.285572 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-t52zx" Dec 10 16:10:20 crc kubenswrapper[4755]: I1210 16:10:20.286747 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-t52zx" Dec 10 16:10:20 crc kubenswrapper[4755]: I1210 16:10:20.335239 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-t52zx" Dec 10 16:10:21 crc kubenswrapper[4755]: E1210 16:10:21.762959 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:10:21 crc kubenswrapper[4755]: I1210 16:10:21.821901 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-t52zx" Dec 10 16:10:21 crc kubenswrapper[4755]: I1210 16:10:21.874166 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-t52zx"] Dec 10 16:10:22 crc kubenswrapper[4755]: E1210 16:10:22.762055 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:10:23 crc kubenswrapper[4755]: I1210 16:10:23.801612 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-t52zx" podUID="9aba7980-53b3-40c4-9549-3cfd5ee68274" containerName="registry-server" containerID="cri-o://1856dbc3e4f96f85a61b855ee086bb72a8e8f870f014f9398de4a87d6e5a05f0" gracePeriod=2 Dec 10 16:10:24 crc kubenswrapper[4755]: I1210 16:10:24.359874 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t52zx" Dec 10 16:10:24 crc kubenswrapper[4755]: I1210 16:10:24.510062 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9aba7980-53b3-40c4-9549-3cfd5ee68274-catalog-content\") pod \"9aba7980-53b3-40c4-9549-3cfd5ee68274\" (UID: \"9aba7980-53b3-40c4-9549-3cfd5ee68274\") " Dec 10 16:10:24 crc kubenswrapper[4755]: I1210 16:10:24.510282 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vrzp\" (UniqueName: \"kubernetes.io/projected/9aba7980-53b3-40c4-9549-3cfd5ee68274-kube-api-access-4vrzp\") pod \"9aba7980-53b3-40c4-9549-3cfd5ee68274\" (UID: \"9aba7980-53b3-40c4-9549-3cfd5ee68274\") " Dec 10 16:10:24 crc kubenswrapper[4755]: I1210 16:10:24.510417 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9aba7980-53b3-40c4-9549-3cfd5ee68274-utilities\") pod \"9aba7980-53b3-40c4-9549-3cfd5ee68274\" (UID: \"9aba7980-53b3-40c4-9549-3cfd5ee68274\") " Dec 10 16:10:24 crc kubenswrapper[4755]: I1210 16:10:24.511208 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9aba7980-53b3-40c4-9549-3cfd5ee68274-utilities" (OuterVolumeSpecName: "utilities") pod "9aba7980-53b3-40c4-9549-3cfd5ee68274" (UID: "9aba7980-53b3-40c4-9549-3cfd5ee68274"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 16:10:24 crc kubenswrapper[4755]: I1210 16:10:24.516658 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9aba7980-53b3-40c4-9549-3cfd5ee68274-kube-api-access-4vrzp" (OuterVolumeSpecName: "kube-api-access-4vrzp") pod "9aba7980-53b3-40c4-9549-3cfd5ee68274" (UID: "9aba7980-53b3-40c4-9549-3cfd5ee68274"). InnerVolumeSpecName "kube-api-access-4vrzp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 16:10:24 crc kubenswrapper[4755]: I1210 16:10:24.535317 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9aba7980-53b3-40c4-9549-3cfd5ee68274-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9aba7980-53b3-40c4-9549-3cfd5ee68274" (UID: "9aba7980-53b3-40c4-9549-3cfd5ee68274"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 16:10:24 crc kubenswrapper[4755]: I1210 16:10:24.612396 4755 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9aba7980-53b3-40c4-9549-3cfd5ee68274-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 16:10:24 crc kubenswrapper[4755]: I1210 16:10:24.612436 4755 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9aba7980-53b3-40c4-9549-3cfd5ee68274-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 16:10:24 crc kubenswrapper[4755]: I1210 16:10:24.612453 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vrzp\" (UniqueName: \"kubernetes.io/projected/9aba7980-53b3-40c4-9549-3cfd5ee68274-kube-api-access-4vrzp\") on node \"crc\" DevicePath \"\"" Dec 10 16:10:24 crc kubenswrapper[4755]: I1210 16:10:24.813776 4755 generic.go:334] "Generic (PLEG): container finished" podID="9aba7980-53b3-40c4-9549-3cfd5ee68274" containerID="1856dbc3e4f96f85a61b855ee086bb72a8e8f870f014f9398de4a87d6e5a05f0" exitCode=0 Dec 10 16:10:24 crc kubenswrapper[4755]: I1210 16:10:24.813830 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t52zx" event={"ID":"9aba7980-53b3-40c4-9549-3cfd5ee68274","Type":"ContainerDied","Data":"1856dbc3e4f96f85a61b855ee086bb72a8e8f870f014f9398de4a87d6e5a05f0"} Dec 10 16:10:24 crc kubenswrapper[4755]: I1210 16:10:24.813884 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t52zx" event={"ID":"9aba7980-53b3-40c4-9549-3cfd5ee68274","Type":"ContainerDied","Data":"841245ad92bc8a48040f2b8e6a21bcaefedafb5dc9a32f7b62c023566d7390b4"} Dec 10 16:10:24 crc kubenswrapper[4755]: I1210 16:10:24.813895 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t52zx" Dec 10 16:10:24 crc kubenswrapper[4755]: I1210 16:10:24.813903 4755 scope.go:117] "RemoveContainer" containerID="1856dbc3e4f96f85a61b855ee086bb72a8e8f870f014f9398de4a87d6e5a05f0" Dec 10 16:10:24 crc kubenswrapper[4755]: I1210 16:10:24.832911 4755 scope.go:117] "RemoveContainer" containerID="fbbff494fde6a4a5b3bc80c8b9cff7890c824e78a6e876f8d5787e839d5f803b" Dec 10 16:10:24 crc kubenswrapper[4755]: I1210 16:10:24.861892 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-t52zx"] Dec 10 16:10:24 crc kubenswrapper[4755]: I1210 16:10:24.875067 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-t52zx"] Dec 10 16:10:24 crc kubenswrapper[4755]: I1210 16:10:24.879266 4755 scope.go:117] "RemoveContainer" containerID="8fcd14fcaa78cf095ddf281e916c242129287f56586b3936dd2c34e2e5cbbff2" Dec 10 16:10:24 crc kubenswrapper[4755]: I1210 16:10:24.923342 4755 scope.go:117] "RemoveContainer" containerID="1856dbc3e4f96f85a61b855ee086bb72a8e8f870f014f9398de4a87d6e5a05f0" Dec 10 16:10:24 crc kubenswrapper[4755]: E1210 16:10:24.923721 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1856dbc3e4f96f85a61b855ee086bb72a8e8f870f014f9398de4a87d6e5a05f0\": container with ID starting with 1856dbc3e4f96f85a61b855ee086bb72a8e8f870f014f9398de4a87d6e5a05f0 not found: ID does not exist" containerID="1856dbc3e4f96f85a61b855ee086bb72a8e8f870f014f9398de4a87d6e5a05f0" Dec 10 16:10:24 crc kubenswrapper[4755]: I1210 16:10:24.923762 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1856dbc3e4f96f85a61b855ee086bb72a8e8f870f014f9398de4a87d6e5a05f0"} err="failed to get container status \"1856dbc3e4f96f85a61b855ee086bb72a8e8f870f014f9398de4a87d6e5a05f0\": rpc error: code = NotFound desc = could not find container \"1856dbc3e4f96f85a61b855ee086bb72a8e8f870f014f9398de4a87d6e5a05f0\": container with ID starting with 1856dbc3e4f96f85a61b855ee086bb72a8e8f870f014f9398de4a87d6e5a05f0 not found: ID does not exist" Dec 10 16:10:24 crc kubenswrapper[4755]: I1210 16:10:24.923787 4755 scope.go:117] "RemoveContainer" containerID="fbbff494fde6a4a5b3bc80c8b9cff7890c824e78a6e876f8d5787e839d5f803b" Dec 10 16:10:24 crc kubenswrapper[4755]: E1210 16:10:24.923994 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbbff494fde6a4a5b3bc80c8b9cff7890c824e78a6e876f8d5787e839d5f803b\": container with ID starting with fbbff494fde6a4a5b3bc80c8b9cff7890c824e78a6e876f8d5787e839d5f803b not found: ID does not exist" containerID="fbbff494fde6a4a5b3bc80c8b9cff7890c824e78a6e876f8d5787e839d5f803b" Dec 10 16:10:24 crc kubenswrapper[4755]: I1210 16:10:24.924019 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbbff494fde6a4a5b3bc80c8b9cff7890c824e78a6e876f8d5787e839d5f803b"} err="failed to get container status \"fbbff494fde6a4a5b3bc80c8b9cff7890c824e78a6e876f8d5787e839d5f803b\": rpc error: code = NotFound desc = could not find container \"fbbff494fde6a4a5b3bc80c8b9cff7890c824e78a6e876f8d5787e839d5f803b\": container with ID starting with fbbff494fde6a4a5b3bc80c8b9cff7890c824e78a6e876f8d5787e839d5f803b not found: ID does not exist" Dec 10 16:10:24 crc kubenswrapper[4755]: I1210 16:10:24.924035 4755 scope.go:117] "RemoveContainer" 
containerID="8fcd14fcaa78cf095ddf281e916c242129287f56586b3936dd2c34e2e5cbbff2" Dec 10 16:10:24 crc kubenswrapper[4755]: E1210 16:10:24.924236 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8fcd14fcaa78cf095ddf281e916c242129287f56586b3936dd2c34e2e5cbbff2\": container with ID starting with 8fcd14fcaa78cf095ddf281e916c242129287f56586b3936dd2c34e2e5cbbff2 not found: ID does not exist" containerID="8fcd14fcaa78cf095ddf281e916c242129287f56586b3936dd2c34e2e5cbbff2" Dec 10 16:10:24 crc kubenswrapper[4755]: I1210 16:10:24.924263 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fcd14fcaa78cf095ddf281e916c242129287f56586b3936dd2c34e2e5cbbff2"} err="failed to get container status \"8fcd14fcaa78cf095ddf281e916c242129287f56586b3936dd2c34e2e5cbbff2\": rpc error: code = NotFound desc = could not find container \"8fcd14fcaa78cf095ddf281e916c242129287f56586b3936dd2c34e2e5cbbff2\": container with ID starting with 8fcd14fcaa78cf095ddf281e916c242129287f56586b3936dd2c34e2e5cbbff2 not found: ID does not exist" Dec 10 16:10:25 crc kubenswrapper[4755]: I1210 16:10:25.770367 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9aba7980-53b3-40c4-9549-3cfd5ee68274" path="/var/lib/kubelet/pods/9aba7980-53b3-40c4-9549-3cfd5ee68274/volumes" Dec 10 16:10:32 crc kubenswrapper[4755]: E1210 16:10:32.768300 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:10:34 crc kubenswrapper[4755]: E1210 16:10:34.759165 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:10:45 crc kubenswrapper[4755]: E1210 16:10:45.760141 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:10:45 crc kubenswrapper[4755]: E1210 16:10:45.760193 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:10:56 crc kubenswrapper[4755]: E1210 16:10:56.759429 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:11:00 crc kubenswrapper[4755]: E1210 16:11:00.761205 4755 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:11:07 crc kubenswrapper[4755]: E1210 16:11:07.758968 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:11:07 crc kubenswrapper[4755]: I1210 16:11:07.964260 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jvrcl"] Dec 10 16:11:07 crc kubenswrapper[4755]: E1210 16:11:07.965003 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9aba7980-53b3-40c4-9549-3cfd5ee68274" containerName="extract-content" Dec 10 16:11:07 crc kubenswrapper[4755]: I1210 16:11:07.965028 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="9aba7980-53b3-40c4-9549-3cfd5ee68274" containerName="extract-content" Dec 10 16:11:07 crc kubenswrapper[4755]: E1210 16:11:07.965042 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9aba7980-53b3-40c4-9549-3cfd5ee68274" containerName="extract-utilities" Dec 10 16:11:07 crc kubenswrapper[4755]: I1210 16:11:07.965050 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="9aba7980-53b3-40c4-9549-3cfd5ee68274" containerName="extract-utilities" Dec 10 16:11:07 crc kubenswrapper[4755]: E1210 16:11:07.965068 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9aba7980-53b3-40c4-9549-3cfd5ee68274" containerName="registry-server" Dec 10 16:11:07 crc kubenswrapper[4755]: I1210 16:11:07.965074 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="9aba7980-53b3-40c4-9549-3cfd5ee68274" containerName="registry-server" Dec 10 16:11:07 crc kubenswrapper[4755]: I1210 16:11:07.965276 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="9aba7980-53b3-40c4-9549-3cfd5ee68274" containerName="registry-server" Dec 10 16:11:07 crc kubenswrapper[4755]: I1210 16:11:07.966995 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jvrcl" Dec 10 16:11:08 crc kubenswrapper[4755]: I1210 16:11:08.012584 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jvrcl"] Dec 10 16:11:08 crc kubenswrapper[4755]: I1210 16:11:08.168379 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbb9b263-ea76-4a75-ac11-86b402bbd9e8-catalog-content\") pod \"certified-operators-jvrcl\" (UID: \"bbb9b263-ea76-4a75-ac11-86b402bbd9e8\") " pod="openshift-marketplace/certified-operators-jvrcl" Dec 10 16:11:08 crc kubenswrapper[4755]: I1210 16:11:08.168454 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7c7s\" (UniqueName: \"kubernetes.io/projected/bbb9b263-ea76-4a75-ac11-86b402bbd9e8-kube-api-access-q7c7s\") pod \"certified-operators-jvrcl\" (UID: \"bbb9b263-ea76-4a75-ac11-86b402bbd9e8\") " pod="openshift-marketplace/certified-operators-jvrcl" Dec 10 16:11:08 crc kubenswrapper[4755]: I1210 16:11:08.168716 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbb9b263-ea76-4a75-ac11-86b402bbd9e8-utilities\") pod \"certified-operators-jvrcl\" (UID: \"bbb9b263-ea76-4a75-ac11-86b402bbd9e8\") " pod="openshift-marketplace/certified-operators-jvrcl" Dec 10 16:11:08 crc kubenswrapper[4755]: I1210 16:11:08.271151 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbb9b263-ea76-4a75-ac11-86b402bbd9e8-utilities\") pod \"certified-operators-jvrcl\" (UID: \"bbb9b263-ea76-4a75-ac11-86b402bbd9e8\") " pod="openshift-marketplace/certified-operators-jvrcl" Dec 10 16:11:08 crc kubenswrapper[4755]: I1210 16:11:08.271362 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7c7s\" (UniqueName: \"kubernetes.io/projected/bbb9b263-ea76-4a75-ac11-86b402bbd9e8-kube-api-access-q7c7s\") pod \"certified-operators-jvrcl\" (UID: \"bbb9b263-ea76-4a75-ac11-86b402bbd9e8\") " pod="openshift-marketplace/certified-operators-jvrcl" Dec 10 16:11:08 crc kubenswrapper[4755]: I1210 16:11:08.271387 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbb9b263-ea76-4a75-ac11-86b402bbd9e8-catalog-content\") pod \"certified-operators-jvrcl\" (UID: \"bbb9b263-ea76-4a75-ac11-86b402bbd9e8\") " pod="openshift-marketplace/certified-operators-jvrcl" Dec 10 16:11:08 crc kubenswrapper[4755]: I1210 16:11:08.271731 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbb9b263-ea76-4a75-ac11-86b402bbd9e8-utilities\") pod \"certified-operators-jvrcl\" (UID: \"bbb9b263-ea76-4a75-ac11-86b402bbd9e8\") " pod="openshift-marketplace/certified-operators-jvrcl" Dec 10 16:11:08 crc kubenswrapper[4755]: I1210 16:11:08.271814 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbb9b263-ea76-4a75-ac11-86b402bbd9e8-catalog-content\") pod \"certified-operators-jvrcl\" (UID: \"bbb9b263-ea76-4a75-ac11-86b402bbd9e8\") " pod="openshift-marketplace/certified-operators-jvrcl" Dec 10 16:11:08 crc kubenswrapper[4755]: I1210 16:11:08.312432 4755 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-q7c7s\" (UniqueName: \"kubernetes.io/projected/bbb9b263-ea76-4a75-ac11-86b402bbd9e8-kube-api-access-q7c7s\") pod \"certified-operators-jvrcl\" (UID: \"bbb9b263-ea76-4a75-ac11-86b402bbd9e8\") " pod="openshift-marketplace/certified-operators-jvrcl" Dec 10 16:11:08 crc kubenswrapper[4755]: I1210 16:11:08.322749 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jvrcl" Dec 10 16:11:08 crc kubenswrapper[4755]: I1210 16:11:08.855753 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jvrcl"] Dec 10 16:11:09 crc kubenswrapper[4755]: I1210 16:11:09.233295 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jvrcl" event={"ID":"bbb9b263-ea76-4a75-ac11-86b402bbd9e8","Type":"ContainerStarted","Data":"a175fd570dc85d8863682585bc204753cf4b6a2e2f74a65a826e96385a09b639"} Dec 10 16:11:10 crc kubenswrapper[4755]: I1210 16:11:10.243356 4755 generic.go:334] "Generic (PLEG): container finished" podID="bbb9b263-ea76-4a75-ac11-86b402bbd9e8" containerID="2ff412fc125d35a042507159aa8f1583e387356972b078073362b929df48367e" exitCode=0 Dec 10 16:11:10 crc kubenswrapper[4755]: I1210 16:11:10.243406 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jvrcl" event={"ID":"bbb9b263-ea76-4a75-ac11-86b402bbd9e8","Type":"ContainerDied","Data":"2ff412fc125d35a042507159aa8f1583e387356972b078073362b929df48367e"} Dec 10 16:11:12 crc kubenswrapper[4755]: I1210 16:11:12.264867 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jvrcl" event={"ID":"bbb9b263-ea76-4a75-ac11-86b402bbd9e8","Type":"ContainerStarted","Data":"44179cc12721e8acd81bd53271a3c23f1da9254f946f8a5acc7ca18c1e6e8f54"} Dec 10 16:11:13 crc kubenswrapper[4755]: I1210 16:11:13.277708 4755 generic.go:334] "Generic (PLEG): container finished" podID="bbb9b263-ea76-4a75-ac11-86b402bbd9e8" containerID="44179cc12721e8acd81bd53271a3c23f1da9254f946f8a5acc7ca18c1e6e8f54" exitCode=0 Dec 10 16:11:13 crc kubenswrapper[4755]: I1210 16:11:13.277878 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jvrcl" event={"ID":"bbb9b263-ea76-4a75-ac11-86b402bbd9e8","Type":"ContainerDied","Data":"44179cc12721e8acd81bd53271a3c23f1da9254f946f8a5acc7ca18c1e6e8f54"} Dec 10 16:11:14 crc kubenswrapper[4755]: I1210 16:11:14.290339 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jvrcl" event={"ID":"bbb9b263-ea76-4a75-ac11-86b402bbd9e8","Type":"ContainerStarted","Data":"c96dc5c9381b8c0b8dab4c5c0f6baa139d839716d3dabef533c8630e40f3f877"} Dec 10 16:11:14 crc kubenswrapper[4755]: I1210 16:11:14.322838 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jvrcl" podStartSLOduration=3.678978695 podStartE2EDuration="7.322793319s" podCreationTimestamp="2025-12-10 16:11:07 +0000 UTC" firstStartedPulling="2025-12-10 16:11:10.245699971 +0000 UTC m=+2866.846583603" lastFinishedPulling="2025-12-10 16:11:13.889514595 +0000 UTC m=+2870.490398227" observedRunningTime="2025-12-10 16:11:14.311970302 +0000 UTC m=+2870.912853944" watchObservedRunningTime="2025-12-10 16:11:14.322793319 +0000 UTC m=+2870.923676951" Dec 10 16:11:14 crc kubenswrapper[4755]: E1210 16:11:14.759129 4755 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:11:18 crc kubenswrapper[4755]: I1210 16:11:18.323279 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jvrcl" Dec 10 16:11:18 crc kubenswrapper[4755]: I1210 16:11:18.325045 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jvrcl" Dec 10 16:11:18 crc kubenswrapper[4755]: I1210 16:11:18.375733 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jvrcl" Dec 10 16:11:19 crc kubenswrapper[4755]: I1210 16:11:19.383626 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jvrcl" Dec 10 16:11:19 crc kubenswrapper[4755]: I1210 16:11:19.441661 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jvrcl"] Dec 10 16:11:19 crc kubenswrapper[4755]: E1210 16:11:19.760366 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:11:21 crc kubenswrapper[4755]: I1210 16:11:21.350922 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jvrcl" podUID="bbb9b263-ea76-4a75-ac11-86b402bbd9e8" containerName="registry-server" containerID="cri-o://c96dc5c9381b8c0b8dab4c5c0f6baa139d839716d3dabef533c8630e40f3f877" gracePeriod=2 Dec 10 16:11:22 crc kubenswrapper[4755]: I1210 16:11:22.012064 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jvrcl" Dec 10 16:11:22 crc kubenswrapper[4755]: I1210 16:11:22.110852 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbb9b263-ea76-4a75-ac11-86b402bbd9e8-catalog-content\") pod \"bbb9b263-ea76-4a75-ac11-86b402bbd9e8\" (UID: \"bbb9b263-ea76-4a75-ac11-86b402bbd9e8\") " Dec 10 16:11:22 crc kubenswrapper[4755]: I1210 16:11:22.110930 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbb9b263-ea76-4a75-ac11-86b402bbd9e8-utilities\") pod \"bbb9b263-ea76-4a75-ac11-86b402bbd9e8\" (UID: \"bbb9b263-ea76-4a75-ac11-86b402bbd9e8\") " Dec 10 16:11:22 crc kubenswrapper[4755]: I1210 16:11:22.111197 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7c7s\" (UniqueName: \"kubernetes.io/projected/bbb9b263-ea76-4a75-ac11-86b402bbd9e8-kube-api-access-q7c7s\") pod \"bbb9b263-ea76-4a75-ac11-86b402bbd9e8\" (UID: \"bbb9b263-ea76-4a75-ac11-86b402bbd9e8\") " Dec 10 16:11:22 crc kubenswrapper[4755]: I1210 16:11:22.114197 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbb9b263-ea76-4a75-ac11-86b402bbd9e8-utilities" (OuterVolumeSpecName: "utilities") pod "bbb9b263-ea76-4a75-ac11-86b402bbd9e8" (UID: "bbb9b263-ea76-4a75-ac11-86b402bbd9e8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 16:11:22 crc kubenswrapper[4755]: I1210 16:11:22.119895 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbb9b263-ea76-4a75-ac11-86b402bbd9e8-kube-api-access-q7c7s" (OuterVolumeSpecName: "kube-api-access-q7c7s") pod "bbb9b263-ea76-4a75-ac11-86b402bbd9e8" (UID: "bbb9b263-ea76-4a75-ac11-86b402bbd9e8"). InnerVolumeSpecName "kube-api-access-q7c7s". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 16:11:22 crc kubenswrapper[4755]: I1210 16:11:22.175182 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbb9b263-ea76-4a75-ac11-86b402bbd9e8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bbb9b263-ea76-4a75-ac11-86b402bbd9e8" (UID: "bbb9b263-ea76-4a75-ac11-86b402bbd9e8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 16:11:22 crc kubenswrapper[4755]: I1210 16:11:22.214076 4755 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbb9b263-ea76-4a75-ac11-86b402bbd9e8-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 16:11:22 crc kubenswrapper[4755]: I1210 16:11:22.214112 4755 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbb9b263-ea76-4a75-ac11-86b402bbd9e8-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 16:11:22 crc kubenswrapper[4755]: I1210 16:11:22.214122 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7c7s\" (UniqueName: \"kubernetes.io/projected/bbb9b263-ea76-4a75-ac11-86b402bbd9e8-kube-api-access-q7c7s\") on node \"crc\" DevicePath \"\"" Dec 10 16:11:22 crc kubenswrapper[4755]: I1210 16:11:22.361147 4755 generic.go:334] "Generic (PLEG): container finished" podID="bbb9b263-ea76-4a75-ac11-86b402bbd9e8" containerID="c96dc5c9381b8c0b8dab4c5c0f6baa139d839716d3dabef533c8630e40f3f877" exitCode=0 Dec 10 16:11:22 crc kubenswrapper[4755]: I1210 16:11:22.361196 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jvrcl" Dec 10 16:11:22 crc kubenswrapper[4755]: I1210 16:11:22.361197 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jvrcl" event={"ID":"bbb9b263-ea76-4a75-ac11-86b402bbd9e8","Type":"ContainerDied","Data":"c96dc5c9381b8c0b8dab4c5c0f6baa139d839716d3dabef533c8630e40f3f877"} Dec 10 16:11:22 crc kubenswrapper[4755]: I1210 16:11:22.361245 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jvrcl" event={"ID":"bbb9b263-ea76-4a75-ac11-86b402bbd9e8","Type":"ContainerDied","Data":"a175fd570dc85d8863682585bc204753cf4b6a2e2f74a65a826e96385a09b639"} Dec 10 16:11:22 crc kubenswrapper[4755]: I1210 16:11:22.361298 4755 scope.go:117] "RemoveContainer" containerID="c96dc5c9381b8c0b8dab4c5c0f6baa139d839716d3dabef533c8630e40f3f877" Dec 10 16:11:22 crc kubenswrapper[4755]: I1210 16:11:22.383319 4755 scope.go:117] "RemoveContainer" containerID="44179cc12721e8acd81bd53271a3c23f1da9254f946f8a5acc7ca18c1e6e8f54" Dec 10 16:11:22 crc kubenswrapper[4755]: I1210 16:11:22.403185 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jvrcl"] Dec 10 16:11:22 crc kubenswrapper[4755]: I1210 16:11:22.415024 4755 scope.go:117] "RemoveContainer" containerID="2ff412fc125d35a042507159aa8f1583e387356972b078073362b929df48367e" Dec 10 16:11:22 crc kubenswrapper[4755]: I1210 16:11:22.415223 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jvrcl"] Dec 10 16:11:22 crc kubenswrapper[4755]: I1210 16:11:22.474592 4755 scope.go:117] "RemoveContainer" containerID="c96dc5c9381b8c0b8dab4c5c0f6baa139d839716d3dabef533c8630e40f3f877" Dec 10 16:11:22 crc kubenswrapper[4755]: E1210 16:11:22.475748 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c96dc5c9381b8c0b8dab4c5c0f6baa139d839716d3dabef533c8630e40f3f877\": container with ID starting with c96dc5c9381b8c0b8dab4c5c0f6baa139d839716d3dabef533c8630e40f3f877 not found: ID does not exist" containerID="c96dc5c9381b8c0b8dab4c5c0f6baa139d839716d3dabef533c8630e40f3f877" Dec 10 16:11:22 crc kubenswrapper[4755]: I1210 16:11:22.475786 
4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c96dc5c9381b8c0b8dab4c5c0f6baa139d839716d3dabef533c8630e40f3f877"} err="failed to get container status \"c96dc5c9381b8c0b8dab4c5c0f6baa139d839716d3dabef533c8630e40f3f877\": rpc error: code = NotFound desc = could not find container \"c96dc5c9381b8c0b8dab4c5c0f6baa139d839716d3dabef533c8630e40f3f877\": container with ID starting with c96dc5c9381b8c0b8dab4c5c0f6baa139d839716d3dabef533c8630e40f3f877 not found: ID does not exist" Dec 10 16:11:22 crc kubenswrapper[4755]: I1210 16:11:22.475807 4755 scope.go:117] "RemoveContainer" containerID="44179cc12721e8acd81bd53271a3c23f1da9254f946f8a5acc7ca18c1e6e8f54" Dec 10 16:11:22 crc kubenswrapper[4755]: E1210 16:11:22.476286 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44179cc12721e8acd81bd53271a3c23f1da9254f946f8a5acc7ca18c1e6e8f54\": container with ID starting with 44179cc12721e8acd81bd53271a3c23f1da9254f946f8a5acc7ca18c1e6e8f54 not found: ID does not exist" containerID="44179cc12721e8acd81bd53271a3c23f1da9254f946f8a5acc7ca18c1e6e8f54" Dec 10 16:11:22 crc kubenswrapper[4755]: I1210 16:11:22.476304 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44179cc12721e8acd81bd53271a3c23f1da9254f946f8a5acc7ca18c1e6e8f54"} err="failed to get container status \"44179cc12721e8acd81bd53271a3c23f1da9254f946f8a5acc7ca18c1e6e8f54\": rpc error: code = NotFound desc = could not find container \"44179cc12721e8acd81bd53271a3c23f1da9254f946f8a5acc7ca18c1e6e8f54\": container with ID starting with 44179cc12721e8acd81bd53271a3c23f1da9254f946f8a5acc7ca18c1e6e8f54 not found: ID does not exist" Dec 10 16:11:22 crc kubenswrapper[4755]: I1210 16:11:22.476316 4755 scope.go:117] "RemoveContainer" containerID="2ff412fc125d35a042507159aa8f1583e387356972b078073362b929df48367e" Dec 10 16:11:22 crc kubenswrapper[4755]: E1210 16:11:22.476715 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ff412fc125d35a042507159aa8f1583e387356972b078073362b929df48367e\": container with ID starting with 2ff412fc125d35a042507159aa8f1583e387356972b078073362b929df48367e not found: ID does not exist" containerID="2ff412fc125d35a042507159aa8f1583e387356972b078073362b929df48367e" Dec 10 16:11:22 crc kubenswrapper[4755]: I1210 16:11:22.476740 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ff412fc125d35a042507159aa8f1583e387356972b078073362b929df48367e"} err="failed to get container status \"2ff412fc125d35a042507159aa8f1583e387356972b078073362b929df48367e\": rpc error: code = NotFound desc = could not find container \"2ff412fc125d35a042507159aa8f1583e387356972b078073362b929df48367e\": container with ID starting with 2ff412fc125d35a042507159aa8f1583e387356972b078073362b929df48367e not found: ID does not exist" Dec 10 16:11:23 crc kubenswrapper[4755]: I1210 16:11:23.771651 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbb9b263-ea76-4a75-ac11-86b402bbd9e8" path="/var/lib/kubelet/pods/bbb9b263-ea76-4a75-ac11-86b402bbd9e8/volumes" Dec 10 16:11:25 crc kubenswrapper[4755]: E1210 16:11:25.772159 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:11:31 crc kubenswrapper[4755]: E1210 16:11:31.761311 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:11:38 crc kubenswrapper[4755]: E1210 16:11:38.761062 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:11:45 crc kubenswrapper[4755]: E1210 16:11:45.760239 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:11:50 crc kubenswrapper[4755]: E1210 16:11:50.760721 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:11:57 crc kubenswrapper[4755]: E1210 16:11:57.760379 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:12:01 crc kubenswrapper[4755]: E1210 16:12:01.760754 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:12:08 crc kubenswrapper[4755]: E1210 16:12:08.759023 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:12:10 crc kubenswrapper[4755]: I1210 16:12:10.359412 4755 patch_prober.go:28] interesting pod/machine-config-daemon-ggt8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 16:12:10 crc kubenswrapper[4755]: I1210 16:12:10.359524 4755 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 16:12:14 crc kubenswrapper[4755]: E1210 16:12:14.760166 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:12:20 crc kubenswrapper[4755]: E1210 16:12:20.761002 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:12:27 crc kubenswrapper[4755]: E1210 16:12:27.763184 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:12:31 crc kubenswrapper[4755]: E1210 16:12:31.759348 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:12:37 crc kubenswrapper[4755]: I1210 16:12:37.078162 4755 generic.go:334] "Generic (PLEG): container finished" podID="40fb2154-25cc-4263-beb4-f375fce600d1" containerID="086ebb389c462796575817fa1576187cfc9fdbf785cc9025ed968918d9df7b95" exitCode=2 Dec 10 16:12:37 crc kubenswrapper[4755]: I1210 16:12:37.078269 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bdgt7" event={"ID":"40fb2154-25cc-4263-beb4-f375fce600d1","Type":"ContainerDied","Data":"086ebb389c462796575817fa1576187cfc9fdbf785cc9025ed968918d9df7b95"} Dec 10 16:12:38 crc kubenswrapper[4755]: I1210 16:12:38.673140 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bdgt7" Dec 10 16:12:38 crc kubenswrapper[4755]: E1210 16:12:38.760158 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:12:38 crc kubenswrapper[4755]: I1210 16:12:38.780231 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-287rz\" (UniqueName: \"kubernetes.io/projected/40fb2154-25cc-4263-beb4-f375fce600d1-kube-api-access-287rz\") pod \"40fb2154-25cc-4263-beb4-f375fce600d1\" (UID: \"40fb2154-25cc-4263-beb4-f375fce600d1\") " Dec 10 16:12:38 crc kubenswrapper[4755]: I1210 16:12:38.780338 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/40fb2154-25cc-4263-beb4-f375fce600d1-ssh-key\") pod \"40fb2154-25cc-4263-beb4-f375fce600d1\" (UID: \"40fb2154-25cc-4263-beb4-f375fce600d1\") " Dec 10 16:12:38 crc kubenswrapper[4755]: I1210 16:12:38.780528 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/40fb2154-25cc-4263-beb4-f375fce600d1-inventory\") pod \"40fb2154-25cc-4263-beb4-f375fce600d1\" (UID: \"40fb2154-25cc-4263-beb4-f375fce600d1\") " Dec 10 16:12:38 crc kubenswrapper[4755]: I1210 16:12:38.787011 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40fb2154-25cc-4263-beb4-f375fce600d1-kube-api-access-287rz" (OuterVolumeSpecName: "kube-api-access-287rz") pod "40fb2154-25cc-4263-beb4-f375fce600d1" (UID: "40fb2154-25cc-4263-beb4-f375fce600d1"). InnerVolumeSpecName "kube-api-access-287rz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 16:12:38 crc kubenswrapper[4755]: I1210 16:12:38.812791 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40fb2154-25cc-4263-beb4-f375fce600d1-inventory" (OuterVolumeSpecName: "inventory") pod "40fb2154-25cc-4263-beb4-f375fce600d1" (UID: "40fb2154-25cc-4263-beb4-f375fce600d1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 16:12:38 crc kubenswrapper[4755]: I1210 16:12:38.818504 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40fb2154-25cc-4263-beb4-f375fce600d1-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "40fb2154-25cc-4263-beb4-f375fce600d1" (UID: "40fb2154-25cc-4263-beb4-f375fce600d1"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 16:12:38 crc kubenswrapper[4755]: I1210 16:12:38.883705 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-287rz\" (UniqueName: \"kubernetes.io/projected/40fb2154-25cc-4263-beb4-f375fce600d1-kube-api-access-287rz\") on node \"crc\" DevicePath \"\"" Dec 10 16:12:38 crc kubenswrapper[4755]: I1210 16:12:38.884389 4755 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/40fb2154-25cc-4263-beb4-f375fce600d1-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 10 16:12:38 crc kubenswrapper[4755]: I1210 16:12:38.884440 4755 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/40fb2154-25cc-4263-beb4-f375fce600d1-inventory\") on node \"crc\" DevicePath \"\"" Dec 10 16:12:39 crc kubenswrapper[4755]: I1210 16:12:39.099822 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bdgt7" event={"ID":"40fb2154-25cc-4263-beb4-f375fce600d1","Type":"ContainerDied","Data":"a1d5c2458963c00a5319e9464d2e5d1037c47c26cd5900c9326d01b7fec00193"} Dec 10 16:12:39 crc kubenswrapper[4755]: I1210 16:12:39.099879 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1d5c2458963c00a5319e9464d2e5d1037c47c26cd5900c9326d01b7fec00193" Dec 10 16:12:39 crc kubenswrapper[4755]: I1210 16:12:39.099898 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bdgt7" Dec 10 16:12:40 crc kubenswrapper[4755]: I1210 16:12:40.359445 4755 patch_prober.go:28] interesting pod/machine-config-daemon-ggt8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 16:12:40 crc kubenswrapper[4755]: I1210 16:12:40.359947 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 16:12:42 crc kubenswrapper[4755]: E1210 16:12:42.759680 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:12:50 crc kubenswrapper[4755]: E1210 16:12:50.761498 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:12:57 crc kubenswrapper[4755]: E1210 16:12:57.759897 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" 
podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:13:04 crc kubenswrapper[4755]: E1210 16:13:04.759715 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:13:10 crc kubenswrapper[4755]: I1210 16:13:10.358795 4755 patch_prober.go:28] interesting pod/machine-config-daemon-ggt8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 16:13:10 crc kubenswrapper[4755]: I1210 16:13:10.359399 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 16:13:10 crc kubenswrapper[4755]: I1210 16:13:10.359446 4755 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" Dec 10 16:13:10 crc kubenswrapper[4755]: I1210 16:13:10.360275 4755 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"30d53fb1dd018f11a561066e37ed5aa32e4d43a5cd91c13627ef984f5187f3fd"} pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 10 16:13:10 crc kubenswrapper[4755]: I1210 16:13:10.360346 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" containerName="machine-config-daemon" containerID="cri-o://30d53fb1dd018f11a561066e37ed5aa32e4d43a5cd91c13627ef984f5187f3fd" gracePeriod=600 Dec 10 16:13:10 crc kubenswrapper[4755]: E1210 16:13:10.492152 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:13:10 crc kubenswrapper[4755]: E1210 16:13:10.760585 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:13:11 crc kubenswrapper[4755]: I1210 16:13:11.450412 4755 generic.go:334] "Generic (PLEG): container finished" podID="b132a8b9-1c99-414d-8773-229bf36b305d" containerID="30d53fb1dd018f11a561066e37ed5aa32e4d43a5cd91c13627ef984f5187f3fd" exitCode=0 Dec 10 16:13:11 crc kubenswrapper[4755]: I1210 16:13:11.450525 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" event={"ID":"b132a8b9-1c99-414d-8773-229bf36b305d","Type":"ContainerDied","Data":"30d53fb1dd018f11a561066e37ed5aa32e4d43a5cd91c13627ef984f5187f3fd"} Dec 10 16:13:11 crc kubenswrapper[4755]: I1210 16:13:11.450852 4755 scope.go:117] "RemoveContainer" containerID="8ecc5fc17cefa81deedcf3bc2f08dbc197a77ef30c47fc063b577484619f3b8b" Dec 10 16:13:11 crc kubenswrapper[4755]: I1210 16:13:11.451704 4755 scope.go:117] "RemoveContainer" containerID="30d53fb1dd018f11a561066e37ed5aa32e4d43a5cd91c13627ef984f5187f3fd" Dec 10 16:13:11 crc kubenswrapper[4755]: E1210 16:13:11.452177 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:13:16 crc kubenswrapper[4755]: I1210 16:13:16.035938 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mwqmw"] Dec 10 16:13:16 crc kubenswrapper[4755]: E1210 16:13:16.037029 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40fb2154-25cc-4263-beb4-f375fce600d1" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 10 16:13:16 crc kubenswrapper[4755]: I1210 16:13:16.037064 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="40fb2154-25cc-4263-beb4-f375fce600d1" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 10 16:13:16 crc kubenswrapper[4755]: E1210 16:13:16.037091 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbb9b263-ea76-4a75-ac11-86b402bbd9e8" containerName="registry-server" Dec 10 16:13:16 crc kubenswrapper[4755]: I1210 16:13:16.037099 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbb9b263-ea76-4a75-ac11-86b402bbd9e8" containerName="registry-server" Dec 10 16:13:16 crc kubenswrapper[4755]: E1210 16:13:16.037120 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbb9b263-ea76-4a75-ac11-86b402bbd9e8" containerName="extract-content" Dec 10 16:13:16 crc kubenswrapper[4755]: I1210 16:13:16.037128 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbb9b263-ea76-4a75-ac11-86b402bbd9e8" containerName="extract-content" Dec 10 16:13:16 crc kubenswrapper[4755]: E1210 16:13:16.037161 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbb9b263-ea76-4a75-ac11-86b402bbd9e8" containerName="extract-utilities" Dec 10 16:13:16 crc kubenswrapper[4755]: I1210 16:13:16.037169 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbb9b263-ea76-4a75-ac11-86b402bbd9e8" containerName="extract-utilities" Dec 10 16:13:16 crc kubenswrapper[4755]: I1210 16:13:16.037406 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbb9b263-ea76-4a75-ac11-86b402bbd9e8" containerName="registry-server" Dec 10 16:13:16 crc kubenswrapper[4755]: I1210 16:13:16.037463 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="40fb2154-25cc-4263-beb4-f375fce600d1" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 10 16:13:16 crc kubenswrapper[4755]: I1210 16:13:16.038514 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mwqmw" Dec 10 16:13:16 crc kubenswrapper[4755]: I1210 16:13:16.042945 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 10 16:13:16 crc kubenswrapper[4755]: I1210 16:13:16.047659 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-74mg7" Dec 10 16:13:16 crc kubenswrapper[4755]: I1210 16:13:16.047936 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 10 16:13:16 crc kubenswrapper[4755]: I1210 16:13:16.048109 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 10 16:13:16 crc kubenswrapper[4755]: I1210 16:13:16.062130 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mwqmw"] Dec 10 16:13:16 crc kubenswrapper[4755]: I1210 16:13:16.150162 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wsqf\" (UniqueName: \"kubernetes.io/projected/b4ab39f5-c779-4d0c-9497-5c7c567dc0bc-kube-api-access-5wsqf\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mwqmw\" (UID: \"b4ab39f5-c779-4d0c-9497-5c7c567dc0bc\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mwqmw" Dec 10 16:13:16 crc kubenswrapper[4755]: I1210 16:13:16.150268 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b4ab39f5-c779-4d0c-9497-5c7c567dc0bc-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mwqmw\" (UID: \"b4ab39f5-c779-4d0c-9497-5c7c567dc0bc\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mwqmw" Dec 10 16:13:16 crc kubenswrapper[4755]: I1210 16:13:16.150407 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b4ab39f5-c779-4d0c-9497-5c7c567dc0bc-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mwqmw\" (UID: \"b4ab39f5-c779-4d0c-9497-5c7c567dc0bc\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mwqmw" Dec 10 16:13:16 crc kubenswrapper[4755]: I1210 16:13:16.252565 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wsqf\" (UniqueName: \"kubernetes.io/projected/b4ab39f5-c779-4d0c-9497-5c7c567dc0bc-kube-api-access-5wsqf\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mwqmw\" (UID: \"b4ab39f5-c779-4d0c-9497-5c7c567dc0bc\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mwqmw" Dec 10 16:13:16 crc kubenswrapper[4755]: I1210 16:13:16.252793 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b4ab39f5-c779-4d0c-9497-5c7c567dc0bc-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mwqmw\" (UID: \"b4ab39f5-c779-4d0c-9497-5c7c567dc0bc\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mwqmw" Dec 10 16:13:16 crc kubenswrapper[4755]: I1210 16:13:16.254099 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b4ab39f5-c779-4d0c-9497-5c7c567dc0bc-inventory\") pod 
\"download-cache-edpm-deployment-openstack-edpm-ipam-mwqmw\" (UID: \"b4ab39f5-c779-4d0c-9497-5c7c567dc0bc\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mwqmw" Dec 10 16:13:16 crc kubenswrapper[4755]: I1210 16:13:16.262337 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b4ab39f5-c779-4d0c-9497-5c7c567dc0bc-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mwqmw\" (UID: \"b4ab39f5-c779-4d0c-9497-5c7c567dc0bc\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mwqmw" Dec 10 16:13:16 crc kubenswrapper[4755]: I1210 16:13:16.268343 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b4ab39f5-c779-4d0c-9497-5c7c567dc0bc-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mwqmw\" (UID: \"b4ab39f5-c779-4d0c-9497-5c7c567dc0bc\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mwqmw" Dec 10 16:13:16 crc kubenswrapper[4755]: I1210 16:13:16.270363 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wsqf\" (UniqueName: \"kubernetes.io/projected/b4ab39f5-c779-4d0c-9497-5c7c567dc0bc-kube-api-access-5wsqf\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mwqmw\" (UID: \"b4ab39f5-c779-4d0c-9497-5c7c567dc0bc\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mwqmw" Dec 10 16:13:16 crc kubenswrapper[4755]: I1210 16:13:16.360449 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mwqmw" Dec 10 16:13:16 crc kubenswrapper[4755]: I1210 16:13:16.880831 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mwqmw"] Dec 10 16:13:17 crc kubenswrapper[4755]: I1210 16:13:17.512030 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mwqmw" event={"ID":"b4ab39f5-c779-4d0c-9497-5c7c567dc0bc","Type":"ContainerStarted","Data":"e369f2df46e0b5da1629462d690027c09e3d7418cada8479415f73d6925f5d24"} Dec 10 16:13:17 crc kubenswrapper[4755]: E1210 16:13:17.759530 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:13:18 crc kubenswrapper[4755]: I1210 16:13:18.537605 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mwqmw" event={"ID":"b4ab39f5-c779-4d0c-9497-5c7c567dc0bc","Type":"ContainerStarted","Data":"f0d16bf7bc5081ec05d695b294f206e1d14b79f4e1baef6723689056387aea8b"} Dec 10 16:13:18 crc kubenswrapper[4755]: I1210 16:13:18.570936 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mwqmw" podStartSLOduration=1.870212833 podStartE2EDuration="2.570912745s" podCreationTimestamp="2025-12-10 16:13:16 +0000 UTC" firstStartedPulling="2025-12-10 16:13:16.888823286 +0000 UTC m=+2993.489706918" lastFinishedPulling="2025-12-10 16:13:17.589523198 +0000 UTC m=+2994.190406830" observedRunningTime="2025-12-10 16:13:18.556482618 +0000 UTC m=+2995.157366270" 
watchObservedRunningTime="2025-12-10 16:13:18.570912745 +0000 UTC m=+2995.171796377" Dec 10 16:13:22 crc kubenswrapper[4755]: E1210 16:13:22.946771 4755 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]" Dec 10 16:13:23 crc kubenswrapper[4755]: I1210 16:13:23.758215 4755 scope.go:117] "RemoveContainer" containerID="30d53fb1dd018f11a561066e37ed5aa32e4d43a5cd91c13627ef984f5187f3fd" Dec 10 16:13:23 crc kubenswrapper[4755]: E1210 16:13:23.759544 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:13:25 crc kubenswrapper[4755]: E1210 16:13:25.759330 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:13:28 crc kubenswrapper[4755]: E1210 16:13:28.761834 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:13:36 crc kubenswrapper[4755]: E1210 16:13:36.759770 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:13:37 crc kubenswrapper[4755]: I1210 16:13:37.758613 4755 scope.go:117] "RemoveContainer" containerID="30d53fb1dd018f11a561066e37ed5aa32e4d43a5cd91c13627ef984f5187f3fd" Dec 10 16:13:37 crc kubenswrapper[4755]: E1210 16:13:37.758921 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:13:42 crc kubenswrapper[4755]: E1210 16:13:42.759768 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:13:47 crc kubenswrapper[4755]: E1210 16:13:47.760248 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off 
pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:13:50 crc kubenswrapper[4755]: I1210 16:13:50.757523 4755 scope.go:117] "RemoveContainer" containerID="30d53fb1dd018f11a561066e37ed5aa32e4d43a5cd91c13627ef984f5187f3fd" Dec 10 16:13:50 crc kubenswrapper[4755]: E1210 16:13:50.758347 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:13:56 crc kubenswrapper[4755]: E1210 16:13:56.760292 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:13:59 crc kubenswrapper[4755]: E1210 16:13:59.760239 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:14:02 crc kubenswrapper[4755]: I1210 16:14:02.758332 4755 scope.go:117] "RemoveContainer" containerID="30d53fb1dd018f11a561066e37ed5aa32e4d43a5cd91c13627ef984f5187f3fd" Dec 10 16:14:02 crc kubenswrapper[4755]: E1210 16:14:02.759063 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:14:10 crc kubenswrapper[4755]: E1210 16:14:10.759767 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:14:14 crc kubenswrapper[4755]: E1210 16:14:14.904700 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 10 16:14:14 crc kubenswrapper[4755]: E1210 16:14:14.905307 4755 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 10 16:14:14 crc kubenswrapper[4755]: E1210 16:14:14.905483 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mz4t5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-jfc28_openstack(998863b6-4f48-4c8b-8011-a40377686b99): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Dec 10 16:14:14 crc kubenswrapper[4755]: E1210 16:14:14.906671 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:14:16 crc kubenswrapper[4755]: I1210 16:14:16.758020 4755 scope.go:117] "RemoveContainer" containerID="30d53fb1dd018f11a561066e37ed5aa32e4d43a5cd91c13627ef984f5187f3fd" Dec 10 16:14:16 crc kubenswrapper[4755]: E1210 16:14:16.759740 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:14:21 crc kubenswrapper[4755]: E1210 16:14:21.760710 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:14:28 crc kubenswrapper[4755]: I1210 16:14:28.757425 4755 scope.go:117] "RemoveContainer" containerID="30d53fb1dd018f11a561066e37ed5aa32e4d43a5cd91c13627ef984f5187f3fd" Dec 10 16:14:28 crc kubenswrapper[4755]: E1210 16:14:28.758357 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:14:28 crc kubenswrapper[4755]: E1210 16:14:28.759178 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:14:35 crc kubenswrapper[4755]: E1210 16:14:35.759965 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:14:40 crc kubenswrapper[4755]: E1210 16:14:40.760315 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:14:41 crc kubenswrapper[4755]: I1210 16:14:41.757595 4755 scope.go:117] "RemoveContainer" containerID="30d53fb1dd018f11a561066e37ed5aa32e4d43a5cd91c13627ef984f5187f3fd" Dec 10 16:14:41 crc kubenswrapper[4755]: E1210 16:14:41.758276 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:14:49 crc kubenswrapper[4755]: I1210 16:14:49.761597 4755 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 10 16:14:49 crc kubenswrapper[4755]: E1210 16:14:49.882105 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 10 16:14:49 crc kubenswrapper[4755]: E1210 16:14:49.882398 4755 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 10 16:14:49 crc kubenswrapper[4755]: E1210 16:14:49.882610 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5d4h5b7hfbh5ddh688h9ch55bh7chf6h5ddh68ch94h69h5c5h596h59bh569hfchc4h676hcbh64dhdbh57fh75h5c9h98h59ch679h566h77h9cq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hw9gj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(6d104bea-ecdc-4fe1-9861-fb1a19fce845): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 10 16:14:49 crc kubenswrapper[4755]: E1210 16:14:49.884563 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:14:53 crc kubenswrapper[4755]: E1210 16:14:53.770065 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:14:54 crc kubenswrapper[4755]: I1210 16:14:54.757821 4755 scope.go:117] "RemoveContainer" containerID="30d53fb1dd018f11a561066e37ed5aa32e4d43a5cd91c13627ef984f5187f3fd" Dec 10 16:14:54 crc kubenswrapper[4755]: E1210 16:14:54.758150 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:15:00 crc kubenswrapper[4755]: I1210 16:15:00.154241 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29423055-8lsjm"] Dec 10 16:15:00 crc kubenswrapper[4755]: I1210 16:15:00.156623 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29423055-8lsjm" Dec 10 16:15:00 crc kubenswrapper[4755]: I1210 16:15:00.163358 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 10 16:15:00 crc kubenswrapper[4755]: I1210 16:15:00.165005 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 10 16:15:00 crc kubenswrapper[4755]: I1210 16:15:00.165821 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29423055-8lsjm"] Dec 10 16:15:00 crc kubenswrapper[4755]: I1210 16:15:00.261208 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k22wv\" (UniqueName: \"kubernetes.io/projected/4b3dcf6a-eaa0-4c77-a572-6d8f670c5b26-kube-api-access-k22wv\") pod \"collect-profiles-29423055-8lsjm\" (UID: \"4b3dcf6a-eaa0-4c77-a572-6d8f670c5b26\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423055-8lsjm" Dec 10 16:15:00 crc kubenswrapper[4755]: I1210 16:15:00.261536 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4b3dcf6a-eaa0-4c77-a572-6d8f670c5b26-secret-volume\") pod \"collect-profiles-29423055-8lsjm\" (UID: \"4b3dcf6a-eaa0-4c77-a572-6d8f670c5b26\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423055-8lsjm" Dec 10 16:15:00 crc kubenswrapper[4755]: I1210 16:15:00.261811 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4b3dcf6a-eaa0-4c77-a572-6d8f670c5b26-config-volume\") pod \"collect-profiles-29423055-8lsjm\" (UID: \"4b3dcf6a-eaa0-4c77-a572-6d8f670c5b26\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423055-8lsjm" Dec 10 16:15:00 crc 
kubenswrapper[4755]: I1210 16:15:00.363685 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4b3dcf6a-eaa0-4c77-a572-6d8f670c5b26-config-volume\") pod \"collect-profiles-29423055-8lsjm\" (UID: \"4b3dcf6a-eaa0-4c77-a572-6d8f670c5b26\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423055-8lsjm" Dec 10 16:15:00 crc kubenswrapper[4755]: I1210 16:15:00.363813 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k22wv\" (UniqueName: \"kubernetes.io/projected/4b3dcf6a-eaa0-4c77-a572-6d8f670c5b26-kube-api-access-k22wv\") pod \"collect-profiles-29423055-8lsjm\" (UID: \"4b3dcf6a-eaa0-4c77-a572-6d8f670c5b26\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423055-8lsjm" Dec 10 16:15:00 crc kubenswrapper[4755]: I1210 16:15:00.363858 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4b3dcf6a-eaa0-4c77-a572-6d8f670c5b26-secret-volume\") pod \"collect-profiles-29423055-8lsjm\" (UID: \"4b3dcf6a-eaa0-4c77-a572-6d8f670c5b26\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423055-8lsjm" Dec 10 16:15:00 crc kubenswrapper[4755]: I1210 16:15:00.365028 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4b3dcf6a-eaa0-4c77-a572-6d8f670c5b26-config-volume\") pod \"collect-profiles-29423055-8lsjm\" (UID: \"4b3dcf6a-eaa0-4c77-a572-6d8f670c5b26\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423055-8lsjm" Dec 10 16:15:00 crc kubenswrapper[4755]: I1210 16:15:00.371641 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4b3dcf6a-eaa0-4c77-a572-6d8f670c5b26-secret-volume\") pod \"collect-profiles-29423055-8lsjm\" (UID: \"4b3dcf6a-eaa0-4c77-a572-6d8f670c5b26\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423055-8lsjm" Dec 10 16:15:00 crc kubenswrapper[4755]: I1210 16:15:00.383821 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k22wv\" (UniqueName: \"kubernetes.io/projected/4b3dcf6a-eaa0-4c77-a572-6d8f670c5b26-kube-api-access-k22wv\") pod \"collect-profiles-29423055-8lsjm\" (UID: \"4b3dcf6a-eaa0-4c77-a572-6d8f670c5b26\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423055-8lsjm" Dec 10 16:15:00 crc kubenswrapper[4755]: I1210 16:15:00.488303 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29423055-8lsjm" Dec 10 16:15:00 crc kubenswrapper[4755]: E1210 16:15:00.771444 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:15:00 crc kubenswrapper[4755]: I1210 16:15:00.960140 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29423055-8lsjm"] Dec 10 16:15:01 crc kubenswrapper[4755]: I1210 16:15:01.547213 4755 generic.go:334] "Generic (PLEG): container finished" podID="4b3dcf6a-eaa0-4c77-a572-6d8f670c5b26" containerID="d44fd080980ef2ccf7576f8f0202605b930b156d7ad022932e00681c7dbf6994" exitCode=0 Dec 10 16:15:01 crc kubenswrapper[4755]: I1210 16:15:01.547253 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29423055-8lsjm" event={"ID":"4b3dcf6a-eaa0-4c77-a572-6d8f670c5b26","Type":"ContainerDied","Data":"d44fd080980ef2ccf7576f8f0202605b930b156d7ad022932e00681c7dbf6994"} Dec 10 16:15:01 crc kubenswrapper[4755]: I1210 16:15:01.547275 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29423055-8lsjm" event={"ID":"4b3dcf6a-eaa0-4c77-a572-6d8f670c5b26","Type":"ContainerStarted","Data":"182fe85dffb46e1de568a90bc19414f212220c83cd0d1c9636c639b59646ee4c"} Dec 10 16:15:02 crc kubenswrapper[4755]: I1210 16:15:02.966297 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29423055-8lsjm" Dec 10 16:15:03 crc kubenswrapper[4755]: I1210 16:15:03.030759 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4b3dcf6a-eaa0-4c77-a572-6d8f670c5b26-secret-volume\") pod \"4b3dcf6a-eaa0-4c77-a572-6d8f670c5b26\" (UID: \"4b3dcf6a-eaa0-4c77-a572-6d8f670c5b26\") " Dec 10 16:15:03 crc kubenswrapper[4755]: I1210 16:15:03.030943 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k22wv\" (UniqueName: \"kubernetes.io/projected/4b3dcf6a-eaa0-4c77-a572-6d8f670c5b26-kube-api-access-k22wv\") pod \"4b3dcf6a-eaa0-4c77-a572-6d8f670c5b26\" (UID: \"4b3dcf6a-eaa0-4c77-a572-6d8f670c5b26\") " Dec 10 16:15:03 crc kubenswrapper[4755]: I1210 16:15:03.031142 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4b3dcf6a-eaa0-4c77-a572-6d8f670c5b26-config-volume\") pod \"4b3dcf6a-eaa0-4c77-a572-6d8f670c5b26\" (UID: \"4b3dcf6a-eaa0-4c77-a572-6d8f670c5b26\") " Dec 10 16:15:03 crc kubenswrapper[4755]: I1210 16:15:03.031891 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b3dcf6a-eaa0-4c77-a572-6d8f670c5b26-config-volume" (OuterVolumeSpecName: "config-volume") pod "4b3dcf6a-eaa0-4c77-a572-6d8f670c5b26" (UID: "4b3dcf6a-eaa0-4c77-a572-6d8f670c5b26"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 16:15:03 crc kubenswrapper[4755]: I1210 16:15:03.039662 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b3dcf6a-eaa0-4c77-a572-6d8f670c5b26-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4b3dcf6a-eaa0-4c77-a572-6d8f670c5b26" (UID: "4b3dcf6a-eaa0-4c77-a572-6d8f670c5b26"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 16:15:03 crc kubenswrapper[4755]: I1210 16:15:03.039812 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b3dcf6a-eaa0-4c77-a572-6d8f670c5b26-kube-api-access-k22wv" (OuterVolumeSpecName: "kube-api-access-k22wv") pod "4b3dcf6a-eaa0-4c77-a572-6d8f670c5b26" (UID: "4b3dcf6a-eaa0-4c77-a572-6d8f670c5b26"). InnerVolumeSpecName "kube-api-access-k22wv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 16:15:03 crc kubenswrapper[4755]: I1210 16:15:03.134063 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k22wv\" (UniqueName: \"kubernetes.io/projected/4b3dcf6a-eaa0-4c77-a572-6d8f670c5b26-kube-api-access-k22wv\") on node \"crc\" DevicePath \"\"" Dec 10 16:15:03 crc kubenswrapper[4755]: I1210 16:15:03.134094 4755 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4b3dcf6a-eaa0-4c77-a572-6d8f670c5b26-config-volume\") on node \"crc\" DevicePath \"\"" Dec 10 16:15:03 crc kubenswrapper[4755]: I1210 16:15:03.134105 4755 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4b3dcf6a-eaa0-4c77-a572-6d8f670c5b26-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 10 16:15:03 crc kubenswrapper[4755]: I1210 16:15:03.576374 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29423055-8lsjm" Dec 10 16:15:03 crc kubenswrapper[4755]: I1210 16:15:03.576872 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29423055-8lsjm" event={"ID":"4b3dcf6a-eaa0-4c77-a572-6d8f670c5b26","Type":"ContainerDied","Data":"182fe85dffb46e1de568a90bc19414f212220c83cd0d1c9636c639b59646ee4c"} Dec 10 16:15:03 crc kubenswrapper[4755]: I1210 16:15:03.576926 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="182fe85dffb46e1de568a90bc19414f212220c83cd0d1c9636c639b59646ee4c" Dec 10 16:15:04 crc kubenswrapper[4755]: I1210 16:15:04.037982 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29423010-xw2jd"] Dec 10 16:15:04 crc kubenswrapper[4755]: I1210 16:15:04.047602 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29423010-xw2jd"] Dec 10 16:15:05 crc kubenswrapper[4755]: E1210 16:15:05.759839 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:15:05 crc kubenswrapper[4755]: I1210 16:15:05.770938 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17322d90-1142-418c-81fc-13cc5e7396a9" path="/var/lib/kubelet/pods/17322d90-1142-418c-81fc-13cc5e7396a9/volumes" Dec 10 16:15:07 crc kubenswrapper[4755]: I1210 16:15:07.433106 4755 scope.go:117] "RemoveContainer" containerID="0082b52cec7eab4ea15c0fa209c2eaed0fa17f49da5d3abc46764effa2ba9b7a" Dec 10 16:15:07 crc kubenswrapper[4755]: I1210 16:15:07.673690 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pkp5k"] Dec 10 16:15:07 crc kubenswrapper[4755]: E1210 16:15:07.674587 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b3dcf6a-eaa0-4c77-a572-6d8f670c5b26" containerName="collect-profiles" Dec 10 16:15:07 crc kubenswrapper[4755]: I1210 16:15:07.674629 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b3dcf6a-eaa0-4c77-a572-6d8f670c5b26" containerName="collect-profiles" Dec 10 16:15:07 crc kubenswrapper[4755]: I1210 16:15:07.674906 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b3dcf6a-eaa0-4c77-a572-6d8f670c5b26" containerName="collect-profiles" Dec 10 16:15:07 crc kubenswrapper[4755]: I1210 16:15:07.677115 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pkp5k" Dec 10 16:15:07 crc kubenswrapper[4755]: I1210 16:15:07.688394 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pkp5k"] Dec 10 16:15:07 crc kubenswrapper[4755]: I1210 16:15:07.834053 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16e9a333-4891-48db-bd8b-4e4aba1ed3ee-catalog-content\") pod \"community-operators-pkp5k\" (UID: \"16e9a333-4891-48db-bd8b-4e4aba1ed3ee\") " pod="openshift-marketplace/community-operators-pkp5k" Dec 10 16:15:07 crc kubenswrapper[4755]: I1210 16:15:07.834244 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5rn8\" (UniqueName: \"kubernetes.io/projected/16e9a333-4891-48db-bd8b-4e4aba1ed3ee-kube-api-access-d5rn8\") pod \"community-operators-pkp5k\" (UID: \"16e9a333-4891-48db-bd8b-4e4aba1ed3ee\") " pod="openshift-marketplace/community-operators-pkp5k" Dec 10 16:15:07 crc kubenswrapper[4755]: I1210 16:15:07.834284 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16e9a333-4891-48db-bd8b-4e4aba1ed3ee-utilities\") pod \"community-operators-pkp5k\" (UID: \"16e9a333-4891-48db-bd8b-4e4aba1ed3ee\") " pod="openshift-marketplace/community-operators-pkp5k" Dec 10 16:15:07 crc kubenswrapper[4755]: I1210 16:15:07.937624 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16e9a333-4891-48db-bd8b-4e4aba1ed3ee-catalog-content\") pod \"community-operators-pkp5k\" (UID: \"16e9a333-4891-48db-bd8b-4e4aba1ed3ee\") " pod="openshift-marketplace/community-operators-pkp5k" Dec 10 16:15:07 crc kubenswrapper[4755]: I1210 16:15:07.937766 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5rn8\" (UniqueName: \"kubernetes.io/projected/16e9a333-4891-48db-bd8b-4e4aba1ed3ee-kube-api-access-d5rn8\") pod \"community-operators-pkp5k\" (UID: \"16e9a333-4891-48db-bd8b-4e4aba1ed3ee\") " pod="openshift-marketplace/community-operators-pkp5k" Dec 10 16:15:07 crc kubenswrapper[4755]: I1210 16:15:07.937796 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16e9a333-4891-48db-bd8b-4e4aba1ed3ee-utilities\") pod \"community-operators-pkp5k\" (UID: \"16e9a333-4891-48db-bd8b-4e4aba1ed3ee\") " pod="openshift-marketplace/community-operators-pkp5k" Dec 10 16:15:07 crc kubenswrapper[4755]: I1210 16:15:07.938406 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16e9a333-4891-48db-bd8b-4e4aba1ed3ee-utilities\") pod \"community-operators-pkp5k\" (UID: \"16e9a333-4891-48db-bd8b-4e4aba1ed3ee\") " pod="openshift-marketplace/community-operators-pkp5k" Dec 10 16:15:07 crc kubenswrapper[4755]: I1210 16:15:07.938565 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16e9a333-4891-48db-bd8b-4e4aba1ed3ee-catalog-content\") pod \"community-operators-pkp5k\" (UID: \"16e9a333-4891-48db-bd8b-4e4aba1ed3ee\") " pod="openshift-marketplace/community-operators-pkp5k" Dec 10 16:15:07 crc kubenswrapper[4755]: I1210 16:15:07.964770 4755 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-d5rn8\" (UniqueName: \"kubernetes.io/projected/16e9a333-4891-48db-bd8b-4e4aba1ed3ee-kube-api-access-d5rn8\") pod \"community-operators-pkp5k\" (UID: \"16e9a333-4891-48db-bd8b-4e4aba1ed3ee\") " pod="openshift-marketplace/community-operators-pkp5k" Dec 10 16:15:08 crc kubenswrapper[4755]: I1210 16:15:08.001266 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pkp5k" Dec 10 16:15:08 crc kubenswrapper[4755]: I1210 16:15:08.565749 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pkp5k"] Dec 10 16:15:08 crc kubenswrapper[4755]: I1210 16:15:08.622814 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pkp5k" event={"ID":"16e9a333-4891-48db-bd8b-4e4aba1ed3ee","Type":"ContainerStarted","Data":"cd409a647dfdc42aaa5eea1b902ec175909ceb16326f5250aa5d6d2b9cdd0a30"} Dec 10 16:15:09 crc kubenswrapper[4755]: I1210 16:15:09.635494 4755 generic.go:334] "Generic (PLEG): container finished" podID="16e9a333-4891-48db-bd8b-4e4aba1ed3ee" containerID="0c912b25383d57fbbc6e076ad0b060a3f5410e11c096f52b635a297e8024c398" exitCode=0 Dec 10 16:15:09 crc kubenswrapper[4755]: I1210 16:15:09.635546 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pkp5k" event={"ID":"16e9a333-4891-48db-bd8b-4e4aba1ed3ee","Type":"ContainerDied","Data":"0c912b25383d57fbbc6e076ad0b060a3f5410e11c096f52b635a297e8024c398"} Dec 10 16:15:09 crc kubenswrapper[4755]: I1210 16:15:09.758753 4755 scope.go:117] "RemoveContainer" containerID="30d53fb1dd018f11a561066e37ed5aa32e4d43a5cd91c13627ef984f5187f3fd" Dec 10 16:15:09 crc kubenswrapper[4755]: E1210 16:15:09.759345 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:15:12 crc kubenswrapper[4755]: I1210 16:15:12.664857 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pkp5k" event={"ID":"16e9a333-4891-48db-bd8b-4e4aba1ed3ee","Type":"ContainerStarted","Data":"458a9816768156688cd360ea662f699deef6fa576c3fbf6995ed412e9ce34489"} Dec 10 16:15:13 crc kubenswrapper[4755]: I1210 16:15:13.675252 4755 generic.go:334] "Generic (PLEG): container finished" podID="16e9a333-4891-48db-bd8b-4e4aba1ed3ee" containerID="458a9816768156688cd360ea662f699deef6fa576c3fbf6995ed412e9ce34489" exitCode=0 Dec 10 16:15:13 crc kubenswrapper[4755]: I1210 16:15:13.675297 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pkp5k" event={"ID":"16e9a333-4891-48db-bd8b-4e4aba1ed3ee","Type":"ContainerDied","Data":"458a9816768156688cd360ea662f699deef6fa576c3fbf6995ed412e9ce34489"} Dec 10 16:15:14 crc kubenswrapper[4755]: I1210 16:15:14.687862 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pkp5k" event={"ID":"16e9a333-4891-48db-bd8b-4e4aba1ed3ee","Type":"ContainerStarted","Data":"bb57f8503886738662d7ff6fa2b60c623d54bab94296241f252200ef97899401"} Dec 10 16:15:14 crc kubenswrapper[4755]: I1210 16:15:14.707671 4755 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pkp5k" podStartSLOduration=3.181252284 podStartE2EDuration="7.707652256s" podCreationTimestamp="2025-12-10 16:15:07 +0000 UTC" firstStartedPulling="2025-12-10 16:15:09.638267481 +0000 UTC m=+3106.239151113" lastFinishedPulling="2025-12-10 16:15:14.164667453 +0000 UTC m=+3110.765551085" observedRunningTime="2025-12-10 16:15:14.706977987 +0000 UTC m=+3111.307861629" watchObservedRunningTime="2025-12-10 16:15:14.707652256 +0000 UTC m=+3111.308535888" Dec 10 16:15:14 crc kubenswrapper[4755]: E1210 16:15:14.760091 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:15:18 crc kubenswrapper[4755]: I1210 16:15:18.001446 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pkp5k" Dec 10 16:15:18 crc kubenswrapper[4755]: I1210 16:15:18.002142 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pkp5k" Dec 10 16:15:18 crc kubenswrapper[4755]: I1210 16:15:18.052794 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pkp5k" Dec 10 16:15:19 crc kubenswrapper[4755]: E1210 16:15:19.760920 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:15:20 crc kubenswrapper[4755]: I1210 16:15:20.757994 4755 scope.go:117] "RemoveContainer" containerID="30d53fb1dd018f11a561066e37ed5aa32e4d43a5cd91c13627ef984f5187f3fd" Dec 10 16:15:20 crc kubenswrapper[4755]: E1210 16:15:20.758352 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:15:25 crc kubenswrapper[4755]: E1210 16:15:25.759913 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:15:28 crc kubenswrapper[4755]: I1210 16:15:28.049425 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pkp5k" Dec 10 16:15:28 crc kubenswrapper[4755]: I1210 16:15:28.107203 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pkp5k"] Dec 10 16:15:28 crc kubenswrapper[4755]: I1210 16:15:28.816620 4755 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/community-operators-pkp5k" podUID="16e9a333-4891-48db-bd8b-4e4aba1ed3ee" containerName="registry-server" containerID="cri-o://bb57f8503886738662d7ff6fa2b60c623d54bab94296241f252200ef97899401" gracePeriod=2 Dec 10 16:15:29 crc kubenswrapper[4755]: I1210 16:15:29.361887 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pkp5k" Dec 10 16:15:29 crc kubenswrapper[4755]: I1210 16:15:29.469977 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16e9a333-4891-48db-bd8b-4e4aba1ed3ee-catalog-content\") pod \"16e9a333-4891-48db-bd8b-4e4aba1ed3ee\" (UID: \"16e9a333-4891-48db-bd8b-4e4aba1ed3ee\") " Dec 10 16:15:29 crc kubenswrapper[4755]: I1210 16:15:29.470146 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16e9a333-4891-48db-bd8b-4e4aba1ed3ee-utilities\") pod \"16e9a333-4891-48db-bd8b-4e4aba1ed3ee\" (UID: \"16e9a333-4891-48db-bd8b-4e4aba1ed3ee\") " Dec 10 16:15:29 crc kubenswrapper[4755]: I1210 16:15:29.470417 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5rn8\" (UniqueName: \"kubernetes.io/projected/16e9a333-4891-48db-bd8b-4e4aba1ed3ee-kube-api-access-d5rn8\") pod \"16e9a333-4891-48db-bd8b-4e4aba1ed3ee\" (UID: \"16e9a333-4891-48db-bd8b-4e4aba1ed3ee\") " Dec 10 16:15:29 crc kubenswrapper[4755]: I1210 16:15:29.472012 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16e9a333-4891-48db-bd8b-4e4aba1ed3ee-utilities" (OuterVolumeSpecName: "utilities") pod "16e9a333-4891-48db-bd8b-4e4aba1ed3ee" (UID: "16e9a333-4891-48db-bd8b-4e4aba1ed3ee"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 16:15:29 crc kubenswrapper[4755]: I1210 16:15:29.476519 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16e9a333-4891-48db-bd8b-4e4aba1ed3ee-kube-api-access-d5rn8" (OuterVolumeSpecName: "kube-api-access-d5rn8") pod "16e9a333-4891-48db-bd8b-4e4aba1ed3ee" (UID: "16e9a333-4891-48db-bd8b-4e4aba1ed3ee"). InnerVolumeSpecName "kube-api-access-d5rn8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 16:15:29 crc kubenswrapper[4755]: I1210 16:15:29.544262 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16e9a333-4891-48db-bd8b-4e4aba1ed3ee-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "16e9a333-4891-48db-bd8b-4e4aba1ed3ee" (UID: "16e9a333-4891-48db-bd8b-4e4aba1ed3ee"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 16:15:29 crc kubenswrapper[4755]: I1210 16:15:29.572643 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5rn8\" (UniqueName: \"kubernetes.io/projected/16e9a333-4891-48db-bd8b-4e4aba1ed3ee-kube-api-access-d5rn8\") on node \"crc\" DevicePath \"\"" Dec 10 16:15:29 crc kubenswrapper[4755]: I1210 16:15:29.572678 4755 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16e9a333-4891-48db-bd8b-4e4aba1ed3ee-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 16:15:29 crc kubenswrapper[4755]: I1210 16:15:29.572687 4755 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16e9a333-4891-48db-bd8b-4e4aba1ed3ee-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 16:15:29 crc kubenswrapper[4755]: I1210 16:15:29.831534 4755 generic.go:334] "Generic (PLEG): container finished" podID="16e9a333-4891-48db-bd8b-4e4aba1ed3ee" containerID="bb57f8503886738662d7ff6fa2b60c623d54bab94296241f252200ef97899401" exitCode=0 Dec 10 16:15:29 crc kubenswrapper[4755]: I1210 16:15:29.831638 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pkp5k" Dec 10 16:15:29 crc kubenswrapper[4755]: I1210 16:15:29.831635 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pkp5k" event={"ID":"16e9a333-4891-48db-bd8b-4e4aba1ed3ee","Type":"ContainerDied","Data":"bb57f8503886738662d7ff6fa2b60c623d54bab94296241f252200ef97899401"} Dec 10 16:15:29 crc kubenswrapper[4755]: I1210 16:15:29.831705 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pkp5k" event={"ID":"16e9a333-4891-48db-bd8b-4e4aba1ed3ee","Type":"ContainerDied","Data":"cd409a647dfdc42aaa5eea1b902ec175909ceb16326f5250aa5d6d2b9cdd0a30"} Dec 10 16:15:29 crc kubenswrapper[4755]: I1210 16:15:29.831725 4755 scope.go:117] "RemoveContainer" containerID="bb57f8503886738662d7ff6fa2b60c623d54bab94296241f252200ef97899401" Dec 10 16:15:29 crc kubenswrapper[4755]: I1210 16:15:29.857771 4755 scope.go:117] "RemoveContainer" containerID="458a9816768156688cd360ea662f699deef6fa576c3fbf6995ed412e9ce34489" Dec 10 16:15:29 crc kubenswrapper[4755]: I1210 16:15:29.863746 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pkp5k"] Dec 10 16:15:29 crc kubenswrapper[4755]: I1210 16:15:29.871986 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pkp5k"] Dec 10 16:15:29 crc kubenswrapper[4755]: I1210 16:15:29.889527 4755 scope.go:117] "RemoveContainer" containerID="0c912b25383d57fbbc6e076ad0b060a3f5410e11c096f52b635a297e8024c398" Dec 10 16:15:29 crc kubenswrapper[4755]: I1210 16:15:29.949959 4755 scope.go:117] "RemoveContainer" containerID="bb57f8503886738662d7ff6fa2b60c623d54bab94296241f252200ef97899401" Dec 10 16:15:29 crc kubenswrapper[4755]: E1210 16:15:29.951282 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb57f8503886738662d7ff6fa2b60c623d54bab94296241f252200ef97899401\": container with ID starting with bb57f8503886738662d7ff6fa2b60c623d54bab94296241f252200ef97899401 not found: ID does not exist" containerID="bb57f8503886738662d7ff6fa2b60c623d54bab94296241f252200ef97899401" Dec 10 16:15:29 crc kubenswrapper[4755]: I1210 16:15:29.951315 
4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb57f8503886738662d7ff6fa2b60c623d54bab94296241f252200ef97899401"} err="failed to get container status \"bb57f8503886738662d7ff6fa2b60c623d54bab94296241f252200ef97899401\": rpc error: code = NotFound desc = could not find container \"bb57f8503886738662d7ff6fa2b60c623d54bab94296241f252200ef97899401\": container with ID starting with bb57f8503886738662d7ff6fa2b60c623d54bab94296241f252200ef97899401 not found: ID does not exist" Dec 10 16:15:29 crc kubenswrapper[4755]: I1210 16:15:29.951341 4755 scope.go:117] "RemoveContainer" containerID="458a9816768156688cd360ea662f699deef6fa576c3fbf6995ed412e9ce34489" Dec 10 16:15:29 crc kubenswrapper[4755]: E1210 16:15:29.951728 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"458a9816768156688cd360ea662f699deef6fa576c3fbf6995ed412e9ce34489\": container with ID starting with 458a9816768156688cd360ea662f699deef6fa576c3fbf6995ed412e9ce34489 not found: ID does not exist" containerID="458a9816768156688cd360ea662f699deef6fa576c3fbf6995ed412e9ce34489" Dec 10 16:15:29 crc kubenswrapper[4755]: I1210 16:15:29.951749 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"458a9816768156688cd360ea662f699deef6fa576c3fbf6995ed412e9ce34489"} err="failed to get container status \"458a9816768156688cd360ea662f699deef6fa576c3fbf6995ed412e9ce34489\": rpc error: code = NotFound desc = could not find container \"458a9816768156688cd360ea662f699deef6fa576c3fbf6995ed412e9ce34489\": container with ID starting with 458a9816768156688cd360ea662f699deef6fa576c3fbf6995ed412e9ce34489 not found: ID does not exist" Dec 10 16:15:29 crc kubenswrapper[4755]: I1210 16:15:29.951763 4755 scope.go:117] "RemoveContainer" containerID="0c912b25383d57fbbc6e076ad0b060a3f5410e11c096f52b635a297e8024c398" Dec 10 16:15:29 crc kubenswrapper[4755]: E1210 16:15:29.952215 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c912b25383d57fbbc6e076ad0b060a3f5410e11c096f52b635a297e8024c398\": container with ID starting with 0c912b25383d57fbbc6e076ad0b060a3f5410e11c096f52b635a297e8024c398 not found: ID does not exist" containerID="0c912b25383d57fbbc6e076ad0b060a3f5410e11c096f52b635a297e8024c398" Dec 10 16:15:29 crc kubenswrapper[4755]: I1210 16:15:29.952273 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c912b25383d57fbbc6e076ad0b060a3f5410e11c096f52b635a297e8024c398"} err="failed to get container status \"0c912b25383d57fbbc6e076ad0b060a3f5410e11c096f52b635a297e8024c398\": rpc error: code = NotFound desc = could not find container \"0c912b25383d57fbbc6e076ad0b060a3f5410e11c096f52b635a297e8024c398\": container with ID starting with 0c912b25383d57fbbc6e076ad0b060a3f5410e11c096f52b635a297e8024c398 not found: ID does not exist" Dec 10 16:15:30 crc kubenswrapper[4755]: E1210 16:15:30.760076 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:15:31 crc kubenswrapper[4755]: I1210 16:15:31.758030 4755 scope.go:117] "RemoveContainer" 
containerID="30d53fb1dd018f11a561066e37ed5aa32e4d43a5cd91c13627ef984f5187f3fd" Dec 10 16:15:31 crc kubenswrapper[4755]: E1210 16:15:31.758308 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:15:31 crc kubenswrapper[4755]: I1210 16:15:31.768740 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16e9a333-4891-48db-bd8b-4e4aba1ed3ee" path="/var/lib/kubelet/pods/16e9a333-4891-48db-bd8b-4e4aba1ed3ee/volumes" Dec 10 16:15:36 crc kubenswrapper[4755]: E1210 16:15:36.759313 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:15:44 crc kubenswrapper[4755]: I1210 16:15:44.759044 4755 scope.go:117] "RemoveContainer" containerID="30d53fb1dd018f11a561066e37ed5aa32e4d43a5cd91c13627ef984f5187f3fd" Dec 10 16:15:44 crc kubenswrapper[4755]: E1210 16:15:44.760137 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:15:44 crc kubenswrapper[4755]: E1210 16:15:44.761848 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:15:48 crc kubenswrapper[4755]: E1210 16:15:48.759790 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:15:57 crc kubenswrapper[4755]: I1210 16:15:57.758454 4755 scope.go:117] "RemoveContainer" containerID="30d53fb1dd018f11a561066e37ed5aa32e4d43a5cd91c13627ef984f5187f3fd" Dec 10 16:15:57 crc kubenswrapper[4755]: E1210 16:15:57.759332 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:15:57 crc kubenswrapper[4755]: E1210 16:15:57.760758 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:16:01 crc kubenswrapper[4755]: E1210 16:16:01.759533 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:16:11 crc kubenswrapper[4755]: I1210 16:16:11.758809 4755 scope.go:117] "RemoveContainer" containerID="30d53fb1dd018f11a561066e37ed5aa32e4d43a5cd91c13627ef984f5187f3fd" Dec 10 16:16:11 crc kubenswrapper[4755]: E1210 16:16:11.759651 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:16:11 crc kubenswrapper[4755]: E1210 16:16:11.760507 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:16:13 crc kubenswrapper[4755]: E1210 16:16:13.787609 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:16:23 crc kubenswrapper[4755]: I1210 16:16:23.767631 4755 scope.go:117] "RemoveContainer" containerID="30d53fb1dd018f11a561066e37ed5aa32e4d43a5cd91c13627ef984f5187f3fd" Dec 10 16:16:23 crc kubenswrapper[4755]: E1210 16:16:23.768446 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:16:23 crc kubenswrapper[4755]: E1210 16:16:23.768739 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:16:28 crc kubenswrapper[4755]: E1210 16:16:28.767078 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:16:36 crc kubenswrapper[4755]: I1210 16:16:36.758028 4755 scope.go:117] "RemoveContainer" containerID="30d53fb1dd018f11a561066e37ed5aa32e4d43a5cd91c13627ef984f5187f3fd" Dec 10 16:16:36 crc kubenswrapper[4755]: E1210 16:16:36.758854 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:16:37 crc kubenswrapper[4755]: E1210 16:16:37.760322 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:16:40 crc kubenswrapper[4755]: E1210 16:16:40.760128 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:16:48 crc kubenswrapper[4755]: I1210 16:16:48.757691 4755 scope.go:117] "RemoveContainer" containerID="30d53fb1dd018f11a561066e37ed5aa32e4d43a5cd91c13627ef984f5187f3fd" Dec 10 16:16:48 crc kubenswrapper[4755]: E1210 16:16:48.758359 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:16:52 crc kubenswrapper[4755]: E1210 16:16:52.759737 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:16:55 crc kubenswrapper[4755]: E1210 16:16:55.758955 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:17:02 crc kubenswrapper[4755]: I1210 16:17:02.757939 4755 scope.go:117] "RemoveContainer" containerID="30d53fb1dd018f11a561066e37ed5aa32e4d43a5cd91c13627ef984f5187f3fd" Dec 10 16:17:02 crc kubenswrapper[4755]: E1210 16:17:02.758731 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:17:04 crc kubenswrapper[4755]: E1210 16:17:04.760758 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:17:08 crc kubenswrapper[4755]: E1210 16:17:08.761274 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:17:15 crc kubenswrapper[4755]: I1210 16:17:15.757865 4755 scope.go:117] "RemoveContainer" containerID="30d53fb1dd018f11a561066e37ed5aa32e4d43a5cd91c13627ef984f5187f3fd" Dec 10 16:17:15 crc kubenswrapper[4755]: E1210 16:17:15.758842 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:17:16 crc kubenswrapper[4755]: E1210 16:17:16.759835 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:17:17 crc kubenswrapper[4755]: I1210 16:17:17.911659 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vkpwz"] Dec 10 16:17:17 crc kubenswrapper[4755]: E1210 16:17:17.912208 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16e9a333-4891-48db-bd8b-4e4aba1ed3ee" containerName="registry-server" Dec 10 16:17:17 crc kubenswrapper[4755]: I1210 16:17:17.912228 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="16e9a333-4891-48db-bd8b-4e4aba1ed3ee" containerName="registry-server" Dec 10 16:17:17 crc kubenswrapper[4755]: E1210 16:17:17.912253 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16e9a333-4891-48db-bd8b-4e4aba1ed3ee" containerName="extract-utilities" Dec 10 16:17:17 crc kubenswrapper[4755]: I1210 16:17:17.912263 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="16e9a333-4891-48db-bd8b-4e4aba1ed3ee" containerName="extract-utilities" Dec 10 16:17:17 crc kubenswrapper[4755]: E1210 16:17:17.912281 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16e9a333-4891-48db-bd8b-4e4aba1ed3ee" containerName="extract-content" Dec 10 16:17:17 crc kubenswrapper[4755]: I1210 16:17:17.912288 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="16e9a333-4891-48db-bd8b-4e4aba1ed3ee" containerName="extract-content" Dec 10 
16:17:17 crc kubenswrapper[4755]: I1210 16:17:17.912594 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="16e9a333-4891-48db-bd8b-4e4aba1ed3ee" containerName="registry-server" Dec 10 16:17:17 crc kubenswrapper[4755]: I1210 16:17:17.914882 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vkpwz" Dec 10 16:17:17 crc kubenswrapper[4755]: I1210 16:17:17.925954 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vkpwz"] Dec 10 16:17:18 crc kubenswrapper[4755]: I1210 16:17:18.057770 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b82d56bb-d73e-4ffb-9170-7f8b924c1b76-catalog-content\") pod \"redhat-operators-vkpwz\" (UID: \"b82d56bb-d73e-4ffb-9170-7f8b924c1b76\") " pod="openshift-marketplace/redhat-operators-vkpwz" Dec 10 16:17:18 crc kubenswrapper[4755]: I1210 16:17:18.057836 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kdpn\" (UniqueName: \"kubernetes.io/projected/b82d56bb-d73e-4ffb-9170-7f8b924c1b76-kube-api-access-7kdpn\") pod \"redhat-operators-vkpwz\" (UID: \"b82d56bb-d73e-4ffb-9170-7f8b924c1b76\") " pod="openshift-marketplace/redhat-operators-vkpwz" Dec 10 16:17:18 crc kubenswrapper[4755]: I1210 16:17:18.057978 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b82d56bb-d73e-4ffb-9170-7f8b924c1b76-utilities\") pod \"redhat-operators-vkpwz\" (UID: \"b82d56bb-d73e-4ffb-9170-7f8b924c1b76\") " pod="openshift-marketplace/redhat-operators-vkpwz" Dec 10 16:17:18 crc kubenswrapper[4755]: I1210 16:17:18.160655 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b82d56bb-d73e-4ffb-9170-7f8b924c1b76-catalog-content\") pod \"redhat-operators-vkpwz\" (UID: \"b82d56bb-d73e-4ffb-9170-7f8b924c1b76\") " pod="openshift-marketplace/redhat-operators-vkpwz" Dec 10 16:17:18 crc kubenswrapper[4755]: I1210 16:17:18.160713 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kdpn\" (UniqueName: \"kubernetes.io/projected/b82d56bb-d73e-4ffb-9170-7f8b924c1b76-kube-api-access-7kdpn\") pod \"redhat-operators-vkpwz\" (UID: \"b82d56bb-d73e-4ffb-9170-7f8b924c1b76\") " pod="openshift-marketplace/redhat-operators-vkpwz" Dec 10 16:17:18 crc kubenswrapper[4755]: I1210 16:17:18.160764 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b82d56bb-d73e-4ffb-9170-7f8b924c1b76-utilities\") pod \"redhat-operators-vkpwz\" (UID: \"b82d56bb-d73e-4ffb-9170-7f8b924c1b76\") " pod="openshift-marketplace/redhat-operators-vkpwz" Dec 10 16:17:18 crc kubenswrapper[4755]: I1210 16:17:18.161172 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b82d56bb-d73e-4ffb-9170-7f8b924c1b76-catalog-content\") pod \"redhat-operators-vkpwz\" (UID: \"b82d56bb-d73e-4ffb-9170-7f8b924c1b76\") " pod="openshift-marketplace/redhat-operators-vkpwz" Dec 10 16:17:18 crc kubenswrapper[4755]: I1210 16:17:18.161297 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/b82d56bb-d73e-4ffb-9170-7f8b924c1b76-utilities\") pod \"redhat-operators-vkpwz\" (UID: \"b82d56bb-d73e-4ffb-9170-7f8b924c1b76\") " pod="openshift-marketplace/redhat-operators-vkpwz" Dec 10 16:17:18 crc kubenswrapper[4755]: I1210 16:17:18.180035 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kdpn\" (UniqueName: \"kubernetes.io/projected/b82d56bb-d73e-4ffb-9170-7f8b924c1b76-kube-api-access-7kdpn\") pod \"redhat-operators-vkpwz\" (UID: \"b82d56bb-d73e-4ffb-9170-7f8b924c1b76\") " pod="openshift-marketplace/redhat-operators-vkpwz" Dec 10 16:17:18 crc kubenswrapper[4755]: I1210 16:17:18.255391 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vkpwz" Dec 10 16:17:18 crc kubenswrapper[4755]: I1210 16:17:18.815067 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vkpwz"] Dec 10 16:17:19 crc kubenswrapper[4755]: I1210 16:17:19.028365 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vkpwz" event={"ID":"b82d56bb-d73e-4ffb-9170-7f8b924c1b76","Type":"ContainerStarted","Data":"cd50aab22ced63f5777860caf1c4112cf645e2ac3694ea099c8d98fdd93c04b8"} Dec 10 16:17:20 crc kubenswrapper[4755]: I1210 16:17:20.039445 4755 generic.go:334] "Generic (PLEG): container finished" podID="b82d56bb-d73e-4ffb-9170-7f8b924c1b76" containerID="ef235453a664e99d128add9bfecd8e6637030e6b99278f2726a0b59c7442eaaf" exitCode=0 Dec 10 16:17:20 crc kubenswrapper[4755]: I1210 16:17:20.039525 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vkpwz" event={"ID":"b82d56bb-d73e-4ffb-9170-7f8b924c1b76","Type":"ContainerDied","Data":"ef235453a664e99d128add9bfecd8e6637030e6b99278f2726a0b59c7442eaaf"} Dec 10 16:17:22 crc kubenswrapper[4755]: I1210 16:17:22.060034 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vkpwz" event={"ID":"b82d56bb-d73e-4ffb-9170-7f8b924c1b76","Type":"ContainerStarted","Data":"abdbd584b7dc65fbd47d490a46b728b6103ebaa793107f6ddd35496fb4212efb"} Dec 10 16:17:23 crc kubenswrapper[4755]: E1210 16:17:23.792833 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:17:25 crc kubenswrapper[4755]: I1210 16:17:25.091027 4755 generic.go:334] "Generic (PLEG): container finished" podID="b82d56bb-d73e-4ffb-9170-7f8b924c1b76" containerID="abdbd584b7dc65fbd47d490a46b728b6103ebaa793107f6ddd35496fb4212efb" exitCode=0 Dec 10 16:17:25 crc kubenswrapper[4755]: I1210 16:17:25.091090 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vkpwz" event={"ID":"b82d56bb-d73e-4ffb-9170-7f8b924c1b76","Type":"ContainerDied","Data":"abdbd584b7dc65fbd47d490a46b728b6103ebaa793107f6ddd35496fb4212efb"} Dec 10 16:17:27 crc kubenswrapper[4755]: I1210 16:17:27.116300 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vkpwz" event={"ID":"b82d56bb-d73e-4ffb-9170-7f8b924c1b76","Type":"ContainerStarted","Data":"e3ff906da6aaf58060014de364ae1e623c684e8f4f61b997d911f5775cb50a49"} Dec 10 16:17:27 crc kubenswrapper[4755]: I1210 16:17:27.141114 4755 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vkpwz" podStartSLOduration=3.924811637 podStartE2EDuration="10.141087952s" podCreationTimestamp="2025-12-10 16:17:17 +0000 UTC" firstStartedPulling="2025-12-10 16:17:20.041447785 +0000 UTC m=+3236.642331417" lastFinishedPulling="2025-12-10 16:17:26.2577241 +0000 UTC m=+3242.858607732" observedRunningTime="2025-12-10 16:17:27.133924726 +0000 UTC m=+3243.734808358" watchObservedRunningTime="2025-12-10 16:17:27.141087952 +0000 UTC m=+3243.741971584" Dec 10 16:17:28 crc kubenswrapper[4755]: I1210 16:17:28.255840 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vkpwz" Dec 10 16:17:28 crc kubenswrapper[4755]: I1210 16:17:28.255910 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vkpwz" Dec 10 16:17:28 crc kubenswrapper[4755]: E1210 16:17:28.759674 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:17:29 crc kubenswrapper[4755]: I1210 16:17:29.304446 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vkpwz" podUID="b82d56bb-d73e-4ffb-9170-7f8b924c1b76" containerName="registry-server" probeResult="failure" output=< Dec 10 16:17:29 crc kubenswrapper[4755]: timeout: failed to connect service ":50051" within 1s Dec 10 16:17:29 crc kubenswrapper[4755]: > Dec 10 16:17:30 crc kubenswrapper[4755]: I1210 16:17:30.757953 4755 scope.go:117] "RemoveContainer" containerID="30d53fb1dd018f11a561066e37ed5aa32e4d43a5cd91c13627ef984f5187f3fd" Dec 10 16:17:30 crc kubenswrapper[4755]: E1210 16:17:30.758447 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:17:34 crc kubenswrapper[4755]: E1210 16:17:34.759699 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:17:38 crc kubenswrapper[4755]: I1210 16:17:38.304364 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vkpwz" Dec 10 16:17:38 crc kubenswrapper[4755]: I1210 16:17:38.358952 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vkpwz" Dec 10 16:17:38 crc kubenswrapper[4755]: I1210 16:17:38.543902 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vkpwz"] Dec 10 16:17:40 crc kubenswrapper[4755]: I1210 16:17:40.232571 4755 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-operators-vkpwz" podUID="b82d56bb-d73e-4ffb-9170-7f8b924c1b76" containerName="registry-server" containerID="cri-o://e3ff906da6aaf58060014de364ae1e623c684e8f4f61b997d911f5775cb50a49" gracePeriod=2 Dec 10 16:17:40 crc kubenswrapper[4755]: I1210 16:17:40.815169 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vkpwz" Dec 10 16:17:40 crc kubenswrapper[4755]: I1210 16:17:40.958027 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kdpn\" (UniqueName: \"kubernetes.io/projected/b82d56bb-d73e-4ffb-9170-7f8b924c1b76-kube-api-access-7kdpn\") pod \"b82d56bb-d73e-4ffb-9170-7f8b924c1b76\" (UID: \"b82d56bb-d73e-4ffb-9170-7f8b924c1b76\") " Dec 10 16:17:40 crc kubenswrapper[4755]: I1210 16:17:40.958097 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b82d56bb-d73e-4ffb-9170-7f8b924c1b76-catalog-content\") pod \"b82d56bb-d73e-4ffb-9170-7f8b924c1b76\" (UID: \"b82d56bb-d73e-4ffb-9170-7f8b924c1b76\") " Dec 10 16:17:40 crc kubenswrapper[4755]: I1210 16:17:40.958257 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b82d56bb-d73e-4ffb-9170-7f8b924c1b76-utilities\") pod \"b82d56bb-d73e-4ffb-9170-7f8b924c1b76\" (UID: \"b82d56bb-d73e-4ffb-9170-7f8b924c1b76\") " Dec 10 16:17:40 crc kubenswrapper[4755]: I1210 16:17:40.959209 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b82d56bb-d73e-4ffb-9170-7f8b924c1b76-utilities" (OuterVolumeSpecName: "utilities") pod "b82d56bb-d73e-4ffb-9170-7f8b924c1b76" (UID: "b82d56bb-d73e-4ffb-9170-7f8b924c1b76"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 16:17:40 crc kubenswrapper[4755]: I1210 16:17:40.969339 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b82d56bb-d73e-4ffb-9170-7f8b924c1b76-kube-api-access-7kdpn" (OuterVolumeSpecName: "kube-api-access-7kdpn") pod "b82d56bb-d73e-4ffb-9170-7f8b924c1b76" (UID: "b82d56bb-d73e-4ffb-9170-7f8b924c1b76"). InnerVolumeSpecName "kube-api-access-7kdpn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 16:17:41 crc kubenswrapper[4755]: I1210 16:17:41.060293 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kdpn\" (UniqueName: \"kubernetes.io/projected/b82d56bb-d73e-4ffb-9170-7f8b924c1b76-kube-api-access-7kdpn\") on node \"crc\" DevicePath \"\"" Dec 10 16:17:41 crc kubenswrapper[4755]: I1210 16:17:41.060325 4755 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b82d56bb-d73e-4ffb-9170-7f8b924c1b76-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 16:17:41 crc kubenswrapper[4755]: I1210 16:17:41.074966 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b82d56bb-d73e-4ffb-9170-7f8b924c1b76-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b82d56bb-d73e-4ffb-9170-7f8b924c1b76" (UID: "b82d56bb-d73e-4ffb-9170-7f8b924c1b76"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 16:17:41 crc kubenswrapper[4755]: I1210 16:17:41.162181 4755 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b82d56bb-d73e-4ffb-9170-7f8b924c1b76-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 16:17:41 crc kubenswrapper[4755]: I1210 16:17:41.248048 4755 generic.go:334] "Generic (PLEG): container finished" podID="b82d56bb-d73e-4ffb-9170-7f8b924c1b76" containerID="e3ff906da6aaf58060014de364ae1e623c684e8f4f61b997d911f5775cb50a49" exitCode=0 Dec 10 16:17:41 crc kubenswrapper[4755]: I1210 16:17:41.248093 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vkpwz" Dec 10 16:17:41 crc kubenswrapper[4755]: I1210 16:17:41.248124 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vkpwz" event={"ID":"b82d56bb-d73e-4ffb-9170-7f8b924c1b76","Type":"ContainerDied","Data":"e3ff906da6aaf58060014de364ae1e623c684e8f4f61b997d911f5775cb50a49"} Dec 10 16:17:41 crc kubenswrapper[4755]: I1210 16:17:41.248206 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vkpwz" event={"ID":"b82d56bb-d73e-4ffb-9170-7f8b924c1b76","Type":"ContainerDied","Data":"cd50aab22ced63f5777860caf1c4112cf645e2ac3694ea099c8d98fdd93c04b8"} Dec 10 16:17:41 crc kubenswrapper[4755]: I1210 16:17:41.248241 4755 scope.go:117] "RemoveContainer" containerID="e3ff906da6aaf58060014de364ae1e623c684e8f4f61b997d911f5775cb50a49" Dec 10 16:17:41 crc kubenswrapper[4755]: I1210 16:17:41.281581 4755 scope.go:117] "RemoveContainer" containerID="abdbd584b7dc65fbd47d490a46b728b6103ebaa793107f6ddd35496fb4212efb" Dec 10 16:17:41 crc kubenswrapper[4755]: I1210 16:17:41.293754 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vkpwz"] Dec 10 16:17:41 crc kubenswrapper[4755]: I1210 16:17:41.299706 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vkpwz"] Dec 10 16:17:41 crc kubenswrapper[4755]: I1210 16:17:41.358581 4755 scope.go:117] "RemoveContainer" containerID="ef235453a664e99d128add9bfecd8e6637030e6b99278f2726a0b59c7442eaaf" Dec 10 16:17:41 crc kubenswrapper[4755]: I1210 16:17:41.389211 4755 scope.go:117] "RemoveContainer" containerID="e3ff906da6aaf58060014de364ae1e623c684e8f4f61b997d911f5775cb50a49" Dec 10 16:17:41 crc kubenswrapper[4755]: E1210 16:17:41.389828 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3ff906da6aaf58060014de364ae1e623c684e8f4f61b997d911f5775cb50a49\": container with ID starting with e3ff906da6aaf58060014de364ae1e623c684e8f4f61b997d911f5775cb50a49 not found: ID does not exist" containerID="e3ff906da6aaf58060014de364ae1e623c684e8f4f61b997d911f5775cb50a49" Dec 10 16:17:41 crc kubenswrapper[4755]: I1210 16:17:41.389862 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3ff906da6aaf58060014de364ae1e623c684e8f4f61b997d911f5775cb50a49"} err="failed to get container status \"e3ff906da6aaf58060014de364ae1e623c684e8f4f61b997d911f5775cb50a49\": rpc error: code = NotFound desc = could not find container \"e3ff906da6aaf58060014de364ae1e623c684e8f4f61b997d911f5775cb50a49\": container with ID starting with e3ff906da6aaf58060014de364ae1e623c684e8f4f61b997d911f5775cb50a49 not found: ID does not exist" Dec 10 16:17:41 crc 
kubenswrapper[4755]: I1210 16:17:41.389883 4755 scope.go:117] "RemoveContainer" containerID="abdbd584b7dc65fbd47d490a46b728b6103ebaa793107f6ddd35496fb4212efb" Dec 10 16:17:41 crc kubenswrapper[4755]: E1210 16:17:41.390502 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"abdbd584b7dc65fbd47d490a46b728b6103ebaa793107f6ddd35496fb4212efb\": container with ID starting with abdbd584b7dc65fbd47d490a46b728b6103ebaa793107f6ddd35496fb4212efb not found: ID does not exist" containerID="abdbd584b7dc65fbd47d490a46b728b6103ebaa793107f6ddd35496fb4212efb" Dec 10 16:17:41 crc kubenswrapper[4755]: I1210 16:17:41.390595 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abdbd584b7dc65fbd47d490a46b728b6103ebaa793107f6ddd35496fb4212efb"} err="failed to get container status \"abdbd584b7dc65fbd47d490a46b728b6103ebaa793107f6ddd35496fb4212efb\": rpc error: code = NotFound desc = could not find container \"abdbd584b7dc65fbd47d490a46b728b6103ebaa793107f6ddd35496fb4212efb\": container with ID starting with abdbd584b7dc65fbd47d490a46b728b6103ebaa793107f6ddd35496fb4212efb not found: ID does not exist" Dec 10 16:17:41 crc kubenswrapper[4755]: I1210 16:17:41.390680 4755 scope.go:117] "RemoveContainer" containerID="ef235453a664e99d128add9bfecd8e6637030e6b99278f2726a0b59c7442eaaf" Dec 10 16:17:41 crc kubenswrapper[4755]: E1210 16:17:41.391205 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef235453a664e99d128add9bfecd8e6637030e6b99278f2726a0b59c7442eaaf\": container with ID starting with ef235453a664e99d128add9bfecd8e6637030e6b99278f2726a0b59c7442eaaf not found: ID does not exist" containerID="ef235453a664e99d128add9bfecd8e6637030e6b99278f2726a0b59c7442eaaf" Dec 10 16:17:41 crc kubenswrapper[4755]: I1210 16:17:41.391287 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef235453a664e99d128add9bfecd8e6637030e6b99278f2726a0b59c7442eaaf"} err="failed to get container status \"ef235453a664e99d128add9bfecd8e6637030e6b99278f2726a0b59c7442eaaf\": rpc error: code = NotFound desc = could not find container \"ef235453a664e99d128add9bfecd8e6637030e6b99278f2726a0b59c7442eaaf\": container with ID starting with ef235453a664e99d128add9bfecd8e6637030e6b99278f2726a0b59c7442eaaf not found: ID does not exist" Dec 10 16:17:41 crc kubenswrapper[4755]: I1210 16:17:41.758783 4755 scope.go:117] "RemoveContainer" containerID="30d53fb1dd018f11a561066e37ed5aa32e4d43a5cd91c13627ef984f5187f3fd" Dec 10 16:17:41 crc kubenswrapper[4755]: E1210 16:17:41.759144 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:17:41 crc kubenswrapper[4755]: E1210 16:17:41.761306 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:17:41 
crc kubenswrapper[4755]: I1210 16:17:41.769889 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b82d56bb-d73e-4ffb-9170-7f8b924c1b76" path="/var/lib/kubelet/pods/b82d56bb-d73e-4ffb-9170-7f8b924c1b76/volumes" Dec 10 16:17:49 crc kubenswrapper[4755]: E1210 16:17:49.760067 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:17:54 crc kubenswrapper[4755]: I1210 16:17:54.757751 4755 scope.go:117] "RemoveContainer" containerID="30d53fb1dd018f11a561066e37ed5aa32e4d43a5cd91c13627ef984f5187f3fd" Dec 10 16:17:54 crc kubenswrapper[4755]: E1210 16:17:54.758673 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:17:54 crc kubenswrapper[4755]: E1210 16:17:54.759501 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:18:03 crc kubenswrapper[4755]: E1210 16:18:03.768939 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:18:09 crc kubenswrapper[4755]: I1210 16:18:09.757590 4755 scope.go:117] "RemoveContainer" containerID="30d53fb1dd018f11a561066e37ed5aa32e4d43a5cd91c13627ef984f5187f3fd" Dec 10 16:18:09 crc kubenswrapper[4755]: E1210 16:18:09.758394 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:18:09 crc kubenswrapper[4755]: E1210 16:18:09.760222 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:18:17 crc kubenswrapper[4755]: E1210 16:18:17.760589 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" 
pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:18:22 crc kubenswrapper[4755]: I1210 16:18:22.759661 4755 scope.go:117] "RemoveContainer" containerID="30d53fb1dd018f11a561066e37ed5aa32e4d43a5cd91c13627ef984f5187f3fd" Dec 10 16:18:22 crc kubenswrapper[4755]: E1210 16:18:22.761286 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:18:23 crc kubenswrapper[4755]: I1210 16:18:23.660967 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" event={"ID":"b132a8b9-1c99-414d-8773-229bf36b305d","Type":"ContainerStarted","Data":"5b27a5e0503cafb735ea7d1e2d88d5085b602a7e219184192e9541ad489864c6"} Dec 10 16:18:28 crc kubenswrapper[4755]: E1210 16:18:28.759915 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:18:34 crc kubenswrapper[4755]: E1210 16:18:34.759607 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:18:40 crc kubenswrapper[4755]: E1210 16:18:40.761363 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:18:45 crc kubenswrapper[4755]: E1210 16:18:45.760365 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:18:52 crc kubenswrapper[4755]: E1210 16:18:52.760227 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:18:58 crc kubenswrapper[4755]: E1210 16:18:58.759330 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:19:03 crc kubenswrapper[4755]: E1210 16:19:03.768949 4755 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:19:12 crc kubenswrapper[4755]: E1210 16:19:12.761495 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:19:15 crc kubenswrapper[4755]: E1210 16:19:15.760581 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:19:24 crc kubenswrapper[4755]: E1210 16:19:24.884122 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 10 16:19:24 crc kubenswrapper[4755]: E1210 16:19:24.884673 4755 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 10 16:19:24 crc kubenswrapper[4755]: E1210 16:19:24.884842 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mz4t5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-jfc28_openstack(998863b6-4f48-4c8b-8011-a40377686b99): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 10 16:19:24 crc kubenswrapper[4755]: E1210 16:19:24.886033 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:19:29 crc kubenswrapper[4755]: E1210 16:19:29.760282 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:19:33 crc kubenswrapper[4755]: I1210 16:19:33.364634 4755 generic.go:334] "Generic (PLEG): container finished" podID="b4ab39f5-c779-4d0c-9497-5c7c567dc0bc" containerID="f0d16bf7bc5081ec05d695b294f206e1d14b79f4e1baef6723689056387aea8b" exitCode=2 Dec 10 16:19:33 crc kubenswrapper[4755]: I1210 16:19:33.364716 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mwqmw" event={"ID":"b4ab39f5-c779-4d0c-9497-5c7c567dc0bc","Type":"ContainerDied","Data":"f0d16bf7bc5081ec05d695b294f206e1d14b79f4e1baef6723689056387aea8b"} Dec 10 16:19:34 crc kubenswrapper[4755]: I1210 16:19:34.998769 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mwqmw" Dec 10 16:19:35 crc kubenswrapper[4755]: I1210 16:19:35.103071 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b4ab39f5-c779-4d0c-9497-5c7c567dc0bc-inventory\") pod \"b4ab39f5-c779-4d0c-9497-5c7c567dc0bc\" (UID: \"b4ab39f5-c779-4d0c-9497-5c7c567dc0bc\") " Dec 10 16:19:35 crc kubenswrapper[4755]: I1210 16:19:35.103282 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b4ab39f5-c779-4d0c-9497-5c7c567dc0bc-ssh-key\") pod \"b4ab39f5-c779-4d0c-9497-5c7c567dc0bc\" (UID: \"b4ab39f5-c779-4d0c-9497-5c7c567dc0bc\") " Dec 10 16:19:35 crc kubenswrapper[4755]: I1210 16:19:35.103316 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wsqf\" (UniqueName: \"kubernetes.io/projected/b4ab39f5-c779-4d0c-9497-5c7c567dc0bc-kube-api-access-5wsqf\") pod \"b4ab39f5-c779-4d0c-9497-5c7c567dc0bc\" (UID: \"b4ab39f5-c779-4d0c-9497-5c7c567dc0bc\") " Dec 10 16:19:35 crc kubenswrapper[4755]: I1210 16:19:35.108277 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4ab39f5-c779-4d0c-9497-5c7c567dc0bc-kube-api-access-5wsqf" (OuterVolumeSpecName: "kube-api-access-5wsqf") pod "b4ab39f5-c779-4d0c-9497-5c7c567dc0bc" (UID: "b4ab39f5-c779-4d0c-9497-5c7c567dc0bc"). InnerVolumeSpecName "kube-api-access-5wsqf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 16:19:35 crc kubenswrapper[4755]: I1210 16:19:35.138141 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4ab39f5-c779-4d0c-9497-5c7c567dc0bc-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b4ab39f5-c779-4d0c-9497-5c7c567dc0bc" (UID: "b4ab39f5-c779-4d0c-9497-5c7c567dc0bc"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 16:19:35 crc kubenswrapper[4755]: I1210 16:19:35.139975 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4ab39f5-c779-4d0c-9497-5c7c567dc0bc-inventory" (OuterVolumeSpecName: "inventory") pod "b4ab39f5-c779-4d0c-9497-5c7c567dc0bc" (UID: "b4ab39f5-c779-4d0c-9497-5c7c567dc0bc"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 16:19:35 crc kubenswrapper[4755]: I1210 16:19:35.205918 4755 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b4ab39f5-c779-4d0c-9497-5c7c567dc0bc-inventory\") on node \"crc\" DevicePath \"\"" Dec 10 16:19:35 crc kubenswrapper[4755]: I1210 16:19:35.205947 4755 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b4ab39f5-c779-4d0c-9497-5c7c567dc0bc-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 10 16:19:35 crc kubenswrapper[4755]: I1210 16:19:35.205956 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wsqf\" (UniqueName: \"kubernetes.io/projected/b4ab39f5-c779-4d0c-9497-5c7c567dc0bc-kube-api-access-5wsqf\") on node \"crc\" DevicePath \"\"" Dec 10 16:19:35 crc kubenswrapper[4755]: I1210 16:19:35.382128 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mwqmw" event={"ID":"b4ab39f5-c779-4d0c-9497-5c7c567dc0bc","Type":"ContainerDied","Data":"e369f2df46e0b5da1629462d690027c09e3d7418cada8479415f73d6925f5d24"} Dec 10 16:19:35 crc kubenswrapper[4755]: I1210 16:19:35.382480 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e369f2df46e0b5da1629462d690027c09e3d7418cada8479415f73d6925f5d24" Dec 10 16:19:35 crc kubenswrapper[4755]: I1210 16:19:35.382192 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mwqmw" Dec 10 16:19:37 crc kubenswrapper[4755]: E1210 16:19:37.759744 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:19:44 crc kubenswrapper[4755]: E1210 16:19:44.760772 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:19:49 crc kubenswrapper[4755]: E1210 16:19:49.761613 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:19:55 crc kubenswrapper[4755]: I1210 16:19:55.762701 4755 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 10 16:19:55 crc kubenswrapper[4755]: E1210 16:19:55.891855 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 10 16:19:55 crc kubenswrapper[4755]: E1210 16:19:55.891924 4755 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 10 16:19:55 crc kubenswrapper[4755]: E1210 16:19:55.892046 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5d4h5b7hfbh5ddh688h9ch55bh7chf6h5ddh68ch94h69h5c5h596h59bh569hfchc4h676hcbh64dhdbh57fh75h5c9h98h59ch679h566h77h9cq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hw9gj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(6d104bea-ecdc-4fe1-9861-fb1a19fce845): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 10 16:19:55 crc kubenswrapper[4755]: E1210 16:19:55.893262 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:20:00 crc kubenswrapper[4755]: E1210 16:20:00.760094 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:20:07 crc kubenswrapper[4755]: E1210 16:20:07.759967 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:20:13 crc kubenswrapper[4755]: E1210 16:20:13.773766 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:20:21 crc kubenswrapper[4755]: E1210 16:20:21.759364 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:20:27 crc kubenswrapper[4755]: E1210 16:20:27.761782 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:20:34 crc kubenswrapper[4755]: E1210 16:20:34.760517 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:20:40 crc kubenswrapper[4755]: I1210 16:20:40.359747 4755 patch_prober.go:28] interesting pod/machine-config-daemon-ggt8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 16:20:40 crc kubenswrapper[4755]: I1210 16:20:40.360236 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 16:20:41 crc kubenswrapper[4755]: E1210 16:20:41.764711 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:20:47 crc kubenswrapper[4755]: E1210 16:20:47.760027 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:20:52 crc kubenswrapper[4755]: I1210 16:20:52.039139 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8fhgf"] Dec 10 16:20:52 crc kubenswrapper[4755]: E1210 16:20:52.041068 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b82d56bb-d73e-4ffb-9170-7f8b924c1b76" containerName="extract-content" Dec 10 16:20:52 crc kubenswrapper[4755]: I1210 16:20:52.041090 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="b82d56bb-d73e-4ffb-9170-7f8b924c1b76" containerName="extract-content" Dec 10 16:20:52 crc kubenswrapper[4755]: E1210 16:20:52.041110 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b82d56bb-d73e-4ffb-9170-7f8b924c1b76" containerName="extract-utilities" Dec 10 16:20:52 crc kubenswrapper[4755]: I1210 16:20:52.041120 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="b82d56bb-d73e-4ffb-9170-7f8b924c1b76" containerName="extract-utilities" Dec 10 16:20:52 crc kubenswrapper[4755]: E1210 16:20:52.041129 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4ab39f5-c779-4d0c-9497-5c7c567dc0bc" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 10 16:20:52 crc kubenswrapper[4755]: I1210 16:20:52.041140 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4ab39f5-c779-4d0c-9497-5c7c567dc0bc" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 10 16:20:52 crc kubenswrapper[4755]: E1210 16:20:52.041157 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b82d56bb-d73e-4ffb-9170-7f8b924c1b76" containerName="registry-server" Dec 10 16:20:52 crc kubenswrapper[4755]: I1210 16:20:52.041166 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="b82d56bb-d73e-4ffb-9170-7f8b924c1b76" containerName="registry-server" Dec 10 16:20:52 crc kubenswrapper[4755]: I1210 16:20:52.041457 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="b82d56bb-d73e-4ffb-9170-7f8b924c1b76" containerName="registry-server" Dec 10 16:20:52 crc kubenswrapper[4755]: I1210 16:20:52.041487 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4ab39f5-c779-4d0c-9497-5c7c567dc0bc" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 10 16:20:52 crc kubenswrapper[4755]: I1210 16:20:52.043059 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8fhgf" Dec 10 16:20:52 crc kubenswrapper[4755]: I1210 16:20:52.046362 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 10 16:20:52 crc kubenswrapper[4755]: I1210 16:20:52.046384 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 10 16:20:52 crc kubenswrapper[4755]: I1210 16:20:52.046434 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 10 16:20:52 crc kubenswrapper[4755]: I1210 16:20:52.048842 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-74mg7" Dec 10 16:20:52 crc kubenswrapper[4755]: I1210 16:20:52.053210 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8fhgf"] Dec 10 16:20:52 crc kubenswrapper[4755]: I1210 16:20:52.149655 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/696db2de-32c6-4679-965f-ec8d2a52ae64-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-8fhgf\" (UID: \"696db2de-32c6-4679-965f-ec8d2a52ae64\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8fhgf" Dec 10 16:20:52 crc kubenswrapper[4755]: I1210 16:20:52.149724 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqjsk\" (UniqueName: \"kubernetes.io/projected/696db2de-32c6-4679-965f-ec8d2a52ae64-kube-api-access-lqjsk\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-8fhgf\" (UID: \"696db2de-32c6-4679-965f-ec8d2a52ae64\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8fhgf" Dec 10 16:20:52 crc kubenswrapper[4755]: I1210 16:20:52.149893 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/696db2de-32c6-4679-965f-ec8d2a52ae64-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-8fhgf\" (UID: \"696db2de-32c6-4679-965f-ec8d2a52ae64\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8fhgf" Dec 10 16:20:52 crc kubenswrapper[4755]: I1210 16:20:52.251565 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/696db2de-32c6-4679-965f-ec8d2a52ae64-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-8fhgf\" (UID: \"696db2de-32c6-4679-965f-ec8d2a52ae64\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8fhgf" Dec 10 16:20:52 crc kubenswrapper[4755]: I1210 16:20:52.251922 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/696db2de-32c6-4679-965f-ec8d2a52ae64-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-8fhgf\" (UID: \"696db2de-32c6-4679-965f-ec8d2a52ae64\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8fhgf" Dec 10 16:20:52 crc kubenswrapper[4755]: I1210 16:20:52.251967 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqjsk\" (UniqueName: \"kubernetes.io/projected/696db2de-32c6-4679-965f-ec8d2a52ae64-kube-api-access-lqjsk\") pod 
\"download-cache-edpm-deployment-openstack-edpm-ipam-8fhgf\" (UID: \"696db2de-32c6-4679-965f-ec8d2a52ae64\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8fhgf" Dec 10 16:20:52 crc kubenswrapper[4755]: I1210 16:20:52.257140 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/696db2de-32c6-4679-965f-ec8d2a52ae64-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-8fhgf\" (UID: \"696db2de-32c6-4679-965f-ec8d2a52ae64\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8fhgf" Dec 10 16:20:52 crc kubenswrapper[4755]: I1210 16:20:52.257600 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/696db2de-32c6-4679-965f-ec8d2a52ae64-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-8fhgf\" (UID: \"696db2de-32c6-4679-965f-ec8d2a52ae64\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8fhgf" Dec 10 16:20:52 crc kubenswrapper[4755]: I1210 16:20:52.272436 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqjsk\" (UniqueName: \"kubernetes.io/projected/696db2de-32c6-4679-965f-ec8d2a52ae64-kube-api-access-lqjsk\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-8fhgf\" (UID: \"696db2de-32c6-4679-965f-ec8d2a52ae64\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8fhgf" Dec 10 16:20:52 crc kubenswrapper[4755]: I1210 16:20:52.367059 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8fhgf" Dec 10 16:20:52 crc kubenswrapper[4755]: I1210 16:20:52.941999 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8fhgf"] Dec 10 16:20:53 crc kubenswrapper[4755]: I1210 16:20:53.090689 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8fhgf" event={"ID":"696db2de-32c6-4679-965f-ec8d2a52ae64","Type":"ContainerStarted","Data":"7a2485859a37c18ccbf25fa6a6e1327831b010ac894819858e30e6e38a452440"} Dec 10 16:20:53 crc kubenswrapper[4755]: E1210 16:20:53.758966 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:20:54 crc kubenswrapper[4755]: I1210 16:20:54.101433 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8fhgf" event={"ID":"696db2de-32c6-4679-965f-ec8d2a52ae64","Type":"ContainerStarted","Data":"d540a036bb09a3fa4ecc89a95d169e76c4b9ef7a0e8544bceabd1f3dd47f7ca9"} Dec 10 16:20:54 crc kubenswrapper[4755]: I1210 16:20:54.115583 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8fhgf" podStartSLOduration=1.5175189100000002 podStartE2EDuration="2.115560686s" podCreationTimestamp="2025-12-10 16:20:52 +0000 UTC" firstStartedPulling="2025-12-10 16:20:52.941952552 +0000 UTC m=+3449.542836184" lastFinishedPulling="2025-12-10 16:20:53.539994308 +0000 UTC m=+3450.140877960" observedRunningTime="2025-12-10 16:20:54.113056417 +0000 UTC 
m=+3450.713940049" watchObservedRunningTime="2025-12-10 16:20:54.115560686 +0000 UTC m=+3450.716444318" Dec 10 16:20:56 crc kubenswrapper[4755]: I1210 16:20:56.633909 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-j5qxz"] Dec 10 16:20:56 crc kubenswrapper[4755]: I1210 16:20:56.636538 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j5qxz" Dec 10 16:20:56 crc kubenswrapper[4755]: I1210 16:20:56.654112 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j5qxz"] Dec 10 16:20:56 crc kubenswrapper[4755]: I1210 16:20:56.743763 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwvv2\" (UniqueName: \"kubernetes.io/projected/6e681f66-5723-412a-bc78-b2d4f5131f50-kube-api-access-xwvv2\") pod \"redhat-marketplace-j5qxz\" (UID: \"6e681f66-5723-412a-bc78-b2d4f5131f50\") " pod="openshift-marketplace/redhat-marketplace-j5qxz" Dec 10 16:20:56 crc kubenswrapper[4755]: I1210 16:20:56.743814 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e681f66-5723-412a-bc78-b2d4f5131f50-catalog-content\") pod \"redhat-marketplace-j5qxz\" (UID: \"6e681f66-5723-412a-bc78-b2d4f5131f50\") " pod="openshift-marketplace/redhat-marketplace-j5qxz" Dec 10 16:20:56 crc kubenswrapper[4755]: I1210 16:20:56.744138 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e681f66-5723-412a-bc78-b2d4f5131f50-utilities\") pod \"redhat-marketplace-j5qxz\" (UID: \"6e681f66-5723-412a-bc78-b2d4f5131f50\") " pod="openshift-marketplace/redhat-marketplace-j5qxz" Dec 10 16:20:56 crc kubenswrapper[4755]: I1210 16:20:56.845501 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e681f66-5723-412a-bc78-b2d4f5131f50-utilities\") pod \"redhat-marketplace-j5qxz\" (UID: \"6e681f66-5723-412a-bc78-b2d4f5131f50\") " pod="openshift-marketplace/redhat-marketplace-j5qxz" Dec 10 16:20:56 crc kubenswrapper[4755]: I1210 16:20:56.845744 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwvv2\" (UniqueName: \"kubernetes.io/projected/6e681f66-5723-412a-bc78-b2d4f5131f50-kube-api-access-xwvv2\") pod \"redhat-marketplace-j5qxz\" (UID: \"6e681f66-5723-412a-bc78-b2d4f5131f50\") " pod="openshift-marketplace/redhat-marketplace-j5qxz" Dec 10 16:20:56 crc kubenswrapper[4755]: I1210 16:20:56.845780 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e681f66-5723-412a-bc78-b2d4f5131f50-catalog-content\") pod \"redhat-marketplace-j5qxz\" (UID: \"6e681f66-5723-412a-bc78-b2d4f5131f50\") " pod="openshift-marketplace/redhat-marketplace-j5qxz" Dec 10 16:20:56 crc kubenswrapper[4755]: I1210 16:20:56.847072 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e681f66-5723-412a-bc78-b2d4f5131f50-catalog-content\") pod \"redhat-marketplace-j5qxz\" (UID: \"6e681f66-5723-412a-bc78-b2d4f5131f50\") " pod="openshift-marketplace/redhat-marketplace-j5qxz" Dec 10 16:20:56 crc kubenswrapper[4755]: I1210 16:20:56.847584 4755 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e681f66-5723-412a-bc78-b2d4f5131f50-utilities\") pod \"redhat-marketplace-j5qxz\" (UID: \"6e681f66-5723-412a-bc78-b2d4f5131f50\") " pod="openshift-marketplace/redhat-marketplace-j5qxz" Dec 10 16:20:56 crc kubenswrapper[4755]: I1210 16:20:56.890226 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwvv2\" (UniqueName: \"kubernetes.io/projected/6e681f66-5723-412a-bc78-b2d4f5131f50-kube-api-access-xwvv2\") pod \"redhat-marketplace-j5qxz\" (UID: \"6e681f66-5723-412a-bc78-b2d4f5131f50\") " pod="openshift-marketplace/redhat-marketplace-j5qxz" Dec 10 16:20:56 crc kubenswrapper[4755]: I1210 16:20:56.964532 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j5qxz" Dec 10 16:20:57 crc kubenswrapper[4755]: I1210 16:20:57.460787 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j5qxz"] Dec 10 16:20:57 crc kubenswrapper[4755]: W1210 16:20:57.464072 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e681f66_5723_412a_bc78_b2d4f5131f50.slice/crio-cf98ec8692a90fbb0a8d470daa8ffa8cd7f1fb53483c4c98fddfc82a9721fdaf WatchSource:0}: Error finding container cf98ec8692a90fbb0a8d470daa8ffa8cd7f1fb53483c4c98fddfc82a9721fdaf: Status 404 returned error can't find the container with id cf98ec8692a90fbb0a8d470daa8ffa8cd7f1fb53483c4c98fddfc82a9721fdaf Dec 10 16:20:58 crc kubenswrapper[4755]: I1210 16:20:58.165460 4755 generic.go:334] "Generic (PLEG): container finished" podID="6e681f66-5723-412a-bc78-b2d4f5131f50" containerID="0d107282425eb2108e0768dbdf11a4e8e24f6613304b316dcb255d165c20e2d1" exitCode=0 Dec 10 16:20:58 crc kubenswrapper[4755]: I1210 16:20:58.165620 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j5qxz" event={"ID":"6e681f66-5723-412a-bc78-b2d4f5131f50","Type":"ContainerDied","Data":"0d107282425eb2108e0768dbdf11a4e8e24f6613304b316dcb255d165c20e2d1"} Dec 10 16:20:58 crc kubenswrapper[4755]: I1210 16:20:58.165870 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j5qxz" event={"ID":"6e681f66-5723-412a-bc78-b2d4f5131f50","Type":"ContainerStarted","Data":"cf98ec8692a90fbb0a8d470daa8ffa8cd7f1fb53483c4c98fddfc82a9721fdaf"} Dec 10 16:21:00 crc kubenswrapper[4755]: I1210 16:21:00.185161 4755 generic.go:334] "Generic (PLEG): container finished" podID="6e681f66-5723-412a-bc78-b2d4f5131f50" containerID="55f3743581bc44e7b69d16db0145b8d4c40d6e912672149829a49313979faf25" exitCode=0 Dec 10 16:21:00 crc kubenswrapper[4755]: I1210 16:21:00.185231 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j5qxz" event={"ID":"6e681f66-5723-412a-bc78-b2d4f5131f50","Type":"ContainerDied","Data":"55f3743581bc44e7b69d16db0145b8d4c40d6e912672149829a49313979faf25"} Dec 10 16:21:00 crc kubenswrapper[4755]: E1210 16:21:00.759229 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:21:02 crc kubenswrapper[4755]: I1210 16:21:02.212061 4755 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j5qxz" event={"ID":"6e681f66-5723-412a-bc78-b2d4f5131f50","Type":"ContainerStarted","Data":"b5576a45756c030ddb85a973a6534a8933c61e56d29c4daa824d7f29a4b1bfb7"} Dec 10 16:21:02 crc kubenswrapper[4755]: I1210 16:21:02.236349 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-j5qxz" podStartSLOduration=3.4167405779999998 podStartE2EDuration="6.236326231s" podCreationTimestamp="2025-12-10 16:20:56 +0000 UTC" firstStartedPulling="2025-12-10 16:20:58.169377738 +0000 UTC m=+3454.770261370" lastFinishedPulling="2025-12-10 16:21:00.988963381 +0000 UTC m=+3457.589847023" observedRunningTime="2025-12-10 16:21:02.226772889 +0000 UTC m=+3458.827656541" watchObservedRunningTime="2025-12-10 16:21:02.236326231 +0000 UTC m=+3458.837209863" Dec 10 16:21:04 crc kubenswrapper[4755]: E1210 16:21:04.759525 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:21:06 crc kubenswrapper[4755]: I1210 16:21:06.965417 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-j5qxz" Dec 10 16:21:06 crc kubenswrapper[4755]: I1210 16:21:06.965758 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-j5qxz" Dec 10 16:21:07 crc kubenswrapper[4755]: I1210 16:21:07.042511 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-j5qxz" Dec 10 16:21:07 crc kubenswrapper[4755]: I1210 16:21:07.311518 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-j5qxz" Dec 10 16:21:07 crc kubenswrapper[4755]: I1210 16:21:07.368162 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j5qxz"] Dec 10 16:21:09 crc kubenswrapper[4755]: I1210 16:21:09.277590 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-j5qxz" podUID="6e681f66-5723-412a-bc78-b2d4f5131f50" containerName="registry-server" containerID="cri-o://b5576a45756c030ddb85a973a6534a8933c61e56d29c4daa824d7f29a4b1bfb7" gracePeriod=2 Dec 10 16:21:09 crc kubenswrapper[4755]: I1210 16:21:09.804946 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j5qxz" Dec 10 16:21:09 crc kubenswrapper[4755]: I1210 16:21:09.885934 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e681f66-5723-412a-bc78-b2d4f5131f50-catalog-content\") pod \"6e681f66-5723-412a-bc78-b2d4f5131f50\" (UID: \"6e681f66-5723-412a-bc78-b2d4f5131f50\") " Dec 10 16:21:09 crc kubenswrapper[4755]: I1210 16:21:09.886123 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e681f66-5723-412a-bc78-b2d4f5131f50-utilities\") pod \"6e681f66-5723-412a-bc78-b2d4f5131f50\" (UID: \"6e681f66-5723-412a-bc78-b2d4f5131f50\") " Dec 10 16:21:09 crc kubenswrapper[4755]: I1210 16:21:09.886254 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwvv2\" (UniqueName: \"kubernetes.io/projected/6e681f66-5723-412a-bc78-b2d4f5131f50-kube-api-access-xwvv2\") pod \"6e681f66-5723-412a-bc78-b2d4f5131f50\" (UID: \"6e681f66-5723-412a-bc78-b2d4f5131f50\") " Dec 10 16:21:09 crc kubenswrapper[4755]: I1210 16:21:09.888272 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e681f66-5723-412a-bc78-b2d4f5131f50-utilities" (OuterVolumeSpecName: "utilities") pod "6e681f66-5723-412a-bc78-b2d4f5131f50" (UID: "6e681f66-5723-412a-bc78-b2d4f5131f50"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 16:21:09 crc kubenswrapper[4755]: I1210 16:21:09.893135 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e681f66-5723-412a-bc78-b2d4f5131f50-kube-api-access-xwvv2" (OuterVolumeSpecName: "kube-api-access-xwvv2") pod "6e681f66-5723-412a-bc78-b2d4f5131f50" (UID: "6e681f66-5723-412a-bc78-b2d4f5131f50"). InnerVolumeSpecName "kube-api-access-xwvv2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 16:21:09 crc kubenswrapper[4755]: I1210 16:21:09.909150 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e681f66-5723-412a-bc78-b2d4f5131f50-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6e681f66-5723-412a-bc78-b2d4f5131f50" (UID: "6e681f66-5723-412a-bc78-b2d4f5131f50"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 16:21:09 crc kubenswrapper[4755]: I1210 16:21:09.988791 4755 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e681f66-5723-412a-bc78-b2d4f5131f50-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 16:21:09 crc kubenswrapper[4755]: I1210 16:21:09.989184 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwvv2\" (UniqueName: \"kubernetes.io/projected/6e681f66-5723-412a-bc78-b2d4f5131f50-kube-api-access-xwvv2\") on node \"crc\" DevicePath \"\"" Dec 10 16:21:09 crc kubenswrapper[4755]: I1210 16:21:09.989200 4755 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e681f66-5723-412a-bc78-b2d4f5131f50-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 16:21:10 crc kubenswrapper[4755]: I1210 16:21:10.289282 4755 generic.go:334] "Generic (PLEG): container finished" podID="6e681f66-5723-412a-bc78-b2d4f5131f50" containerID="b5576a45756c030ddb85a973a6534a8933c61e56d29c4daa824d7f29a4b1bfb7" exitCode=0 Dec 10 16:21:10 crc kubenswrapper[4755]: I1210 16:21:10.289333 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j5qxz" event={"ID":"6e681f66-5723-412a-bc78-b2d4f5131f50","Type":"ContainerDied","Data":"b5576a45756c030ddb85a973a6534a8933c61e56d29c4daa824d7f29a4b1bfb7"} Dec 10 16:21:10 crc kubenswrapper[4755]: I1210 16:21:10.289363 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j5qxz" event={"ID":"6e681f66-5723-412a-bc78-b2d4f5131f50","Type":"ContainerDied","Data":"cf98ec8692a90fbb0a8d470daa8ffa8cd7f1fb53483c4c98fddfc82a9721fdaf"} Dec 10 16:21:10 crc kubenswrapper[4755]: I1210 16:21:10.289398 4755 scope.go:117] "RemoveContainer" containerID="b5576a45756c030ddb85a973a6534a8933c61e56d29c4daa824d7f29a4b1bfb7" Dec 10 16:21:10 crc kubenswrapper[4755]: I1210 16:21:10.289337 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j5qxz" Dec 10 16:21:10 crc kubenswrapper[4755]: I1210 16:21:10.315283 4755 scope.go:117] "RemoveContainer" containerID="55f3743581bc44e7b69d16db0145b8d4c40d6e912672149829a49313979faf25" Dec 10 16:21:10 crc kubenswrapper[4755]: I1210 16:21:10.332584 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j5qxz"] Dec 10 16:21:10 crc kubenswrapper[4755]: I1210 16:21:10.343405 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-j5qxz"] Dec 10 16:21:10 crc kubenswrapper[4755]: I1210 16:21:10.348221 4755 scope.go:117] "RemoveContainer" containerID="0d107282425eb2108e0768dbdf11a4e8e24f6613304b316dcb255d165c20e2d1" Dec 10 16:21:10 crc kubenswrapper[4755]: I1210 16:21:10.359286 4755 patch_prober.go:28] interesting pod/machine-config-daemon-ggt8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 16:21:10 crc kubenswrapper[4755]: I1210 16:21:10.359359 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 16:21:10 crc kubenswrapper[4755]: I1210 16:21:10.394886 4755 scope.go:117] "RemoveContainer" containerID="b5576a45756c030ddb85a973a6534a8933c61e56d29c4daa824d7f29a4b1bfb7" Dec 10 16:21:10 crc kubenswrapper[4755]: E1210 16:21:10.395398 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5576a45756c030ddb85a973a6534a8933c61e56d29c4daa824d7f29a4b1bfb7\": container with ID starting with b5576a45756c030ddb85a973a6534a8933c61e56d29c4daa824d7f29a4b1bfb7 not found: ID does not exist" containerID="b5576a45756c030ddb85a973a6534a8933c61e56d29c4daa824d7f29a4b1bfb7" Dec 10 16:21:10 crc kubenswrapper[4755]: I1210 16:21:10.395439 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5576a45756c030ddb85a973a6534a8933c61e56d29c4daa824d7f29a4b1bfb7"} err="failed to get container status \"b5576a45756c030ddb85a973a6534a8933c61e56d29c4daa824d7f29a4b1bfb7\": rpc error: code = NotFound desc = could not find container \"b5576a45756c030ddb85a973a6534a8933c61e56d29c4daa824d7f29a4b1bfb7\": container with ID starting with b5576a45756c030ddb85a973a6534a8933c61e56d29c4daa824d7f29a4b1bfb7 not found: ID does not exist" Dec 10 16:21:10 crc kubenswrapper[4755]: I1210 16:21:10.395489 4755 scope.go:117] "RemoveContainer" containerID="55f3743581bc44e7b69d16db0145b8d4c40d6e912672149829a49313979faf25" Dec 10 16:21:10 crc kubenswrapper[4755]: E1210 16:21:10.395813 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55f3743581bc44e7b69d16db0145b8d4c40d6e912672149829a49313979faf25\": container with ID starting with 55f3743581bc44e7b69d16db0145b8d4c40d6e912672149829a49313979faf25 not found: ID does not exist" containerID="55f3743581bc44e7b69d16db0145b8d4c40d6e912672149829a49313979faf25" Dec 10 16:21:10 crc kubenswrapper[4755]: I1210 16:21:10.395853 4755 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"55f3743581bc44e7b69d16db0145b8d4c40d6e912672149829a49313979faf25"} err="failed to get container status \"55f3743581bc44e7b69d16db0145b8d4c40d6e912672149829a49313979faf25\": rpc error: code = NotFound desc = could not find container \"55f3743581bc44e7b69d16db0145b8d4c40d6e912672149829a49313979faf25\": container with ID starting with 55f3743581bc44e7b69d16db0145b8d4c40d6e912672149829a49313979faf25 not found: ID does not exist" Dec 10 16:21:10 crc kubenswrapper[4755]: I1210 16:21:10.395875 4755 scope.go:117] "RemoveContainer" containerID="0d107282425eb2108e0768dbdf11a4e8e24f6613304b316dcb255d165c20e2d1" Dec 10 16:21:10 crc kubenswrapper[4755]: E1210 16:21:10.396096 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d107282425eb2108e0768dbdf11a4e8e24f6613304b316dcb255d165c20e2d1\": container with ID starting with 0d107282425eb2108e0768dbdf11a4e8e24f6613304b316dcb255d165c20e2d1 not found: ID does not exist" containerID="0d107282425eb2108e0768dbdf11a4e8e24f6613304b316dcb255d165c20e2d1" Dec 10 16:21:10 crc kubenswrapper[4755]: I1210 16:21:10.396120 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d107282425eb2108e0768dbdf11a4e8e24f6613304b316dcb255d165c20e2d1"} err="failed to get container status \"0d107282425eb2108e0768dbdf11a4e8e24f6613304b316dcb255d165c20e2d1\": rpc error: code = NotFound desc = could not find container \"0d107282425eb2108e0768dbdf11a4e8e24f6613304b316dcb255d165c20e2d1\": container with ID starting with 0d107282425eb2108e0768dbdf11a4e8e24f6613304b316dcb255d165c20e2d1 not found: ID does not exist" Dec 10 16:21:11 crc kubenswrapper[4755]: I1210 16:21:11.772588 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e681f66-5723-412a-bc78-b2d4f5131f50" path="/var/lib/kubelet/pods/6e681f66-5723-412a-bc78-b2d4f5131f50/volumes" Dec 10 16:21:14 crc kubenswrapper[4755]: E1210 16:21:14.760883 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:21:19 crc kubenswrapper[4755]: E1210 16:21:19.759654 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:21:28 crc kubenswrapper[4755]: E1210 16:21:28.759860 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:21:30 crc kubenswrapper[4755]: E1210 16:21:30.759627 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" 
podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:21:40 crc kubenswrapper[4755]: I1210 16:21:40.359361 4755 patch_prober.go:28] interesting pod/machine-config-daemon-ggt8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 16:21:40 crc kubenswrapper[4755]: I1210 16:21:40.359991 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 16:21:40 crc kubenswrapper[4755]: I1210 16:21:40.360049 4755 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" Dec 10 16:21:40 crc kubenswrapper[4755]: I1210 16:21:40.360933 4755 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5b27a5e0503cafb735ea7d1e2d88d5085b602a7e219184192e9541ad489864c6"} pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 10 16:21:40 crc kubenswrapper[4755]: I1210 16:21:40.360994 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" containerName="machine-config-daemon" containerID="cri-o://5b27a5e0503cafb735ea7d1e2d88d5085b602a7e219184192e9541ad489864c6" gracePeriod=600 Dec 10 16:21:40 crc kubenswrapper[4755]: I1210 16:21:40.586660 4755 generic.go:334] "Generic (PLEG): container finished" podID="b132a8b9-1c99-414d-8773-229bf36b305d" containerID="5b27a5e0503cafb735ea7d1e2d88d5085b602a7e219184192e9541ad489864c6" exitCode=0 Dec 10 16:21:40 crc kubenswrapper[4755]: I1210 16:21:40.586960 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" event={"ID":"b132a8b9-1c99-414d-8773-229bf36b305d","Type":"ContainerDied","Data":"5b27a5e0503cafb735ea7d1e2d88d5085b602a7e219184192e9541ad489864c6"} Dec 10 16:21:40 crc kubenswrapper[4755]: I1210 16:21:40.587078 4755 scope.go:117] "RemoveContainer" containerID="30d53fb1dd018f11a561066e37ed5aa32e4d43a5cd91c13627ef984f5187f3fd" Dec 10 16:21:40 crc kubenswrapper[4755]: E1210 16:21:40.759322 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:21:41 crc kubenswrapper[4755]: I1210 16:21:41.598785 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" event={"ID":"b132a8b9-1c99-414d-8773-229bf36b305d","Type":"ContainerStarted","Data":"3a32840bbc4b33bf990552b48c16e0d53e66710a5880db985795b1072a3ba36c"} Dec 10 16:21:42 crc kubenswrapper[4755]: E1210 16:21:42.759700 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:21:51 crc kubenswrapper[4755]: E1210 16:21:51.760193 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:21:52 crc kubenswrapper[4755]: I1210 16:21:52.468317 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qqsbn"] Dec 10 16:21:52 crc kubenswrapper[4755]: E1210 16:21:52.469082 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e681f66-5723-412a-bc78-b2d4f5131f50" containerName="extract-content" Dec 10 16:21:52 crc kubenswrapper[4755]: I1210 16:21:52.469100 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e681f66-5723-412a-bc78-b2d4f5131f50" containerName="extract-content" Dec 10 16:21:52 crc kubenswrapper[4755]: E1210 16:21:52.469127 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e681f66-5723-412a-bc78-b2d4f5131f50" containerName="extract-utilities" Dec 10 16:21:52 crc kubenswrapper[4755]: I1210 16:21:52.469134 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e681f66-5723-412a-bc78-b2d4f5131f50" containerName="extract-utilities" Dec 10 16:21:52 crc kubenswrapper[4755]: E1210 16:21:52.469156 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e681f66-5723-412a-bc78-b2d4f5131f50" containerName="registry-server" Dec 10 16:21:52 crc kubenswrapper[4755]: I1210 16:21:52.469162 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e681f66-5723-412a-bc78-b2d4f5131f50" containerName="registry-server" Dec 10 16:21:52 crc kubenswrapper[4755]: I1210 16:21:52.469368 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e681f66-5723-412a-bc78-b2d4f5131f50" containerName="registry-server" Dec 10 16:21:52 crc kubenswrapper[4755]: I1210 16:21:52.477809 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qqsbn" Dec 10 16:21:52 crc kubenswrapper[4755]: I1210 16:21:52.485158 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qqsbn"] Dec 10 16:21:52 crc kubenswrapper[4755]: I1210 16:21:52.583564 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdd666e2-9e8d-42d2-af08-119d8aeea143-utilities\") pod \"certified-operators-qqsbn\" (UID: \"bdd666e2-9e8d-42d2-af08-119d8aeea143\") " pod="openshift-marketplace/certified-operators-qqsbn" Dec 10 16:21:52 crc kubenswrapper[4755]: I1210 16:21:52.583631 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdd666e2-9e8d-42d2-af08-119d8aeea143-catalog-content\") pod \"certified-operators-qqsbn\" (UID: \"bdd666e2-9e8d-42d2-af08-119d8aeea143\") " pod="openshift-marketplace/certified-operators-qqsbn" Dec 10 16:21:52 crc kubenswrapper[4755]: I1210 16:21:52.583727 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4s8tj\" (UniqueName: \"kubernetes.io/projected/bdd666e2-9e8d-42d2-af08-119d8aeea143-kube-api-access-4s8tj\") pod \"certified-operators-qqsbn\" (UID: \"bdd666e2-9e8d-42d2-af08-119d8aeea143\") " pod="openshift-marketplace/certified-operators-qqsbn" Dec 10 16:21:52 crc kubenswrapper[4755]: I1210 16:21:52.685726 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdd666e2-9e8d-42d2-af08-119d8aeea143-utilities\") pod \"certified-operators-qqsbn\" (UID: \"bdd666e2-9e8d-42d2-af08-119d8aeea143\") " pod="openshift-marketplace/certified-operators-qqsbn" Dec 10 16:21:52 crc kubenswrapper[4755]: I1210 16:21:52.686010 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdd666e2-9e8d-42d2-af08-119d8aeea143-catalog-content\") pod \"certified-operators-qqsbn\" (UID: \"bdd666e2-9e8d-42d2-af08-119d8aeea143\") " pod="openshift-marketplace/certified-operators-qqsbn" Dec 10 16:21:52 crc kubenswrapper[4755]: I1210 16:21:52.686159 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4s8tj\" (UniqueName: \"kubernetes.io/projected/bdd666e2-9e8d-42d2-af08-119d8aeea143-kube-api-access-4s8tj\") pod \"certified-operators-qqsbn\" (UID: \"bdd666e2-9e8d-42d2-af08-119d8aeea143\") " pod="openshift-marketplace/certified-operators-qqsbn" Dec 10 16:21:52 crc kubenswrapper[4755]: I1210 16:21:52.686305 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdd666e2-9e8d-42d2-af08-119d8aeea143-utilities\") pod \"certified-operators-qqsbn\" (UID: \"bdd666e2-9e8d-42d2-af08-119d8aeea143\") " pod="openshift-marketplace/certified-operators-qqsbn" Dec 10 16:21:52 crc kubenswrapper[4755]: I1210 16:21:52.686498 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdd666e2-9e8d-42d2-af08-119d8aeea143-catalog-content\") pod \"certified-operators-qqsbn\" (UID: \"bdd666e2-9e8d-42d2-af08-119d8aeea143\") " pod="openshift-marketplace/certified-operators-qqsbn" Dec 10 16:21:52 crc kubenswrapper[4755]: I1210 16:21:52.707708 4755 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-4s8tj\" (UniqueName: \"kubernetes.io/projected/bdd666e2-9e8d-42d2-af08-119d8aeea143-kube-api-access-4s8tj\") pod \"certified-operators-qqsbn\" (UID: \"bdd666e2-9e8d-42d2-af08-119d8aeea143\") " pod="openshift-marketplace/certified-operators-qqsbn" Dec 10 16:21:52 crc kubenswrapper[4755]: I1210 16:21:52.815249 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qqsbn" Dec 10 16:21:53 crc kubenswrapper[4755]: I1210 16:21:53.454411 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qqsbn"] Dec 10 16:21:53 crc kubenswrapper[4755]: I1210 16:21:53.714803 4755 generic.go:334] "Generic (PLEG): container finished" podID="bdd666e2-9e8d-42d2-af08-119d8aeea143" containerID="335fd75a3282bc9dcbeb2ff3688e1c4f01dbc0639a289f42a125f3ce3859a24e" exitCode=0 Dec 10 16:21:53 crc kubenswrapper[4755]: I1210 16:21:53.714851 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qqsbn" event={"ID":"bdd666e2-9e8d-42d2-af08-119d8aeea143","Type":"ContainerDied","Data":"335fd75a3282bc9dcbeb2ff3688e1c4f01dbc0639a289f42a125f3ce3859a24e"} Dec 10 16:21:53 crc kubenswrapper[4755]: I1210 16:21:53.714878 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qqsbn" event={"ID":"bdd666e2-9e8d-42d2-af08-119d8aeea143","Type":"ContainerStarted","Data":"c572f2b61662bb882ec7e3585e46b2d1faebfb409f44c4c3cb83cedbfa42bd15"} Dec 10 16:21:53 crc kubenswrapper[4755]: E1210 16:21:53.766535 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:21:55 crc kubenswrapper[4755]: I1210 16:21:55.734409 4755 generic.go:334] "Generic (PLEG): container finished" podID="bdd666e2-9e8d-42d2-af08-119d8aeea143" containerID="644c63e7b0d7ac4e8dfef3aa48ac2b6c5b5b4f3a22639396edf1c472f175ef85" exitCode=0 Dec 10 16:21:55 crc kubenswrapper[4755]: I1210 16:21:55.734455 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qqsbn" event={"ID":"bdd666e2-9e8d-42d2-af08-119d8aeea143","Type":"ContainerDied","Data":"644c63e7b0d7ac4e8dfef3aa48ac2b6c5b5b4f3a22639396edf1c472f175ef85"} Dec 10 16:21:56 crc kubenswrapper[4755]: I1210 16:21:56.752391 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qqsbn" event={"ID":"bdd666e2-9e8d-42d2-af08-119d8aeea143","Type":"ContainerStarted","Data":"5d21f39b89a155caa97703bb88197ce1705533f6dfe9c27b60140740dee08750"} Dec 10 16:21:56 crc kubenswrapper[4755]: I1210 16:21:56.776857 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qqsbn" podStartSLOduration=2.1672722540000002 podStartE2EDuration="4.776808926s" podCreationTimestamp="2025-12-10 16:21:52 +0000 UTC" firstStartedPulling="2025-12-10 16:21:53.71638207 +0000 UTC m=+3510.317265702" lastFinishedPulling="2025-12-10 16:21:56.325918742 +0000 UTC m=+3512.926802374" observedRunningTime="2025-12-10 16:21:56.774700639 +0000 UTC m=+3513.375584281" watchObservedRunningTime="2025-12-10 16:21:56.776808926 +0000 UTC m=+3513.377692558" Dec 10 16:22:02 crc 
kubenswrapper[4755]: I1210 16:22:02.816319 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qqsbn" Dec 10 16:22:02 crc kubenswrapper[4755]: I1210 16:22:02.816928 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qqsbn" Dec 10 16:22:02 crc kubenswrapper[4755]: I1210 16:22:02.862986 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qqsbn" Dec 10 16:22:03 crc kubenswrapper[4755]: I1210 16:22:03.858175 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qqsbn" Dec 10 16:22:03 crc kubenswrapper[4755]: I1210 16:22:03.908434 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qqsbn"] Dec 10 16:22:05 crc kubenswrapper[4755]: E1210 16:22:05.760191 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:22:05 crc kubenswrapper[4755]: I1210 16:22:05.834113 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qqsbn" podUID="bdd666e2-9e8d-42d2-af08-119d8aeea143" containerName="registry-server" containerID="cri-o://5d21f39b89a155caa97703bb88197ce1705533f6dfe9c27b60140740dee08750" gracePeriod=2 Dec 10 16:22:06 crc kubenswrapper[4755]: I1210 16:22:06.688685 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qqsbn" Dec 10 16:22:06 crc kubenswrapper[4755]: E1210 16:22:06.765900 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:22:06 crc kubenswrapper[4755]: I1210 16:22:06.845941 4755 generic.go:334] "Generic (PLEG): container finished" podID="bdd666e2-9e8d-42d2-af08-119d8aeea143" containerID="5d21f39b89a155caa97703bb88197ce1705533f6dfe9c27b60140740dee08750" exitCode=0 Dec 10 16:22:06 crc kubenswrapper[4755]: I1210 16:22:06.845999 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qqsbn" event={"ID":"bdd666e2-9e8d-42d2-af08-119d8aeea143","Type":"ContainerDied","Data":"5d21f39b89a155caa97703bb88197ce1705533f6dfe9c27b60140740dee08750"} Dec 10 16:22:06 crc kubenswrapper[4755]: I1210 16:22:06.846027 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qqsbn" event={"ID":"bdd666e2-9e8d-42d2-af08-119d8aeea143","Type":"ContainerDied","Data":"c572f2b61662bb882ec7e3585e46b2d1faebfb409f44c4c3cb83cedbfa42bd15"} Dec 10 16:22:06 crc kubenswrapper[4755]: I1210 16:22:06.846048 4755 scope.go:117] "RemoveContainer" containerID="5d21f39b89a155caa97703bb88197ce1705533f6dfe9c27b60140740dee08750" Dec 10 16:22:06 crc kubenswrapper[4755]: I1210 16:22:06.846071 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qqsbn" Dec 10 16:22:06 crc kubenswrapper[4755]: I1210 16:22:06.871073 4755 scope.go:117] "RemoveContainer" containerID="644c63e7b0d7ac4e8dfef3aa48ac2b6c5b5b4f3a22639396edf1c472f175ef85" Dec 10 16:22:06 crc kubenswrapper[4755]: I1210 16:22:06.882293 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdd666e2-9e8d-42d2-af08-119d8aeea143-utilities\") pod \"bdd666e2-9e8d-42d2-af08-119d8aeea143\" (UID: \"bdd666e2-9e8d-42d2-af08-119d8aeea143\") " Dec 10 16:22:06 crc kubenswrapper[4755]: I1210 16:22:06.882424 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4s8tj\" (UniqueName: \"kubernetes.io/projected/bdd666e2-9e8d-42d2-af08-119d8aeea143-kube-api-access-4s8tj\") pod \"bdd666e2-9e8d-42d2-af08-119d8aeea143\" (UID: \"bdd666e2-9e8d-42d2-af08-119d8aeea143\") " Dec 10 16:22:06 crc kubenswrapper[4755]: I1210 16:22:06.882510 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdd666e2-9e8d-42d2-af08-119d8aeea143-catalog-content\") pod \"bdd666e2-9e8d-42d2-af08-119d8aeea143\" (UID: \"bdd666e2-9e8d-42d2-af08-119d8aeea143\") " Dec 10 16:22:06 crc kubenswrapper[4755]: I1210 16:22:06.883567 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdd666e2-9e8d-42d2-af08-119d8aeea143-utilities" (OuterVolumeSpecName: "utilities") pod "bdd666e2-9e8d-42d2-af08-119d8aeea143" (UID: "bdd666e2-9e8d-42d2-af08-119d8aeea143"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 16:22:06 crc kubenswrapper[4755]: I1210 16:22:06.890730 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdd666e2-9e8d-42d2-af08-119d8aeea143-kube-api-access-4s8tj" (OuterVolumeSpecName: "kube-api-access-4s8tj") pod "bdd666e2-9e8d-42d2-af08-119d8aeea143" (UID: "bdd666e2-9e8d-42d2-af08-119d8aeea143"). InnerVolumeSpecName "kube-api-access-4s8tj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 16:22:06 crc kubenswrapper[4755]: I1210 16:22:06.909755 4755 scope.go:117] "RemoveContainer" containerID="335fd75a3282bc9dcbeb2ff3688e1c4f01dbc0639a289f42a125f3ce3859a24e" Dec 10 16:22:06 crc kubenswrapper[4755]: I1210 16:22:06.952069 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdd666e2-9e8d-42d2-af08-119d8aeea143-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bdd666e2-9e8d-42d2-af08-119d8aeea143" (UID: "bdd666e2-9e8d-42d2-af08-119d8aeea143"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 16:22:06 crc kubenswrapper[4755]: I1210 16:22:06.984667 4755 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdd666e2-9e8d-42d2-af08-119d8aeea143-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 16:22:06 crc kubenswrapper[4755]: I1210 16:22:06.984695 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4s8tj\" (UniqueName: \"kubernetes.io/projected/bdd666e2-9e8d-42d2-af08-119d8aeea143-kube-api-access-4s8tj\") on node \"crc\" DevicePath \"\"" Dec 10 16:22:06 crc kubenswrapper[4755]: I1210 16:22:06.984705 4755 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdd666e2-9e8d-42d2-af08-119d8aeea143-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 16:22:07 crc kubenswrapper[4755]: I1210 16:22:07.016147 4755 scope.go:117] "RemoveContainer" containerID="5d21f39b89a155caa97703bb88197ce1705533f6dfe9c27b60140740dee08750" Dec 10 16:22:07 crc kubenswrapper[4755]: E1210 16:22:07.016589 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d21f39b89a155caa97703bb88197ce1705533f6dfe9c27b60140740dee08750\": container with ID starting with 5d21f39b89a155caa97703bb88197ce1705533f6dfe9c27b60140740dee08750 not found: ID does not exist" containerID="5d21f39b89a155caa97703bb88197ce1705533f6dfe9c27b60140740dee08750" Dec 10 16:22:07 crc kubenswrapper[4755]: I1210 16:22:07.016619 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d21f39b89a155caa97703bb88197ce1705533f6dfe9c27b60140740dee08750"} err="failed to get container status \"5d21f39b89a155caa97703bb88197ce1705533f6dfe9c27b60140740dee08750\": rpc error: code = NotFound desc = could not find container \"5d21f39b89a155caa97703bb88197ce1705533f6dfe9c27b60140740dee08750\": container with ID starting with 5d21f39b89a155caa97703bb88197ce1705533f6dfe9c27b60140740dee08750 not found: ID does not exist" Dec 10 16:22:07 crc kubenswrapper[4755]: I1210 16:22:07.016640 4755 scope.go:117] "RemoveContainer" containerID="644c63e7b0d7ac4e8dfef3aa48ac2b6c5b5b4f3a22639396edf1c472f175ef85" Dec 10 16:22:07 crc kubenswrapper[4755]: E1210 16:22:07.016964 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"644c63e7b0d7ac4e8dfef3aa48ac2b6c5b5b4f3a22639396edf1c472f175ef85\": container with ID starting with 644c63e7b0d7ac4e8dfef3aa48ac2b6c5b5b4f3a22639396edf1c472f175ef85 not found: ID does not exist" containerID="644c63e7b0d7ac4e8dfef3aa48ac2b6c5b5b4f3a22639396edf1c472f175ef85" Dec 10 16:22:07 crc kubenswrapper[4755]: I1210 16:22:07.016983 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"644c63e7b0d7ac4e8dfef3aa48ac2b6c5b5b4f3a22639396edf1c472f175ef85"} err="failed to get container status \"644c63e7b0d7ac4e8dfef3aa48ac2b6c5b5b4f3a22639396edf1c472f175ef85\": rpc error: code = NotFound desc = could not find container \"644c63e7b0d7ac4e8dfef3aa48ac2b6c5b5b4f3a22639396edf1c472f175ef85\": container with ID starting with 644c63e7b0d7ac4e8dfef3aa48ac2b6c5b5b4f3a22639396edf1c472f175ef85 not found: ID does not exist" Dec 10 16:22:07 crc kubenswrapper[4755]: I1210 16:22:07.016993 4755 scope.go:117] "RemoveContainer" containerID="335fd75a3282bc9dcbeb2ff3688e1c4f01dbc0639a289f42a125f3ce3859a24e" Dec 10 16:22:07 crc 
kubenswrapper[4755]: E1210 16:22:07.017231 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"335fd75a3282bc9dcbeb2ff3688e1c4f01dbc0639a289f42a125f3ce3859a24e\": container with ID starting with 335fd75a3282bc9dcbeb2ff3688e1c4f01dbc0639a289f42a125f3ce3859a24e not found: ID does not exist" containerID="335fd75a3282bc9dcbeb2ff3688e1c4f01dbc0639a289f42a125f3ce3859a24e" Dec 10 16:22:07 crc kubenswrapper[4755]: I1210 16:22:07.017250 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"335fd75a3282bc9dcbeb2ff3688e1c4f01dbc0639a289f42a125f3ce3859a24e"} err="failed to get container status \"335fd75a3282bc9dcbeb2ff3688e1c4f01dbc0639a289f42a125f3ce3859a24e\": rpc error: code = NotFound desc = could not find container \"335fd75a3282bc9dcbeb2ff3688e1c4f01dbc0639a289f42a125f3ce3859a24e\": container with ID starting with 335fd75a3282bc9dcbeb2ff3688e1c4f01dbc0639a289f42a125f3ce3859a24e not found: ID does not exist" Dec 10 16:22:07 crc kubenswrapper[4755]: I1210 16:22:07.186658 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qqsbn"] Dec 10 16:22:07 crc kubenswrapper[4755]: I1210 16:22:07.198136 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qqsbn"] Dec 10 16:22:07 crc kubenswrapper[4755]: I1210 16:22:07.770757 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdd666e2-9e8d-42d2-af08-119d8aeea143" path="/var/lib/kubelet/pods/bdd666e2-9e8d-42d2-af08-119d8aeea143/volumes" Dec 10 16:22:16 crc kubenswrapper[4755]: E1210 16:22:16.761149 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:22:19 crc kubenswrapper[4755]: E1210 16:22:19.759604 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:22:29 crc kubenswrapper[4755]: E1210 16:22:29.764587 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:22:34 crc kubenswrapper[4755]: E1210 16:22:34.760584 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:22:43 crc kubenswrapper[4755]: E1210 16:22:43.760828 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:22:46 crc kubenswrapper[4755]: E1210 16:22:46.771402 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:22:55 crc kubenswrapper[4755]: E1210 16:22:55.759516 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:22:58 crc kubenswrapper[4755]: E1210 16:22:58.759166 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:23:09 crc kubenswrapper[4755]: E1210 16:23:09.760190 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:23:12 crc kubenswrapper[4755]: E1210 16:23:12.762815 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:23:23 crc kubenswrapper[4755]: E1210 16:23:23.777230 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:23:24 crc kubenswrapper[4755]: E1210 16:23:24.760412 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:23:36 crc kubenswrapper[4755]: E1210 16:23:36.759255 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:23:37 crc kubenswrapper[4755]: E1210 16:23:37.759415 4755 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:23:40 crc kubenswrapper[4755]: I1210 16:23:40.359591 4755 patch_prober.go:28] interesting pod/machine-config-daemon-ggt8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 16:23:40 crc kubenswrapper[4755]: I1210 16:23:40.360823 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 16:23:50 crc kubenswrapper[4755]: E1210 16:23:50.760051 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:23:51 crc kubenswrapper[4755]: E1210 16:23:51.760817 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:24:01 crc kubenswrapper[4755]: E1210 16:24:01.760543 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:24:04 crc kubenswrapper[4755]: E1210 16:24:04.759651 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:24:10 crc kubenswrapper[4755]: I1210 16:24:10.358750 4755 patch_prober.go:28] interesting pod/machine-config-daemon-ggt8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 16:24:10 crc kubenswrapper[4755]: I1210 16:24:10.359257 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 16:24:14 crc kubenswrapper[4755]: E1210 16:24:14.760127 4755 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:24:16 crc kubenswrapper[4755]: E1210 16:24:16.775974 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:24:28 crc kubenswrapper[4755]: E1210 16:24:28.888670 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 10 16:24:28 crc kubenswrapper[4755]: E1210 16:24:28.889175 4755 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 10 16:24:28 crc kubenswrapper[4755]: E1210 16:24:28.889321 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mz4t5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-jfc28_openstack(998863b6-4f48-4c8b-8011-a40377686b99): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 10 16:24:28 crc kubenswrapper[4755]: E1210 16:24:28.890508 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:24:30 crc kubenswrapper[4755]: E1210 16:24:30.760244 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:24:39 crc kubenswrapper[4755]: E1210 16:24:39.762236 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:24:40 crc kubenswrapper[4755]: I1210 16:24:40.359379 4755 patch_prober.go:28] interesting pod/machine-config-daemon-ggt8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 16:24:40 crc kubenswrapper[4755]: I1210 16:24:40.359492 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 16:24:40 crc kubenswrapper[4755]: I1210 16:24:40.359538 4755 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" Dec 10 16:24:40 crc kubenswrapper[4755]: I1210 16:24:40.360391 4755 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3a32840bbc4b33bf990552b48c16e0d53e66710a5880db985795b1072a3ba36c"} pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 10 16:24:40 crc kubenswrapper[4755]: I1210 16:24:40.360493 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" containerName="machine-config-daemon" containerID="cri-o://3a32840bbc4b33bf990552b48c16e0d53e66710a5880db985795b1072a3ba36c" gracePeriod=600 Dec 10 16:24:40 crc kubenswrapper[4755]: E1210 16:24:40.482410 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:24:41 crc kubenswrapper[4755]: I1210 16:24:41.350793 4755 generic.go:334] "Generic (PLEG): container finished" podID="b132a8b9-1c99-414d-8773-229bf36b305d" containerID="3a32840bbc4b33bf990552b48c16e0d53e66710a5880db985795b1072a3ba36c" exitCode=0 Dec 10 16:24:41 crc kubenswrapper[4755]: I1210 16:24:41.350871 4755 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" event={"ID":"b132a8b9-1c99-414d-8773-229bf36b305d","Type":"ContainerDied","Data":"3a32840bbc4b33bf990552b48c16e0d53e66710a5880db985795b1072a3ba36c"} Dec 10 16:24:41 crc kubenswrapper[4755]: I1210 16:24:41.351172 4755 scope.go:117] "RemoveContainer" containerID="5b27a5e0503cafb735ea7d1e2d88d5085b602a7e219184192e9541ad489864c6" Dec 10 16:24:41 crc kubenswrapper[4755]: I1210 16:24:41.351919 4755 scope.go:117] "RemoveContainer" containerID="3a32840bbc4b33bf990552b48c16e0d53e66710a5880db985795b1072a3ba36c" Dec 10 16:24:41 crc kubenswrapper[4755]: E1210 16:24:41.352270 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:24:43 crc kubenswrapper[4755]: E1210 16:24:43.794933 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:24:51 crc kubenswrapper[4755]: I1210 16:24:51.757750 4755 scope.go:117] "RemoveContainer" containerID="3a32840bbc4b33bf990552b48c16e0d53e66710a5880db985795b1072a3ba36c" Dec 10 16:24:51 crc kubenswrapper[4755]: E1210 16:24:51.758726 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:24:52 crc kubenswrapper[4755]: E1210 16:24:52.759698 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:24:59 crc kubenswrapper[4755]: I1210 16:24:59.760015 4755 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 10 16:24:59 crc kubenswrapper[4755]: E1210 16:24:59.898533 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 10 16:24:59 crc kubenswrapper[4755]: E1210 16:24:59.898609 4755 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 10 16:24:59 crc kubenswrapper[4755]: E1210 16:24:59.898750 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5d4h5b7hfbh5ddh688h9ch55bh7chf6h5ddh68ch94h69h5c5h596h59bh569hfchc4h676hcbh64dhdbh57fh75h5c9h98h59ch679h566h77h9cq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hw9gj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(6d104bea-ecdc-4fe1-9861-fb1a19fce845): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Dec 10 16:24:59 crc kubenswrapper[4755]: E1210 16:24:59.899938 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:25:06 crc kubenswrapper[4755]: I1210 16:25:06.757696 4755 scope.go:117] "RemoveContainer" containerID="3a32840bbc4b33bf990552b48c16e0d53e66710a5880db985795b1072a3ba36c" Dec 10 16:25:06 crc kubenswrapper[4755]: E1210 16:25:06.758578 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:25:07 crc kubenswrapper[4755]: E1210 16:25:07.760057 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:25:14 crc kubenswrapper[4755]: E1210 16:25:14.760592 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:25:18 crc kubenswrapper[4755]: I1210 16:25:18.758217 4755 scope.go:117] "RemoveContainer" containerID="3a32840bbc4b33bf990552b48c16e0d53e66710a5880db985795b1072a3ba36c" Dec 10 16:25:18 crc kubenswrapper[4755]: E1210 16:25:18.759301 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:25:20 crc kubenswrapper[4755]: E1210 16:25:20.760805 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:25:25 crc kubenswrapper[4755]: E1210 16:25:25.766200 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:25:31 crc kubenswrapper[4755]: I1210 16:25:31.758650 4755 scope.go:117] "RemoveContainer" containerID="3a32840bbc4b33bf990552b48c16e0d53e66710a5880db985795b1072a3ba36c" Dec 10 16:25:31 crc kubenswrapper[4755]: E1210 16:25:31.759327 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:25:31 crc kubenswrapper[4755]: E1210 16:25:31.761395 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:25:40 crc kubenswrapper[4755]: E1210 16:25:40.759929 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:25:42 crc kubenswrapper[4755]: E1210 16:25:42.759056 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:25:45 crc kubenswrapper[4755]: I1210 16:25:45.757568 4755 scope.go:117] "RemoveContainer" containerID="3a32840bbc4b33bf990552b48c16e0d53e66710a5880db985795b1072a3ba36c" Dec 10 16:25:45 crc kubenswrapper[4755]: E1210 16:25:45.758429 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:25:54 crc kubenswrapper[4755]: E1210 16:25:54.761416 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:25:57 crc kubenswrapper[4755]: E1210 16:25:57.760782 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" 
podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:25:58 crc kubenswrapper[4755]: I1210 16:25:58.758211 4755 scope.go:117] "RemoveContainer" containerID="3a32840bbc4b33bf990552b48c16e0d53e66710a5880db985795b1072a3ba36c" Dec 10 16:25:58 crc kubenswrapper[4755]: E1210 16:25:58.758895 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:26:08 crc kubenswrapper[4755]: E1210 16:26:08.761338 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:26:09 crc kubenswrapper[4755]: I1210 16:26:09.757330 4755 scope.go:117] "RemoveContainer" containerID="3a32840bbc4b33bf990552b48c16e0d53e66710a5880db985795b1072a3ba36c" Dec 10 16:26:09 crc kubenswrapper[4755]: E1210 16:26:09.758010 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:26:12 crc kubenswrapper[4755]: E1210 16:26:12.759975 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:26:23 crc kubenswrapper[4755]: E1210 16:26:23.768255 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:26:24 crc kubenswrapper[4755]: I1210 16:26:24.758090 4755 scope.go:117] "RemoveContainer" containerID="3a32840bbc4b33bf990552b48c16e0d53e66710a5880db985795b1072a3ba36c" Dec 10 16:26:24 crc kubenswrapper[4755]: E1210 16:26:24.758779 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:26:26 crc kubenswrapper[4755]: E1210 16:26:26.763602 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:26:34 crc kubenswrapper[4755]: E1210 16:26:34.760138 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:26:39 crc kubenswrapper[4755]: I1210 16:26:39.758715 4755 scope.go:117] "RemoveContainer" containerID="3a32840bbc4b33bf990552b48c16e0d53e66710a5880db985795b1072a3ba36c" Dec 10 16:26:39 crc kubenswrapper[4755]: E1210 16:26:39.761455 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:26:40 crc kubenswrapper[4755]: E1210 16:26:40.761067 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:26:49 crc kubenswrapper[4755]: E1210 16:26:49.760661 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:26:52 crc kubenswrapper[4755]: I1210 16:26:52.757873 4755 scope.go:117] "RemoveContainer" containerID="3a32840bbc4b33bf990552b48c16e0d53e66710a5880db985795b1072a3ba36c" Dec 10 16:26:52 crc kubenswrapper[4755]: E1210 16:26:52.758647 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:26:53 crc kubenswrapper[4755]: E1210 16:26:53.767307 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:27:03 crc kubenswrapper[4755]: E1210 16:27:03.765930 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:27:04 crc kubenswrapper[4755]: I1210 16:27:04.757706 4755 scope.go:117] "RemoveContainer" containerID="3a32840bbc4b33bf990552b48c16e0d53e66710a5880db985795b1072a3ba36c" Dec 10 16:27:04 crc kubenswrapper[4755]: E1210 16:27:04.758226 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:27:04 crc kubenswrapper[4755]: E1210 16:27:04.759299 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:27:13 crc kubenswrapper[4755]: I1210 16:27:13.912810 4755 generic.go:334] "Generic (PLEG): container finished" podID="696db2de-32c6-4679-965f-ec8d2a52ae64" containerID="d540a036bb09a3fa4ecc89a95d169e76c4b9ef7a0e8544bceabd1f3dd47f7ca9" exitCode=2 Dec 10 16:27:13 crc kubenswrapper[4755]: I1210 16:27:13.912897 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8fhgf" event={"ID":"696db2de-32c6-4679-965f-ec8d2a52ae64","Type":"ContainerDied","Data":"d540a036bb09a3fa4ecc89a95d169e76c4b9ef7a0e8544bceabd1f3dd47f7ca9"} Dec 10 16:27:15 crc kubenswrapper[4755]: I1210 16:27:15.381503 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8fhgf" Dec 10 16:27:15 crc kubenswrapper[4755]: I1210 16:27:15.491194 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqjsk\" (UniqueName: \"kubernetes.io/projected/696db2de-32c6-4679-965f-ec8d2a52ae64-kube-api-access-lqjsk\") pod \"696db2de-32c6-4679-965f-ec8d2a52ae64\" (UID: \"696db2de-32c6-4679-965f-ec8d2a52ae64\") " Dec 10 16:27:15 crc kubenswrapper[4755]: I1210 16:27:15.491391 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/696db2de-32c6-4679-965f-ec8d2a52ae64-ssh-key\") pod \"696db2de-32c6-4679-965f-ec8d2a52ae64\" (UID: \"696db2de-32c6-4679-965f-ec8d2a52ae64\") " Dec 10 16:27:15 crc kubenswrapper[4755]: I1210 16:27:15.491588 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/696db2de-32c6-4679-965f-ec8d2a52ae64-inventory\") pod \"696db2de-32c6-4679-965f-ec8d2a52ae64\" (UID: \"696db2de-32c6-4679-965f-ec8d2a52ae64\") " Dec 10 16:27:15 crc kubenswrapper[4755]: I1210 16:27:15.507689 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/696db2de-32c6-4679-965f-ec8d2a52ae64-kube-api-access-lqjsk" (OuterVolumeSpecName: "kube-api-access-lqjsk") pod "696db2de-32c6-4679-965f-ec8d2a52ae64" (UID: "696db2de-32c6-4679-965f-ec8d2a52ae64"). InnerVolumeSpecName "kube-api-access-lqjsk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 16:27:15 crc kubenswrapper[4755]: I1210 16:27:15.569314 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/696db2de-32c6-4679-965f-ec8d2a52ae64-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "696db2de-32c6-4679-965f-ec8d2a52ae64" (UID: "696db2de-32c6-4679-965f-ec8d2a52ae64"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 16:27:15 crc kubenswrapper[4755]: I1210 16:27:15.597851 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lqjsk\" (UniqueName: \"kubernetes.io/projected/696db2de-32c6-4679-965f-ec8d2a52ae64-kube-api-access-lqjsk\") on node \"crc\" DevicePath \"\"" Dec 10 16:27:15 crc kubenswrapper[4755]: I1210 16:27:15.597891 4755 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/696db2de-32c6-4679-965f-ec8d2a52ae64-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 10 16:27:15 crc kubenswrapper[4755]: I1210 16:27:15.644304 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/696db2de-32c6-4679-965f-ec8d2a52ae64-inventory" (OuterVolumeSpecName: "inventory") pod "696db2de-32c6-4679-965f-ec8d2a52ae64" (UID: "696db2de-32c6-4679-965f-ec8d2a52ae64"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 16:27:15 crc kubenswrapper[4755]: I1210 16:27:15.701532 4755 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/696db2de-32c6-4679-965f-ec8d2a52ae64-inventory\") on node \"crc\" DevicePath \"\"" Dec 10 16:27:15 crc kubenswrapper[4755]: I1210 16:27:15.759325 4755 scope.go:117] "RemoveContainer" containerID="3a32840bbc4b33bf990552b48c16e0d53e66710a5880db985795b1072a3ba36c" Dec 10 16:27:15 crc kubenswrapper[4755]: E1210 16:27:15.759673 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:27:15 crc kubenswrapper[4755]: E1210 16:27:15.760740 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:27:15 crc kubenswrapper[4755]: I1210 16:27:15.930884 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8fhgf" event={"ID":"696db2de-32c6-4679-965f-ec8d2a52ae64","Type":"ContainerDied","Data":"7a2485859a37c18ccbf25fa6a6e1327831b010ac894819858e30e6e38a452440"} Dec 10 16:27:15 crc kubenswrapper[4755]: I1210 16:27:15.931130 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a2485859a37c18ccbf25fa6a6e1327831b010ac894819858e30e6e38a452440" Dec 10 16:27:15 crc kubenswrapper[4755]: I1210 16:27:15.930947 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8fhgf" Dec 10 16:27:18 crc kubenswrapper[4755]: E1210 16:27:18.759616 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:27:23 crc kubenswrapper[4755]: I1210 16:27:23.756329 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lkzzd"] Dec 10 16:27:23 crc kubenswrapper[4755]: E1210 16:27:23.757552 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdd666e2-9e8d-42d2-af08-119d8aeea143" containerName="extract-content" Dec 10 16:27:23 crc kubenswrapper[4755]: I1210 16:27:23.757565 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdd666e2-9e8d-42d2-af08-119d8aeea143" containerName="extract-content" Dec 10 16:27:23 crc kubenswrapper[4755]: E1210 16:27:23.757584 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="696db2de-32c6-4679-965f-ec8d2a52ae64" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 10 16:27:23 crc kubenswrapper[4755]: I1210 16:27:23.757592 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="696db2de-32c6-4679-965f-ec8d2a52ae64" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 10 16:27:23 crc kubenswrapper[4755]: E1210 16:27:23.757605 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdd666e2-9e8d-42d2-af08-119d8aeea143" containerName="registry-server" Dec 10 16:27:23 crc kubenswrapper[4755]: I1210 16:27:23.757611 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdd666e2-9e8d-42d2-af08-119d8aeea143" containerName="registry-server" Dec 10 16:27:23 crc kubenswrapper[4755]: E1210 16:27:23.757624 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdd666e2-9e8d-42d2-af08-119d8aeea143" containerName="extract-utilities" Dec 10 16:27:23 crc kubenswrapper[4755]: I1210 16:27:23.757630 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdd666e2-9e8d-42d2-af08-119d8aeea143" containerName="extract-utilities" Dec 10 16:27:23 crc kubenswrapper[4755]: I1210 16:27:23.757836 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="696db2de-32c6-4679-965f-ec8d2a52ae64" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 10 16:27:23 crc kubenswrapper[4755]: I1210 16:27:23.757868 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdd666e2-9e8d-42d2-af08-119d8aeea143" containerName="registry-server" Dec 10 16:27:23 crc kubenswrapper[4755]: I1210 16:27:23.759811 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lkzzd" Dec 10 16:27:23 crc kubenswrapper[4755]: I1210 16:27:23.806965 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lkzzd"] Dec 10 16:27:23 crc kubenswrapper[4755]: I1210 16:27:23.881997 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3df44112-9453-4958-8a64-ce354428a949-catalog-content\") pod \"redhat-operators-lkzzd\" (UID: \"3df44112-9453-4958-8a64-ce354428a949\") " pod="openshift-marketplace/redhat-operators-lkzzd" Dec 10 16:27:23 crc kubenswrapper[4755]: I1210 16:27:23.882253 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3df44112-9453-4958-8a64-ce354428a949-utilities\") pod \"redhat-operators-lkzzd\" (UID: \"3df44112-9453-4958-8a64-ce354428a949\") " pod="openshift-marketplace/redhat-operators-lkzzd" Dec 10 16:27:23 crc kubenswrapper[4755]: I1210 16:27:23.882460 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjvjj\" (UniqueName: \"kubernetes.io/projected/3df44112-9453-4958-8a64-ce354428a949-kube-api-access-tjvjj\") pod \"redhat-operators-lkzzd\" (UID: \"3df44112-9453-4958-8a64-ce354428a949\") " pod="openshift-marketplace/redhat-operators-lkzzd" Dec 10 16:27:23 crc kubenswrapper[4755]: I1210 16:27:23.985030 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjvjj\" (UniqueName: \"kubernetes.io/projected/3df44112-9453-4958-8a64-ce354428a949-kube-api-access-tjvjj\") pod \"redhat-operators-lkzzd\" (UID: \"3df44112-9453-4958-8a64-ce354428a949\") " pod="openshift-marketplace/redhat-operators-lkzzd" Dec 10 16:27:23 crc kubenswrapper[4755]: I1210 16:27:23.985161 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3df44112-9453-4958-8a64-ce354428a949-catalog-content\") pod \"redhat-operators-lkzzd\" (UID: \"3df44112-9453-4958-8a64-ce354428a949\") " pod="openshift-marketplace/redhat-operators-lkzzd" Dec 10 16:27:23 crc kubenswrapper[4755]: I1210 16:27:23.985339 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3df44112-9453-4958-8a64-ce354428a949-utilities\") pod \"redhat-operators-lkzzd\" (UID: \"3df44112-9453-4958-8a64-ce354428a949\") " pod="openshift-marketplace/redhat-operators-lkzzd" Dec 10 16:27:23 crc kubenswrapper[4755]: I1210 16:27:23.985686 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3df44112-9453-4958-8a64-ce354428a949-catalog-content\") pod \"redhat-operators-lkzzd\" (UID: \"3df44112-9453-4958-8a64-ce354428a949\") " pod="openshift-marketplace/redhat-operators-lkzzd" Dec 10 16:27:23 crc kubenswrapper[4755]: I1210 16:27:23.985790 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3df44112-9453-4958-8a64-ce354428a949-utilities\") pod \"redhat-operators-lkzzd\" (UID: \"3df44112-9453-4958-8a64-ce354428a949\") " pod="openshift-marketplace/redhat-operators-lkzzd" Dec 10 16:27:24 crc kubenswrapper[4755]: I1210 16:27:24.005694 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-tjvjj\" (UniqueName: \"kubernetes.io/projected/3df44112-9453-4958-8a64-ce354428a949-kube-api-access-tjvjj\") pod \"redhat-operators-lkzzd\" (UID: \"3df44112-9453-4958-8a64-ce354428a949\") " pod="openshift-marketplace/redhat-operators-lkzzd" Dec 10 16:27:24 crc kubenswrapper[4755]: I1210 16:27:24.119460 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lkzzd" Dec 10 16:27:24 crc kubenswrapper[4755]: I1210 16:27:24.671354 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lkzzd"] Dec 10 16:27:25 crc kubenswrapper[4755]: I1210 16:27:25.026418 4755 generic.go:334] "Generic (PLEG): container finished" podID="3df44112-9453-4958-8a64-ce354428a949" containerID="1265445563a833bf49dc96307dbbae90aef1ee03c61a75474838128d207c73ea" exitCode=0 Dec 10 16:27:25 crc kubenswrapper[4755]: I1210 16:27:25.026527 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lkzzd" event={"ID":"3df44112-9453-4958-8a64-ce354428a949","Type":"ContainerDied","Data":"1265445563a833bf49dc96307dbbae90aef1ee03c61a75474838128d207c73ea"} Dec 10 16:27:25 crc kubenswrapper[4755]: I1210 16:27:25.026701 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lkzzd" event={"ID":"3df44112-9453-4958-8a64-ce354428a949","Type":"ContainerStarted","Data":"2049d66b1f6c5a347cb278682deb7cfecf58d0ea40b8c9b015763d3a4d12d347"} Dec 10 16:27:28 crc kubenswrapper[4755]: E1210 16:27:28.759984 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:27:30 crc kubenswrapper[4755]: I1210 16:27:30.758567 4755 scope.go:117] "RemoveContainer" containerID="3a32840bbc4b33bf990552b48c16e0d53e66710a5880db985795b1072a3ba36c" Dec 10 16:27:30 crc kubenswrapper[4755]: E1210 16:27:30.759129 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:27:33 crc kubenswrapper[4755]: E1210 16:27:33.769277 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:27:38 crc kubenswrapper[4755]: I1210 16:27:38.160002 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lkzzd" event={"ID":"3df44112-9453-4958-8a64-ce354428a949","Type":"ContainerStarted","Data":"97caa4e204453a88c1b4a4bc0f611d7f7318488d775e399ee91cf5dcae6d4ecc"} Dec 10 16:27:40 crc kubenswrapper[4755]: I1210 16:27:40.181943 4755 generic.go:334] "Generic (PLEG): container finished" podID="3df44112-9453-4958-8a64-ce354428a949" 
containerID="97caa4e204453a88c1b4a4bc0f611d7f7318488d775e399ee91cf5dcae6d4ecc" exitCode=0 Dec 10 16:27:40 crc kubenswrapper[4755]: I1210 16:27:40.182032 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lkzzd" event={"ID":"3df44112-9453-4958-8a64-ce354428a949","Type":"ContainerDied","Data":"97caa4e204453a88c1b4a4bc0f611d7f7318488d775e399ee91cf5dcae6d4ecc"} Dec 10 16:27:40 crc kubenswrapper[4755]: E1210 16:27:40.759772 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:27:41 crc kubenswrapper[4755]: I1210 16:27:41.197203 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lkzzd" event={"ID":"3df44112-9453-4958-8a64-ce354428a949","Type":"ContainerStarted","Data":"48c7a948ab3834f78b928489cafa3e39dedbbe6690dc358f9ea9c10c6b55af6a"} Dec 10 16:27:41 crc kubenswrapper[4755]: I1210 16:27:41.228374 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lkzzd" podStartSLOduration=2.5938211190000002 podStartE2EDuration="18.228350322s" podCreationTimestamp="2025-12-10 16:27:23 +0000 UTC" firstStartedPulling="2025-12-10 16:27:25.028704433 +0000 UTC m=+3841.629588065" lastFinishedPulling="2025-12-10 16:27:40.663233636 +0000 UTC m=+3857.264117268" observedRunningTime="2025-12-10 16:27:41.218710356 +0000 UTC m=+3857.819593988" watchObservedRunningTime="2025-12-10 16:27:41.228350322 +0000 UTC m=+3857.829233954" Dec 10 16:27:44 crc kubenswrapper[4755]: I1210 16:27:44.119740 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lkzzd" Dec 10 16:27:44 crc kubenswrapper[4755]: I1210 16:27:44.120360 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lkzzd" Dec 10 16:27:44 crc kubenswrapper[4755]: I1210 16:27:44.758692 4755 scope.go:117] "RemoveContainer" containerID="3a32840bbc4b33bf990552b48c16e0d53e66710a5880db985795b1072a3ba36c" Dec 10 16:27:44 crc kubenswrapper[4755]: E1210 16:27:44.758951 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:27:45 crc kubenswrapper[4755]: I1210 16:27:45.172509 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lkzzd" podUID="3df44112-9453-4958-8a64-ce354428a949" containerName="registry-server" probeResult="failure" output=< Dec 10 16:27:45 crc kubenswrapper[4755]: timeout: failed to connect service ":50051" within 1s Dec 10 16:27:45 crc kubenswrapper[4755]: > Dec 10 16:27:47 crc kubenswrapper[4755]: E1210 16:27:47.761111 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:27:52 crc kubenswrapper[4755]: E1210 16:27:52.759641 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:27:54 crc kubenswrapper[4755]: I1210 16:27:54.192270 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lkzzd" Dec 10 16:27:54 crc kubenswrapper[4755]: I1210 16:27:54.271726 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lkzzd" Dec 10 16:27:54 crc kubenswrapper[4755]: I1210 16:27:54.780261 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lkzzd"] Dec 10 16:27:54 crc kubenswrapper[4755]: I1210 16:27:54.968066 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wxn4w"] Dec 10 16:27:54 crc kubenswrapper[4755]: I1210 16:27:54.968389 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wxn4w" podUID="4cb6ed71-48b0-45c8-a470-4b6441c7bff5" containerName="registry-server" containerID="cri-o://9d7d034b98b54ecedc10a03548420eee8b73a2a66128a96badee77de13588318" gracePeriod=2 Dec 10 16:27:55 crc kubenswrapper[4755]: I1210 16:27:55.406553 4755 generic.go:334] "Generic (PLEG): container finished" podID="4cb6ed71-48b0-45c8-a470-4b6441c7bff5" containerID="9d7d034b98b54ecedc10a03548420eee8b73a2a66128a96badee77de13588318" exitCode=0 Dec 10 16:27:55 crc kubenswrapper[4755]: I1210 16:27:55.406641 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wxn4w" event={"ID":"4cb6ed71-48b0-45c8-a470-4b6441c7bff5","Type":"ContainerDied","Data":"9d7d034b98b54ecedc10a03548420eee8b73a2a66128a96badee77de13588318"} Dec 10 16:27:56 crc kubenswrapper[4755]: I1210 16:27:56.303586 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wxn4w" Dec 10 16:27:56 crc kubenswrapper[4755]: I1210 16:27:56.419626 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wxn4w" event={"ID":"4cb6ed71-48b0-45c8-a470-4b6441c7bff5","Type":"ContainerDied","Data":"aefea556f3904abbf91f85784deee7d232bdbc4be754fa983bc40cb0a4978e11"} Dec 10 16:27:56 crc kubenswrapper[4755]: I1210 16:27:56.419674 4755 scope.go:117] "RemoveContainer" containerID="9d7d034b98b54ecedc10a03548420eee8b73a2a66128a96badee77de13588318" Dec 10 16:27:56 crc kubenswrapper[4755]: I1210 16:27:56.419801 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wxn4w" Dec 10 16:27:56 crc kubenswrapper[4755]: I1210 16:27:56.445538 4755 scope.go:117] "RemoveContainer" containerID="93b4acfc4caa646841ae16d8c2c69e54573a0966d606f11bfdd56c1ac1136939" Dec 10 16:27:56 crc kubenswrapper[4755]: I1210 16:27:56.468581 4755 scope.go:117] "RemoveContainer" containerID="0e102a3cd3d4f61e9aba3a0e51bed4a02589683422eeb7418b48d093f13ab8dd" Dec 10 16:27:56 crc kubenswrapper[4755]: I1210 16:27:56.501824 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cb6ed71-48b0-45c8-a470-4b6441c7bff5-utilities\") pod \"4cb6ed71-48b0-45c8-a470-4b6441c7bff5\" (UID: \"4cb6ed71-48b0-45c8-a470-4b6441c7bff5\") " Dec 10 16:27:56 crc kubenswrapper[4755]: I1210 16:27:56.502125 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cb6ed71-48b0-45c8-a470-4b6441c7bff5-catalog-content\") pod \"4cb6ed71-48b0-45c8-a470-4b6441c7bff5\" (UID: \"4cb6ed71-48b0-45c8-a470-4b6441c7bff5\") " Dec 10 16:27:56 crc kubenswrapper[4755]: I1210 16:27:56.502162 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjzmr\" (UniqueName: \"kubernetes.io/projected/4cb6ed71-48b0-45c8-a470-4b6441c7bff5-kube-api-access-rjzmr\") pod \"4cb6ed71-48b0-45c8-a470-4b6441c7bff5\" (UID: \"4cb6ed71-48b0-45c8-a470-4b6441c7bff5\") " Dec 10 16:27:56 crc kubenswrapper[4755]: I1210 16:27:56.504886 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4cb6ed71-48b0-45c8-a470-4b6441c7bff5-utilities" (OuterVolumeSpecName: "utilities") pod "4cb6ed71-48b0-45c8-a470-4b6441c7bff5" (UID: "4cb6ed71-48b0-45c8-a470-4b6441c7bff5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 16:27:56 crc kubenswrapper[4755]: I1210 16:27:56.510128 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cb6ed71-48b0-45c8-a470-4b6441c7bff5-kube-api-access-rjzmr" (OuterVolumeSpecName: "kube-api-access-rjzmr") pod "4cb6ed71-48b0-45c8-a470-4b6441c7bff5" (UID: "4cb6ed71-48b0-45c8-a470-4b6441c7bff5"). InnerVolumeSpecName "kube-api-access-rjzmr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 16:27:56 crc kubenswrapper[4755]: I1210 16:27:56.604041 4755 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cb6ed71-48b0-45c8-a470-4b6441c7bff5-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 16:27:56 crc kubenswrapper[4755]: I1210 16:27:56.604073 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjzmr\" (UniqueName: \"kubernetes.io/projected/4cb6ed71-48b0-45c8-a470-4b6441c7bff5-kube-api-access-rjzmr\") on node \"crc\" DevicePath \"\"" Dec 10 16:27:56 crc kubenswrapper[4755]: I1210 16:27:56.617157 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4cb6ed71-48b0-45c8-a470-4b6441c7bff5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4cb6ed71-48b0-45c8-a470-4b6441c7bff5" (UID: "4cb6ed71-48b0-45c8-a470-4b6441c7bff5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 16:27:56 crc kubenswrapper[4755]: I1210 16:27:56.705836 4755 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cb6ed71-48b0-45c8-a470-4b6441c7bff5-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 16:27:56 crc kubenswrapper[4755]: I1210 16:27:56.762735 4755 scope.go:117] "RemoveContainer" containerID="3a32840bbc4b33bf990552b48c16e0d53e66710a5880db985795b1072a3ba36c" Dec 10 16:27:56 crc kubenswrapper[4755]: E1210 16:27:56.763324 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:27:56 crc kubenswrapper[4755]: I1210 16:27:56.778626 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wxn4w"] Dec 10 16:27:56 crc kubenswrapper[4755]: I1210 16:27:56.807060 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wxn4w"] Dec 10 16:27:57 crc kubenswrapper[4755]: I1210 16:27:57.788581 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cb6ed71-48b0-45c8-a470-4b6441c7bff5" path="/var/lib/kubelet/pods/4cb6ed71-48b0-45c8-a470-4b6441c7bff5/volumes" Dec 10 16:28:02 crc kubenswrapper[4755]: E1210 16:28:02.760402 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:28:05 crc kubenswrapper[4755]: E1210 16:28:05.760124 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:28:08 crc kubenswrapper[4755]: I1210 16:28:08.758770 4755 scope.go:117] "RemoveContainer" containerID="3a32840bbc4b33bf990552b48c16e0d53e66710a5880db985795b1072a3ba36c" Dec 10 16:28:08 crc kubenswrapper[4755]: E1210 16:28:08.759957 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:28:13 crc kubenswrapper[4755]: E1210 16:28:13.771828 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:28:16 crc kubenswrapper[4755]: E1210 16:28:16.759880 
4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:28:19 crc kubenswrapper[4755]: I1210 16:28:19.758547 4755 scope.go:117] "RemoveContainer" containerID="3a32840bbc4b33bf990552b48c16e0d53e66710a5880db985795b1072a3ba36c" Dec 10 16:28:19 crc kubenswrapper[4755]: E1210 16:28:19.759351 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:28:25 crc kubenswrapper[4755]: E1210 16:28:25.760035 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:28:29 crc kubenswrapper[4755]: E1210 16:28:29.760138 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:28:32 crc kubenswrapper[4755]: I1210 16:28:32.758862 4755 scope.go:117] "RemoveContainer" containerID="3a32840bbc4b33bf990552b48c16e0d53e66710a5880db985795b1072a3ba36c" Dec 10 16:28:32 crc kubenswrapper[4755]: E1210 16:28:32.759532 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:28:40 crc kubenswrapper[4755]: E1210 16:28:40.760170 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:28:43 crc kubenswrapper[4755]: E1210 16:28:43.767689 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:28:46 crc kubenswrapper[4755]: I1210 16:28:46.757860 4755 scope.go:117] "RemoveContainer" containerID="3a32840bbc4b33bf990552b48c16e0d53e66710a5880db985795b1072a3ba36c" Dec 10 16:28:46 crc kubenswrapper[4755]: 
E1210 16:28:46.758608 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:28:51 crc kubenswrapper[4755]: E1210 16:28:51.760246 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:28:56 crc kubenswrapper[4755]: E1210 16:28:56.760670 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:29:01 crc kubenswrapper[4755]: I1210 16:29:01.757823 4755 scope.go:117] "RemoveContainer" containerID="3a32840bbc4b33bf990552b48c16e0d53e66710a5880db985795b1072a3ba36c" Dec 10 16:29:01 crc kubenswrapper[4755]: E1210 16:29:01.758774 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:29:03 crc kubenswrapper[4755]: E1210 16:29:03.776302 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:29:09 crc kubenswrapper[4755]: E1210 16:29:09.760626 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:29:14 crc kubenswrapper[4755]: I1210 16:29:14.758198 4755 scope.go:117] "RemoveContainer" containerID="3a32840bbc4b33bf990552b48c16e0d53e66710a5880db985795b1072a3ba36c" Dec 10 16:29:14 crc kubenswrapper[4755]: E1210 16:29:14.760108 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:29:14 crc kubenswrapper[4755]: E1210 16:29:14.760482 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:29:22 crc kubenswrapper[4755]: E1210 16:29:22.760050 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:29:25 crc kubenswrapper[4755]: I1210 16:29:25.757870 4755 scope.go:117] "RemoveContainer" containerID="3a32840bbc4b33bf990552b48c16e0d53e66710a5880db985795b1072a3ba36c" Dec 10 16:29:25 crc kubenswrapper[4755]: E1210 16:29:25.758615 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:29:25 crc kubenswrapper[4755]: E1210 16:29:25.759733 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:29:33 crc kubenswrapper[4755]: E1210 16:29:33.768736 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:29:40 crc kubenswrapper[4755]: I1210 16:29:40.758446 4755 scope.go:117] "RemoveContainer" containerID="3a32840bbc4b33bf990552b48c16e0d53e66710a5880db985795b1072a3ba36c" Dec 10 16:29:40 crc kubenswrapper[4755]: E1210 16:29:40.894691 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 10 16:29:40 crc kubenswrapper[4755]: E1210 16:29:40.894789 4755 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 10 16:29:40 crc kubenswrapper[4755]: E1210 16:29:40.894957 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mz4t5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-jfc28_openstack(998863b6-4f48-4c8b-8011-a40377686b99): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 10 16:29:40 crc kubenswrapper[4755]: E1210 16:29:40.896181 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:29:41 crc kubenswrapper[4755]: I1210 16:29:41.426725 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" event={"ID":"b132a8b9-1c99-414d-8773-229bf36b305d","Type":"ContainerStarted","Data":"2a9a1a5649a7241f53a4cb79d7a0bc610b813c7bcddded9cc76a86dcecf742ad"} Dec 10 16:29:48 crc kubenswrapper[4755]: E1210 16:29:48.761078 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:29:52 crc kubenswrapper[4755]: E1210 16:29:52.760876 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:29:53 crc kubenswrapper[4755]: I1210 16:29:53.033263 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pq5sx"] Dec 10 16:29:53 crc kubenswrapper[4755]: E1210 16:29:53.033763 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cb6ed71-48b0-45c8-a470-4b6441c7bff5" containerName="extract-utilities" Dec 10 16:29:53 crc kubenswrapper[4755]: I1210 16:29:53.033784 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cb6ed71-48b0-45c8-a470-4b6441c7bff5" containerName="extract-utilities" Dec 10 16:29:53 crc kubenswrapper[4755]: E1210 16:29:53.033804 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cb6ed71-48b0-45c8-a470-4b6441c7bff5" containerName="extract-content" Dec 10 16:29:53 crc kubenswrapper[4755]: I1210 16:29:53.033811 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cb6ed71-48b0-45c8-a470-4b6441c7bff5" containerName="extract-content" Dec 10 16:29:53 crc kubenswrapper[4755]: E1210 16:29:53.033821 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cb6ed71-48b0-45c8-a470-4b6441c7bff5" containerName="registry-server" Dec 10 16:29:53 crc kubenswrapper[4755]: I1210 16:29:53.033828 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cb6ed71-48b0-45c8-a470-4b6441c7bff5" containerName="registry-server" Dec 10 16:29:53 crc kubenswrapper[4755]: I1210 16:29:53.034014 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cb6ed71-48b0-45c8-a470-4b6441c7bff5" containerName="registry-server" Dec 10 16:29:53 crc kubenswrapper[4755]: I1210 16:29:53.034851 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pq5sx" Dec 10 16:29:53 crc kubenswrapper[4755]: I1210 16:29:53.037263 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 10 16:29:53 crc kubenswrapper[4755]: I1210 16:29:53.037564 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 10 16:29:53 crc kubenswrapper[4755]: I1210 16:29:53.038816 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-74mg7" Dec 10 16:29:53 crc kubenswrapper[4755]: I1210 16:29:53.038899 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 10 16:29:53 crc kubenswrapper[4755]: I1210 16:29:53.053413 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pq5sx"] Dec 10 16:29:53 crc kubenswrapper[4755]: I1210 16:29:53.063316 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwhm8\" (UniqueName: \"kubernetes.io/projected/48fe9944-e282-45c9-b9b2-6716af358188-kube-api-access-nwhm8\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pq5sx\" (UID: \"48fe9944-e282-45c9-b9b2-6716af358188\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pq5sx" Dec 10 16:29:53 crc kubenswrapper[4755]: I1210 16:29:53.063427 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/48fe9944-e282-45c9-b9b2-6716af358188-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pq5sx\" (UID: \"48fe9944-e282-45c9-b9b2-6716af358188\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pq5sx" Dec 10 16:29:53 crc kubenswrapper[4755]: I1210 16:29:53.063525 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/48fe9944-e282-45c9-b9b2-6716af358188-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pq5sx\" (UID: \"48fe9944-e282-45c9-b9b2-6716af358188\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pq5sx" Dec 10 16:29:53 crc kubenswrapper[4755]: I1210 16:29:53.165008 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwhm8\" (UniqueName: \"kubernetes.io/projected/48fe9944-e282-45c9-b9b2-6716af358188-kube-api-access-nwhm8\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pq5sx\" (UID: \"48fe9944-e282-45c9-b9b2-6716af358188\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pq5sx" Dec 10 16:29:53 crc kubenswrapper[4755]: I1210 16:29:53.165140 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/48fe9944-e282-45c9-b9b2-6716af358188-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pq5sx\" (UID: \"48fe9944-e282-45c9-b9b2-6716af358188\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pq5sx" Dec 10 16:29:53 crc kubenswrapper[4755]: I1210 16:29:53.165234 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/48fe9944-e282-45c9-b9b2-6716af358188-inventory\") pod 
\"download-cache-edpm-deployment-openstack-edpm-ipam-pq5sx\" (UID: \"48fe9944-e282-45c9-b9b2-6716af358188\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pq5sx" Dec 10 16:29:53 crc kubenswrapper[4755]: I1210 16:29:53.171074 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/48fe9944-e282-45c9-b9b2-6716af358188-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pq5sx\" (UID: \"48fe9944-e282-45c9-b9b2-6716af358188\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pq5sx" Dec 10 16:29:53 crc kubenswrapper[4755]: I1210 16:29:53.171182 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/48fe9944-e282-45c9-b9b2-6716af358188-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pq5sx\" (UID: \"48fe9944-e282-45c9-b9b2-6716af358188\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pq5sx" Dec 10 16:29:53 crc kubenswrapper[4755]: I1210 16:29:53.180732 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwhm8\" (UniqueName: \"kubernetes.io/projected/48fe9944-e282-45c9-b9b2-6716af358188-kube-api-access-nwhm8\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pq5sx\" (UID: \"48fe9944-e282-45c9-b9b2-6716af358188\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pq5sx" Dec 10 16:29:53 crc kubenswrapper[4755]: I1210 16:29:53.374266 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pq5sx" Dec 10 16:29:53 crc kubenswrapper[4755]: I1210 16:29:53.933950 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pq5sx"] Dec 10 16:29:54 crc kubenswrapper[4755]: I1210 16:29:54.546000 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pq5sx" event={"ID":"48fe9944-e282-45c9-b9b2-6716af358188","Type":"ContainerStarted","Data":"d481b0c8c9d5cb766b918c910c2766511d4b67fad078f3be518bfc03e340fa11"} Dec 10 16:29:55 crc kubenswrapper[4755]: I1210 16:29:55.555827 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pq5sx" event={"ID":"48fe9944-e282-45c9-b9b2-6716af358188","Type":"ContainerStarted","Data":"3115dafb5df3819821b3e9e18a98268295e89c720cd1771241373d6bc211363a"} Dec 10 16:29:55 crc kubenswrapper[4755]: I1210 16:29:55.573412 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pq5sx" podStartSLOduration=1.5550346419999999 podStartE2EDuration="2.573389016s" podCreationTimestamp="2025-12-10 16:29:53 +0000 UTC" firstStartedPulling="2025-12-10 16:29:53.936903759 +0000 UTC m=+3990.537787391" lastFinishedPulling="2025-12-10 16:29:54.955258123 +0000 UTC m=+3991.556141765" observedRunningTime="2025-12-10 16:29:55.570883678 +0000 UTC m=+3992.171767310" watchObservedRunningTime="2025-12-10 16:29:55.573389016 +0000 UTC m=+3992.174272648" Dec 10 16:29:59 crc kubenswrapper[4755]: E1210 16:29:59.760776 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:30:00 crc kubenswrapper[4755]: I1210 16:30:00.161845 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29423070-r8tpw"] Dec 10 16:30:00 crc kubenswrapper[4755]: I1210 16:30:00.163889 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29423070-r8tpw" Dec 10 16:30:00 crc kubenswrapper[4755]: I1210 16:30:00.167300 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 10 16:30:00 crc kubenswrapper[4755]: I1210 16:30:00.168919 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 10 16:30:00 crc kubenswrapper[4755]: I1210 16:30:00.172141 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29423070-r8tpw"] Dec 10 16:30:00 crc kubenswrapper[4755]: I1210 16:30:00.346222 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7cee7cf6-4d9f-482b-b82b-98a69db682e0-config-volume\") pod \"collect-profiles-29423070-r8tpw\" (UID: \"7cee7cf6-4d9f-482b-b82b-98a69db682e0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423070-r8tpw" Dec 10 16:30:00 crc kubenswrapper[4755]: I1210 16:30:00.346628 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7cee7cf6-4d9f-482b-b82b-98a69db682e0-secret-volume\") pod \"collect-profiles-29423070-r8tpw\" (UID: \"7cee7cf6-4d9f-482b-b82b-98a69db682e0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423070-r8tpw" Dec 10 16:30:00 crc kubenswrapper[4755]: I1210 16:30:00.346798 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jwc8\" (UniqueName: \"kubernetes.io/projected/7cee7cf6-4d9f-482b-b82b-98a69db682e0-kube-api-access-8jwc8\") pod \"collect-profiles-29423070-r8tpw\" (UID: \"7cee7cf6-4d9f-482b-b82b-98a69db682e0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423070-r8tpw" Dec 10 16:30:00 crc kubenswrapper[4755]: I1210 16:30:00.449172 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7cee7cf6-4d9f-482b-b82b-98a69db682e0-secret-volume\") pod \"collect-profiles-29423070-r8tpw\" (UID: \"7cee7cf6-4d9f-482b-b82b-98a69db682e0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423070-r8tpw" Dec 10 16:30:00 crc kubenswrapper[4755]: I1210 16:30:00.449227 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jwc8\" (UniqueName: \"kubernetes.io/projected/7cee7cf6-4d9f-482b-b82b-98a69db682e0-kube-api-access-8jwc8\") pod \"collect-profiles-29423070-r8tpw\" (UID: \"7cee7cf6-4d9f-482b-b82b-98a69db682e0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423070-r8tpw" Dec 10 16:30:00 crc kubenswrapper[4755]: I1210 16:30:00.449372 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/7cee7cf6-4d9f-482b-b82b-98a69db682e0-config-volume\") pod \"collect-profiles-29423070-r8tpw\" (UID: \"7cee7cf6-4d9f-482b-b82b-98a69db682e0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423070-r8tpw" Dec 10 16:30:00 crc kubenswrapper[4755]: I1210 16:30:00.450293 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7cee7cf6-4d9f-482b-b82b-98a69db682e0-config-volume\") pod \"collect-profiles-29423070-r8tpw\" (UID: \"7cee7cf6-4d9f-482b-b82b-98a69db682e0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423070-r8tpw" Dec 10 16:30:00 crc kubenswrapper[4755]: I1210 16:30:00.457494 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7cee7cf6-4d9f-482b-b82b-98a69db682e0-secret-volume\") pod \"collect-profiles-29423070-r8tpw\" (UID: \"7cee7cf6-4d9f-482b-b82b-98a69db682e0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423070-r8tpw" Dec 10 16:30:00 crc kubenswrapper[4755]: I1210 16:30:00.466001 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jwc8\" (UniqueName: \"kubernetes.io/projected/7cee7cf6-4d9f-482b-b82b-98a69db682e0-kube-api-access-8jwc8\") pod \"collect-profiles-29423070-r8tpw\" (UID: \"7cee7cf6-4d9f-482b-b82b-98a69db682e0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423070-r8tpw" Dec 10 16:30:00 crc kubenswrapper[4755]: I1210 16:30:00.491931 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29423070-r8tpw" Dec 10 16:30:00 crc kubenswrapper[4755]: I1210 16:30:00.946832 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29423070-r8tpw"] Dec 10 16:30:00 crc kubenswrapper[4755]: W1210 16:30:00.948096 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7cee7cf6_4d9f_482b_b82b_98a69db682e0.slice/crio-0a47bbdc939588f02ad5f0e791744c6952b2ef18c0d785f95ca61c31d75bc405 WatchSource:0}: Error finding container 0a47bbdc939588f02ad5f0e791744c6952b2ef18c0d785f95ca61c31d75bc405: Status 404 returned error can't find the container with id 0a47bbdc939588f02ad5f0e791744c6952b2ef18c0d785f95ca61c31d75bc405 Dec 10 16:30:01 crc kubenswrapper[4755]: I1210 16:30:01.621038 4755 generic.go:334] "Generic (PLEG): container finished" podID="7cee7cf6-4d9f-482b-b82b-98a69db682e0" containerID="e8c4e95a9b454f139c1732e5dfd3e26088f8c8a82efba4803b1f0add11259d2f" exitCode=0 Dec 10 16:30:01 crc kubenswrapper[4755]: I1210 16:30:01.621081 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29423070-r8tpw" event={"ID":"7cee7cf6-4d9f-482b-b82b-98a69db682e0","Type":"ContainerDied","Data":"e8c4e95a9b454f139c1732e5dfd3e26088f8c8a82efba4803b1f0add11259d2f"} Dec 10 16:30:01 crc kubenswrapper[4755]: I1210 16:30:01.621342 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29423070-r8tpw" event={"ID":"7cee7cf6-4d9f-482b-b82b-98a69db682e0","Type":"ContainerStarted","Data":"0a47bbdc939588f02ad5f0e791744c6952b2ef18c0d785f95ca61c31d75bc405"} Dec 10 16:30:03 crc kubenswrapper[4755]: I1210 16:30:03.197781 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29423070-r8tpw" Dec 10 16:30:03 crc kubenswrapper[4755]: I1210 16:30:03.211426 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7cee7cf6-4d9f-482b-b82b-98a69db682e0-secret-volume\") pod \"7cee7cf6-4d9f-482b-b82b-98a69db682e0\" (UID: \"7cee7cf6-4d9f-482b-b82b-98a69db682e0\") " Dec 10 16:30:03 crc kubenswrapper[4755]: I1210 16:30:03.211746 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8jwc8\" (UniqueName: \"kubernetes.io/projected/7cee7cf6-4d9f-482b-b82b-98a69db682e0-kube-api-access-8jwc8\") pod \"7cee7cf6-4d9f-482b-b82b-98a69db682e0\" (UID: \"7cee7cf6-4d9f-482b-b82b-98a69db682e0\") " Dec 10 16:30:03 crc kubenswrapper[4755]: I1210 16:30:03.211788 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7cee7cf6-4d9f-482b-b82b-98a69db682e0-config-volume\") pod \"7cee7cf6-4d9f-482b-b82b-98a69db682e0\" (UID: \"7cee7cf6-4d9f-482b-b82b-98a69db682e0\") " Dec 10 16:30:03 crc kubenswrapper[4755]: I1210 16:30:03.213688 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7cee7cf6-4d9f-482b-b82b-98a69db682e0-config-volume" (OuterVolumeSpecName: "config-volume") pod "7cee7cf6-4d9f-482b-b82b-98a69db682e0" (UID: "7cee7cf6-4d9f-482b-b82b-98a69db682e0"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 16:30:03 crc kubenswrapper[4755]: I1210 16:30:03.220541 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cee7cf6-4d9f-482b-b82b-98a69db682e0-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7cee7cf6-4d9f-482b-b82b-98a69db682e0" (UID: "7cee7cf6-4d9f-482b-b82b-98a69db682e0"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 16:30:03 crc kubenswrapper[4755]: I1210 16:30:03.228818 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cee7cf6-4d9f-482b-b82b-98a69db682e0-kube-api-access-8jwc8" (OuterVolumeSpecName: "kube-api-access-8jwc8") pod "7cee7cf6-4d9f-482b-b82b-98a69db682e0" (UID: "7cee7cf6-4d9f-482b-b82b-98a69db682e0"). InnerVolumeSpecName "kube-api-access-8jwc8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 16:30:03 crc kubenswrapper[4755]: I1210 16:30:03.314170 4755 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7cee7cf6-4d9f-482b-b82b-98a69db682e0-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 10 16:30:03 crc kubenswrapper[4755]: I1210 16:30:03.314215 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8jwc8\" (UniqueName: \"kubernetes.io/projected/7cee7cf6-4d9f-482b-b82b-98a69db682e0-kube-api-access-8jwc8\") on node \"crc\" DevicePath \"\"" Dec 10 16:30:03 crc kubenswrapper[4755]: I1210 16:30:03.314224 4755 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7cee7cf6-4d9f-482b-b82b-98a69db682e0-config-volume\") on node \"crc\" DevicePath \"\"" Dec 10 16:30:03 crc kubenswrapper[4755]: I1210 16:30:03.640450 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29423070-r8tpw" event={"ID":"7cee7cf6-4d9f-482b-b82b-98a69db682e0","Type":"ContainerDied","Data":"0a47bbdc939588f02ad5f0e791744c6952b2ef18c0d785f95ca61c31d75bc405"} Dec 10 16:30:03 crc kubenswrapper[4755]: I1210 16:30:03.640509 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a47bbdc939588f02ad5f0e791744c6952b2ef18c0d785f95ca61c31d75bc405" Dec 10 16:30:03 crc kubenswrapper[4755]: I1210 16:30:03.640541 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29423070-r8tpw" Dec 10 16:30:04 crc kubenswrapper[4755]: I1210 16:30:04.284750 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29423025-mmtdq"] Dec 10 16:30:04 crc kubenswrapper[4755]: I1210 16:30:04.297180 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29423025-mmtdq"] Dec 10 16:30:05 crc kubenswrapper[4755]: I1210 16:30:05.781848 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a116975b-8d46-40b6-99e4-134b1558c5d9" path="/var/lib/kubelet/pods/a116975b-8d46-40b6-99e4-134b1558c5d9/volumes" Dec 10 16:30:07 crc kubenswrapper[4755]: E1210 16:30:07.760270 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:30:07 crc kubenswrapper[4755]: I1210 16:30:07.898936 4755 scope.go:117] "RemoveContainer" containerID="12f329ae012dedd19f8b9e8a92666195fe04129d66f2e6d1935f981a121c909b" Dec 10 16:30:13 crc kubenswrapper[4755]: I1210 16:30:13.773677 4755 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 10 16:30:13 crc kubenswrapper[4755]: E1210 16:30:13.908926 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 10 16:30:13 crc kubenswrapper[4755]: E1210 16:30:13.909011 4755 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 10 16:30:13 crc kubenswrapper[4755]: E1210 16:30:13.909172 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5d4h5b7hfbh5ddh688h9ch55bh7chf6h5ddh68ch94h69h5c5h596h59bh569hfchc4h676hcbh64dhdbh57fh75h5c9h98h59ch679h566h77h9cq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hw9gj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(6d104bea-ecdc-4fe1-9861-fb1a19fce845): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Dec 10 16:30:13 crc kubenswrapper[4755]: E1210 16:30:13.910576 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:30:17 crc kubenswrapper[4755]: I1210 16:30:17.744059 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7cfbl"] Dec 10 16:30:17 crc kubenswrapper[4755]: E1210 16:30:17.745167 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cee7cf6-4d9f-482b-b82b-98a69db682e0" containerName="collect-profiles" Dec 10 16:30:17 crc kubenswrapper[4755]: I1210 16:30:17.745187 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cee7cf6-4d9f-482b-b82b-98a69db682e0" containerName="collect-profiles" Dec 10 16:30:17 crc kubenswrapper[4755]: I1210 16:30:17.745567 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cee7cf6-4d9f-482b-b82b-98a69db682e0" containerName="collect-profiles" Dec 10 16:30:17 crc kubenswrapper[4755]: I1210 16:30:17.747612 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7cfbl" Dec 10 16:30:17 crc kubenswrapper[4755]: I1210 16:30:17.771669 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7cfbl"] Dec 10 16:30:17 crc kubenswrapper[4755]: I1210 16:30:17.933101 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4dfc11c-10fe-4dd0-8f19-d25bd8e87312-utilities\") pod \"community-operators-7cfbl\" (UID: \"a4dfc11c-10fe-4dd0-8f19-d25bd8e87312\") " pod="openshift-marketplace/community-operators-7cfbl" Dec 10 16:30:17 crc kubenswrapper[4755]: I1210 16:30:17.933349 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4dfc11c-10fe-4dd0-8f19-d25bd8e87312-catalog-content\") pod \"community-operators-7cfbl\" (UID: \"a4dfc11c-10fe-4dd0-8f19-d25bd8e87312\") " pod="openshift-marketplace/community-operators-7cfbl" Dec 10 16:30:17 crc kubenswrapper[4755]: I1210 16:30:17.933634 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n22zd\" (UniqueName: \"kubernetes.io/projected/a4dfc11c-10fe-4dd0-8f19-d25bd8e87312-kube-api-access-n22zd\") pod \"community-operators-7cfbl\" (UID: \"a4dfc11c-10fe-4dd0-8f19-d25bd8e87312\") " pod="openshift-marketplace/community-operators-7cfbl" Dec 10 16:30:18 crc kubenswrapper[4755]: I1210 16:30:18.035986 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n22zd\" (UniqueName: \"kubernetes.io/projected/a4dfc11c-10fe-4dd0-8f19-d25bd8e87312-kube-api-access-n22zd\") pod \"community-operators-7cfbl\" (UID: \"a4dfc11c-10fe-4dd0-8f19-d25bd8e87312\") " pod="openshift-marketplace/community-operators-7cfbl" Dec 10 16:30:18 crc kubenswrapper[4755]: I1210 16:30:18.036360 4755 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4dfc11c-10fe-4dd0-8f19-d25bd8e87312-utilities\") pod \"community-operators-7cfbl\" (UID: \"a4dfc11c-10fe-4dd0-8f19-d25bd8e87312\") " pod="openshift-marketplace/community-operators-7cfbl" Dec 10 16:30:18 crc kubenswrapper[4755]: I1210 16:30:18.036502 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4dfc11c-10fe-4dd0-8f19-d25bd8e87312-catalog-content\") pod \"community-operators-7cfbl\" (UID: \"a4dfc11c-10fe-4dd0-8f19-d25bd8e87312\") " pod="openshift-marketplace/community-operators-7cfbl" Dec 10 16:30:18 crc kubenswrapper[4755]: I1210 16:30:18.036954 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4dfc11c-10fe-4dd0-8f19-d25bd8e87312-utilities\") pod \"community-operators-7cfbl\" (UID: \"a4dfc11c-10fe-4dd0-8f19-d25bd8e87312\") " pod="openshift-marketplace/community-operators-7cfbl" Dec 10 16:30:18 crc kubenswrapper[4755]: I1210 16:30:18.037134 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4dfc11c-10fe-4dd0-8f19-d25bd8e87312-catalog-content\") pod \"community-operators-7cfbl\" (UID: \"a4dfc11c-10fe-4dd0-8f19-d25bd8e87312\") " pod="openshift-marketplace/community-operators-7cfbl" Dec 10 16:30:18 crc kubenswrapper[4755]: I1210 16:30:18.064774 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n22zd\" (UniqueName: \"kubernetes.io/projected/a4dfc11c-10fe-4dd0-8f19-d25bd8e87312-kube-api-access-n22zd\") pod \"community-operators-7cfbl\" (UID: \"a4dfc11c-10fe-4dd0-8f19-d25bd8e87312\") " pod="openshift-marketplace/community-operators-7cfbl" Dec 10 16:30:18 crc kubenswrapper[4755]: I1210 16:30:18.067822 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7cfbl" Dec 10 16:30:18 crc kubenswrapper[4755]: I1210 16:30:18.614095 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7cfbl"] Dec 10 16:30:19 crc kubenswrapper[4755]: I1210 16:30:18.808728 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7cfbl" event={"ID":"a4dfc11c-10fe-4dd0-8f19-d25bd8e87312","Type":"ContainerStarted","Data":"b57be9c6c652caf2ac11d01b3ce9465be1e0785009439b48650ab555c78c6fbe"} Dec 10 16:30:19 crc kubenswrapper[4755]: I1210 16:30:19.820953 4755 generic.go:334] "Generic (PLEG): container finished" podID="a4dfc11c-10fe-4dd0-8f19-d25bd8e87312" containerID="ebc038f5d4f73bf64459fdf08a2234010c34596526cf15ef4c713ccd6a1a61fa" exitCode=0 Dec 10 16:30:19 crc kubenswrapper[4755]: I1210 16:30:19.821020 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7cfbl" event={"ID":"a4dfc11c-10fe-4dd0-8f19-d25bd8e87312","Type":"ContainerDied","Data":"ebc038f5d4f73bf64459fdf08a2234010c34596526cf15ef4c713ccd6a1a61fa"} Dec 10 16:30:20 crc kubenswrapper[4755]: E1210 16:30:20.759941 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:30:20 crc kubenswrapper[4755]: I1210 16:30:20.833358 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7cfbl" event={"ID":"a4dfc11c-10fe-4dd0-8f19-d25bd8e87312","Type":"ContainerStarted","Data":"ddcdd793a32e77941b140b8a2558c8b5e1f03a3a8b09e515e1347578db8150da"} Dec 10 16:30:21 crc kubenswrapper[4755]: I1210 16:30:21.844093 4755 generic.go:334] "Generic (PLEG): container finished" podID="a4dfc11c-10fe-4dd0-8f19-d25bd8e87312" containerID="ddcdd793a32e77941b140b8a2558c8b5e1f03a3a8b09e515e1347578db8150da" exitCode=0 Dec 10 16:30:21 crc kubenswrapper[4755]: I1210 16:30:21.844129 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7cfbl" event={"ID":"a4dfc11c-10fe-4dd0-8f19-d25bd8e87312","Type":"ContainerDied","Data":"ddcdd793a32e77941b140b8a2558c8b5e1f03a3a8b09e515e1347578db8150da"} Dec 10 16:30:22 crc kubenswrapper[4755]: I1210 16:30:22.857754 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7cfbl" event={"ID":"a4dfc11c-10fe-4dd0-8f19-d25bd8e87312","Type":"ContainerStarted","Data":"31cc3fe74de9ea5a1c9ef598938ab7e1c7d851058a25490114dcafa6f32dd7a9"} Dec 10 16:30:22 crc kubenswrapper[4755]: I1210 16:30:22.878558 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7cfbl" podStartSLOduration=3.360485974 podStartE2EDuration="5.878538972s" podCreationTimestamp="2025-12-10 16:30:17 +0000 UTC" firstStartedPulling="2025-12-10 16:30:19.82305419 +0000 UTC m=+4016.423937822" lastFinishedPulling="2025-12-10 16:30:22.341107188 +0000 UTC m=+4018.941990820" observedRunningTime="2025-12-10 16:30:22.875029235 +0000 UTC m=+4019.475912877" watchObservedRunningTime="2025-12-10 16:30:22.878538972 +0000 UTC m=+4019.479422604" Dec 10 16:30:26 crc kubenswrapper[4755]: E1210 16:30:26.759179 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:30:28 crc kubenswrapper[4755]: I1210 16:30:28.068596 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7cfbl" Dec 10 16:30:28 crc kubenswrapper[4755]: I1210 16:30:28.068966 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7cfbl" Dec 10 16:30:28 crc kubenswrapper[4755]: I1210 16:30:28.723458 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7cfbl" Dec 10 16:30:29 crc kubenswrapper[4755]: I1210 16:30:29.032520 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7cfbl" Dec 10 16:30:29 crc kubenswrapper[4755]: I1210 16:30:29.741293 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7cfbl"] Dec 10 16:30:30 crc kubenswrapper[4755]: I1210 16:30:30.994563 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7cfbl" podUID="a4dfc11c-10fe-4dd0-8f19-d25bd8e87312" containerName="registry-server" containerID="cri-o://31cc3fe74de9ea5a1c9ef598938ab7e1c7d851058a25490114dcafa6f32dd7a9" gracePeriod=2 Dec 10 16:30:31 crc kubenswrapper[4755]: E1210 16:30:31.126702 4755 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4dfc11c_10fe_4dd0_8f19_d25bd8e87312.slice/crio-31cc3fe74de9ea5a1c9ef598938ab7e1c7d851058a25490114dcafa6f32dd7a9.scope\": RecentStats: unable to find data in memory cache]" Dec 10 16:30:31 crc kubenswrapper[4755]: I1210 16:30:31.609704 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7cfbl" Dec 10 16:30:31 crc kubenswrapper[4755]: I1210 16:30:31.741409 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4dfc11c-10fe-4dd0-8f19-d25bd8e87312-utilities\") pod \"a4dfc11c-10fe-4dd0-8f19-d25bd8e87312\" (UID: \"a4dfc11c-10fe-4dd0-8f19-d25bd8e87312\") " Dec 10 16:30:31 crc kubenswrapper[4755]: I1210 16:30:31.741683 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n22zd\" (UniqueName: \"kubernetes.io/projected/a4dfc11c-10fe-4dd0-8f19-d25bd8e87312-kube-api-access-n22zd\") pod \"a4dfc11c-10fe-4dd0-8f19-d25bd8e87312\" (UID: \"a4dfc11c-10fe-4dd0-8f19-d25bd8e87312\") " Dec 10 16:30:31 crc kubenswrapper[4755]: I1210 16:30:31.741743 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4dfc11c-10fe-4dd0-8f19-d25bd8e87312-catalog-content\") pod \"a4dfc11c-10fe-4dd0-8f19-d25bd8e87312\" (UID: \"a4dfc11c-10fe-4dd0-8f19-d25bd8e87312\") " Dec 10 16:30:31 crc kubenswrapper[4755]: I1210 16:30:31.742568 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4dfc11c-10fe-4dd0-8f19-d25bd8e87312-utilities" (OuterVolumeSpecName: "utilities") pod "a4dfc11c-10fe-4dd0-8f19-d25bd8e87312" (UID: "a4dfc11c-10fe-4dd0-8f19-d25bd8e87312"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 16:30:31 crc kubenswrapper[4755]: I1210 16:30:31.743095 4755 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4dfc11c-10fe-4dd0-8f19-d25bd8e87312-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 16:30:31 crc kubenswrapper[4755]: I1210 16:30:31.751085 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4dfc11c-10fe-4dd0-8f19-d25bd8e87312-kube-api-access-n22zd" (OuterVolumeSpecName: "kube-api-access-n22zd") pod "a4dfc11c-10fe-4dd0-8f19-d25bd8e87312" (UID: "a4dfc11c-10fe-4dd0-8f19-d25bd8e87312"). InnerVolumeSpecName "kube-api-access-n22zd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 16:30:31 crc kubenswrapper[4755]: I1210 16:30:31.806886 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4dfc11c-10fe-4dd0-8f19-d25bd8e87312-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a4dfc11c-10fe-4dd0-8f19-d25bd8e87312" (UID: "a4dfc11c-10fe-4dd0-8f19-d25bd8e87312"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 16:30:31 crc kubenswrapper[4755]: I1210 16:30:31.845629 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n22zd\" (UniqueName: \"kubernetes.io/projected/a4dfc11c-10fe-4dd0-8f19-d25bd8e87312-kube-api-access-n22zd\") on node \"crc\" DevicePath \"\"" Dec 10 16:30:31 crc kubenswrapper[4755]: I1210 16:30:31.845680 4755 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4dfc11c-10fe-4dd0-8f19-d25bd8e87312-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 16:30:32 crc kubenswrapper[4755]: I1210 16:30:32.005754 4755 generic.go:334] "Generic (PLEG): container finished" podID="a4dfc11c-10fe-4dd0-8f19-d25bd8e87312" containerID="31cc3fe74de9ea5a1c9ef598938ab7e1c7d851058a25490114dcafa6f32dd7a9" exitCode=0 Dec 10 16:30:32 crc kubenswrapper[4755]: I1210 16:30:32.005816 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7cfbl" event={"ID":"a4dfc11c-10fe-4dd0-8f19-d25bd8e87312","Type":"ContainerDied","Data":"31cc3fe74de9ea5a1c9ef598938ab7e1c7d851058a25490114dcafa6f32dd7a9"} Dec 10 16:30:32 crc kubenswrapper[4755]: I1210 16:30:32.005882 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7cfbl" event={"ID":"a4dfc11c-10fe-4dd0-8f19-d25bd8e87312","Type":"ContainerDied","Data":"b57be9c6c652caf2ac11d01b3ce9465be1e0785009439b48650ab555c78c6fbe"} Dec 10 16:30:32 crc kubenswrapper[4755]: I1210 16:30:32.005908 4755 scope.go:117] "RemoveContainer" containerID="31cc3fe74de9ea5a1c9ef598938ab7e1c7d851058a25490114dcafa6f32dd7a9" Dec 10 16:30:32 crc kubenswrapper[4755]: I1210 16:30:32.005910 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7cfbl" Dec 10 16:30:32 crc kubenswrapper[4755]: I1210 16:30:32.041012 4755 scope.go:117] "RemoveContainer" containerID="ddcdd793a32e77941b140b8a2558c8b5e1f03a3a8b09e515e1347578db8150da" Dec 10 16:30:32 crc kubenswrapper[4755]: I1210 16:30:32.050131 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7cfbl"] Dec 10 16:30:32 crc kubenswrapper[4755]: I1210 16:30:32.060638 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7cfbl"] Dec 10 16:30:32 crc kubenswrapper[4755]: I1210 16:30:32.074089 4755 scope.go:117] "RemoveContainer" containerID="ebc038f5d4f73bf64459fdf08a2234010c34596526cf15ef4c713ccd6a1a61fa" Dec 10 16:30:32 crc kubenswrapper[4755]: I1210 16:30:32.118814 4755 scope.go:117] "RemoveContainer" containerID="31cc3fe74de9ea5a1c9ef598938ab7e1c7d851058a25490114dcafa6f32dd7a9" Dec 10 16:30:32 crc kubenswrapper[4755]: E1210 16:30:32.119694 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31cc3fe74de9ea5a1c9ef598938ab7e1c7d851058a25490114dcafa6f32dd7a9\": container with ID starting with 31cc3fe74de9ea5a1c9ef598938ab7e1c7d851058a25490114dcafa6f32dd7a9 not found: ID does not exist" containerID="31cc3fe74de9ea5a1c9ef598938ab7e1c7d851058a25490114dcafa6f32dd7a9" Dec 10 16:30:32 crc kubenswrapper[4755]: I1210 16:30:32.119759 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31cc3fe74de9ea5a1c9ef598938ab7e1c7d851058a25490114dcafa6f32dd7a9"} err="failed to get container status \"31cc3fe74de9ea5a1c9ef598938ab7e1c7d851058a25490114dcafa6f32dd7a9\": rpc error: code = NotFound desc = could not find container \"31cc3fe74de9ea5a1c9ef598938ab7e1c7d851058a25490114dcafa6f32dd7a9\": container with ID starting with 31cc3fe74de9ea5a1c9ef598938ab7e1c7d851058a25490114dcafa6f32dd7a9 not found: ID does not exist" Dec 10 16:30:32 crc kubenswrapper[4755]: I1210 16:30:32.119789 4755 scope.go:117] "RemoveContainer" containerID="ddcdd793a32e77941b140b8a2558c8b5e1f03a3a8b09e515e1347578db8150da" Dec 10 16:30:32 crc kubenswrapper[4755]: E1210 16:30:32.120095 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddcdd793a32e77941b140b8a2558c8b5e1f03a3a8b09e515e1347578db8150da\": container with ID starting with ddcdd793a32e77941b140b8a2558c8b5e1f03a3a8b09e515e1347578db8150da not found: ID does not exist" containerID="ddcdd793a32e77941b140b8a2558c8b5e1f03a3a8b09e515e1347578db8150da" Dec 10 16:30:32 crc kubenswrapper[4755]: I1210 16:30:32.120217 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddcdd793a32e77941b140b8a2558c8b5e1f03a3a8b09e515e1347578db8150da"} err="failed to get container status \"ddcdd793a32e77941b140b8a2558c8b5e1f03a3a8b09e515e1347578db8150da\": rpc error: code = NotFound desc = could not find container \"ddcdd793a32e77941b140b8a2558c8b5e1f03a3a8b09e515e1347578db8150da\": container with ID starting with ddcdd793a32e77941b140b8a2558c8b5e1f03a3a8b09e515e1347578db8150da not found: ID does not exist" Dec 10 16:30:32 crc kubenswrapper[4755]: I1210 16:30:32.120314 4755 scope.go:117] "RemoveContainer" containerID="ebc038f5d4f73bf64459fdf08a2234010c34596526cf15ef4c713ccd6a1a61fa" Dec 10 16:30:32 crc kubenswrapper[4755]: E1210 16:30:32.120955 4755 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"ebc038f5d4f73bf64459fdf08a2234010c34596526cf15ef4c713ccd6a1a61fa\": container with ID starting with ebc038f5d4f73bf64459fdf08a2234010c34596526cf15ef4c713ccd6a1a61fa not found: ID does not exist" containerID="ebc038f5d4f73bf64459fdf08a2234010c34596526cf15ef4c713ccd6a1a61fa" Dec 10 16:30:32 crc kubenswrapper[4755]: I1210 16:30:32.120983 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebc038f5d4f73bf64459fdf08a2234010c34596526cf15ef4c713ccd6a1a61fa"} err="failed to get container status \"ebc038f5d4f73bf64459fdf08a2234010c34596526cf15ef4c713ccd6a1a61fa\": rpc error: code = NotFound desc = could not find container \"ebc038f5d4f73bf64459fdf08a2234010c34596526cf15ef4c713ccd6a1a61fa\": container with ID starting with ebc038f5d4f73bf64459fdf08a2234010c34596526cf15ef4c713ccd6a1a61fa not found: ID does not exist" Dec 10 16:30:33 crc kubenswrapper[4755]: I1210 16:30:33.769726 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4dfc11c-10fe-4dd0-8f19-d25bd8e87312" path="/var/lib/kubelet/pods/a4dfc11c-10fe-4dd0-8f19-d25bd8e87312/volumes" Dec 10 16:30:35 crc kubenswrapper[4755]: E1210 16:30:35.760166 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:30:37 crc kubenswrapper[4755]: E1210 16:30:37.761267 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:30:47 crc kubenswrapper[4755]: E1210 16:30:47.760780 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:30:48 crc kubenswrapper[4755]: E1210 16:30:48.759507 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:30:58 crc kubenswrapper[4755]: E1210 16:30:58.759882 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:31:02 crc kubenswrapper[4755]: E1210 16:31:02.827123 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:31:11 crc kubenswrapper[4755]: E1210 16:31:11.760424 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:31:14 crc kubenswrapper[4755]: E1210 16:31:14.760004 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:31:25 crc kubenswrapper[4755]: E1210 16:31:25.760045 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:31:28 crc kubenswrapper[4755]: E1210 16:31:28.760916 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:31:29 crc kubenswrapper[4755]: I1210 16:31:29.070713 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gf7fs"] Dec 10 16:31:29 crc kubenswrapper[4755]: E1210 16:31:29.071256 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4dfc11c-10fe-4dd0-8f19-d25bd8e87312" containerName="extract-utilities" Dec 10 16:31:29 crc kubenswrapper[4755]: I1210 16:31:29.071277 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4dfc11c-10fe-4dd0-8f19-d25bd8e87312" containerName="extract-utilities" Dec 10 16:31:29 crc kubenswrapper[4755]: E1210 16:31:29.071293 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4dfc11c-10fe-4dd0-8f19-d25bd8e87312" containerName="extract-content" Dec 10 16:31:29 crc kubenswrapper[4755]: I1210 16:31:29.071299 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4dfc11c-10fe-4dd0-8f19-d25bd8e87312" containerName="extract-content" Dec 10 16:31:29 crc kubenswrapper[4755]: E1210 16:31:29.071314 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4dfc11c-10fe-4dd0-8f19-d25bd8e87312" containerName="registry-server" Dec 10 16:31:29 crc kubenswrapper[4755]: I1210 16:31:29.071320 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4dfc11c-10fe-4dd0-8f19-d25bd8e87312" containerName="registry-server" Dec 10 16:31:29 crc kubenswrapper[4755]: I1210 16:31:29.071546 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4dfc11c-10fe-4dd0-8f19-d25bd8e87312" containerName="registry-server" Dec 10 16:31:29 crc kubenswrapper[4755]: I1210 16:31:29.073244 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gf7fs" Dec 10 16:31:29 crc kubenswrapper[4755]: I1210 16:31:29.081228 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gf7fs"] Dec 10 16:31:29 crc kubenswrapper[4755]: I1210 16:31:29.274724 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14a64b2c-fbfa-4148-9ecb-3bddb492dfd9-utilities\") pod \"redhat-marketplace-gf7fs\" (UID: \"14a64b2c-fbfa-4148-9ecb-3bddb492dfd9\") " pod="openshift-marketplace/redhat-marketplace-gf7fs" Dec 10 16:31:29 crc kubenswrapper[4755]: I1210 16:31:29.274850 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14a64b2c-fbfa-4148-9ecb-3bddb492dfd9-catalog-content\") pod \"redhat-marketplace-gf7fs\" (UID: \"14a64b2c-fbfa-4148-9ecb-3bddb492dfd9\") " pod="openshift-marketplace/redhat-marketplace-gf7fs" Dec 10 16:31:29 crc kubenswrapper[4755]: I1210 16:31:29.274881 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67fkh\" (UniqueName: \"kubernetes.io/projected/14a64b2c-fbfa-4148-9ecb-3bddb492dfd9-kube-api-access-67fkh\") pod \"redhat-marketplace-gf7fs\" (UID: \"14a64b2c-fbfa-4148-9ecb-3bddb492dfd9\") " pod="openshift-marketplace/redhat-marketplace-gf7fs" Dec 10 16:31:29 crc kubenswrapper[4755]: I1210 16:31:29.377125 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14a64b2c-fbfa-4148-9ecb-3bddb492dfd9-utilities\") pod \"redhat-marketplace-gf7fs\" (UID: \"14a64b2c-fbfa-4148-9ecb-3bddb492dfd9\") " pod="openshift-marketplace/redhat-marketplace-gf7fs" Dec 10 16:31:29 crc kubenswrapper[4755]: I1210 16:31:29.377236 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14a64b2c-fbfa-4148-9ecb-3bddb492dfd9-catalog-content\") pod \"redhat-marketplace-gf7fs\" (UID: \"14a64b2c-fbfa-4148-9ecb-3bddb492dfd9\") " pod="openshift-marketplace/redhat-marketplace-gf7fs" Dec 10 16:31:29 crc kubenswrapper[4755]: I1210 16:31:29.377275 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67fkh\" (UniqueName: \"kubernetes.io/projected/14a64b2c-fbfa-4148-9ecb-3bddb492dfd9-kube-api-access-67fkh\") pod \"redhat-marketplace-gf7fs\" (UID: \"14a64b2c-fbfa-4148-9ecb-3bddb492dfd9\") " pod="openshift-marketplace/redhat-marketplace-gf7fs" Dec 10 16:31:29 crc kubenswrapper[4755]: I1210 16:31:29.378057 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14a64b2c-fbfa-4148-9ecb-3bddb492dfd9-utilities\") pod \"redhat-marketplace-gf7fs\" (UID: \"14a64b2c-fbfa-4148-9ecb-3bddb492dfd9\") " pod="openshift-marketplace/redhat-marketplace-gf7fs" Dec 10 16:31:29 crc kubenswrapper[4755]: I1210 16:31:29.378306 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14a64b2c-fbfa-4148-9ecb-3bddb492dfd9-catalog-content\") pod \"redhat-marketplace-gf7fs\" (UID: \"14a64b2c-fbfa-4148-9ecb-3bddb492dfd9\") " pod="openshift-marketplace/redhat-marketplace-gf7fs" Dec 10 16:31:29 crc kubenswrapper[4755]: I1210 16:31:29.398373 4755 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-67fkh\" (UniqueName: \"kubernetes.io/projected/14a64b2c-fbfa-4148-9ecb-3bddb492dfd9-kube-api-access-67fkh\") pod \"redhat-marketplace-gf7fs\" (UID: \"14a64b2c-fbfa-4148-9ecb-3bddb492dfd9\") " pod="openshift-marketplace/redhat-marketplace-gf7fs" Dec 10 16:31:29 crc kubenswrapper[4755]: I1210 16:31:29.696159 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gf7fs" Dec 10 16:31:30 crc kubenswrapper[4755]: I1210 16:31:30.225054 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gf7fs"] Dec 10 16:31:30 crc kubenswrapper[4755]: I1210 16:31:30.891805 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gf7fs" event={"ID":"14a64b2c-fbfa-4148-9ecb-3bddb492dfd9","Type":"ContainerStarted","Data":"33a1935bf71bd987714efe145cb38d497e64300f57ec895d176da0cf1e8b82af"} Dec 10 16:31:31 crc kubenswrapper[4755]: I1210 16:31:31.906873 4755 generic.go:334] "Generic (PLEG): container finished" podID="14a64b2c-fbfa-4148-9ecb-3bddb492dfd9" containerID="b2048eab7fcea81ff2abf927ecaeb2dbe3042056bce275772af10467f553652f" exitCode=0 Dec 10 16:31:31 crc kubenswrapper[4755]: I1210 16:31:31.906972 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gf7fs" event={"ID":"14a64b2c-fbfa-4148-9ecb-3bddb492dfd9","Type":"ContainerDied","Data":"b2048eab7fcea81ff2abf927ecaeb2dbe3042056bce275772af10467f553652f"} Dec 10 16:31:32 crc kubenswrapper[4755]: I1210 16:31:32.917491 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gf7fs" event={"ID":"14a64b2c-fbfa-4148-9ecb-3bddb492dfd9","Type":"ContainerStarted","Data":"2f1f65391d23ad6cfd085f5a8d7bfa4c89eaf45e1c6e0e403666f8cb6027c7ed"} Dec 10 16:31:33 crc kubenswrapper[4755]: I1210 16:31:33.927664 4755 generic.go:334] "Generic (PLEG): container finished" podID="14a64b2c-fbfa-4148-9ecb-3bddb492dfd9" containerID="2f1f65391d23ad6cfd085f5a8d7bfa4c89eaf45e1c6e0e403666f8cb6027c7ed" exitCode=0 Dec 10 16:31:33 crc kubenswrapper[4755]: I1210 16:31:33.927765 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gf7fs" event={"ID":"14a64b2c-fbfa-4148-9ecb-3bddb492dfd9","Type":"ContainerDied","Data":"2f1f65391d23ad6cfd085f5a8d7bfa4c89eaf45e1c6e0e403666f8cb6027c7ed"} Dec 10 16:31:35 crc kubenswrapper[4755]: I1210 16:31:35.949526 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gf7fs" event={"ID":"14a64b2c-fbfa-4148-9ecb-3bddb492dfd9","Type":"ContainerStarted","Data":"0379c5fb553660558c93a77c476d0bf2be62fd8d3c55fbf695183c84c01aadb8"} Dec 10 16:31:35 crc kubenswrapper[4755]: I1210 16:31:35.975549 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gf7fs" podStartSLOduration=4.181033041 podStartE2EDuration="6.975531073s" podCreationTimestamp="2025-12-10 16:31:29 +0000 UTC" firstStartedPulling="2025-12-10 16:31:31.910455519 +0000 UTC m=+4088.511339141" lastFinishedPulling="2025-12-10 16:31:34.704953541 +0000 UTC m=+4091.305837173" observedRunningTime="2025-12-10 16:31:35.967076011 +0000 UTC m=+4092.567959643" watchObservedRunningTime="2025-12-10 16:31:35.975531073 +0000 UTC m=+4092.576414705" Dec 10 16:31:36 crc kubenswrapper[4755]: E1210 16:31:36.760560 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:31:39 crc kubenswrapper[4755]: I1210 16:31:39.696879 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gf7fs" Dec 10 16:31:39 crc kubenswrapper[4755]: I1210 16:31:39.697495 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gf7fs" Dec 10 16:31:39 crc kubenswrapper[4755]: I1210 16:31:39.754024 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gf7fs" Dec 10 16:31:41 crc kubenswrapper[4755]: E1210 16:31:41.760782 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:31:47 crc kubenswrapper[4755]: E1210 16:31:47.760730 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:31:49 crc kubenswrapper[4755]: I1210 16:31:49.742823 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gf7fs" Dec 10 16:31:49 crc kubenswrapper[4755]: I1210 16:31:49.797750 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gf7fs"] Dec 10 16:31:50 crc kubenswrapper[4755]: I1210 16:31:50.086999 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gf7fs" podUID="14a64b2c-fbfa-4148-9ecb-3bddb492dfd9" containerName="registry-server" containerID="cri-o://0379c5fb553660558c93a77c476d0bf2be62fd8d3c55fbf695183c84c01aadb8" gracePeriod=2 Dec 10 16:31:51 crc kubenswrapper[4755]: I1210 16:31:51.106305 4755 generic.go:334] "Generic (PLEG): container finished" podID="14a64b2c-fbfa-4148-9ecb-3bddb492dfd9" containerID="0379c5fb553660558c93a77c476d0bf2be62fd8d3c55fbf695183c84c01aadb8" exitCode=0 Dec 10 16:31:51 crc kubenswrapper[4755]: I1210 16:31:51.106571 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gf7fs" event={"ID":"14a64b2c-fbfa-4148-9ecb-3bddb492dfd9","Type":"ContainerDied","Data":"0379c5fb553660558c93a77c476d0bf2be62fd8d3c55fbf695183c84c01aadb8"} Dec 10 16:31:51 crc kubenswrapper[4755]: I1210 16:31:51.326144 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gf7fs" Dec 10 16:31:51 crc kubenswrapper[4755]: I1210 16:31:51.475643 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14a64b2c-fbfa-4148-9ecb-3bddb492dfd9-catalog-content\") pod \"14a64b2c-fbfa-4148-9ecb-3bddb492dfd9\" (UID: \"14a64b2c-fbfa-4148-9ecb-3bddb492dfd9\") " Dec 10 16:31:51 crc kubenswrapper[4755]: I1210 16:31:51.475734 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67fkh\" (UniqueName: \"kubernetes.io/projected/14a64b2c-fbfa-4148-9ecb-3bddb492dfd9-kube-api-access-67fkh\") pod \"14a64b2c-fbfa-4148-9ecb-3bddb492dfd9\" (UID: \"14a64b2c-fbfa-4148-9ecb-3bddb492dfd9\") " Dec 10 16:31:51 crc kubenswrapper[4755]: I1210 16:31:51.475771 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14a64b2c-fbfa-4148-9ecb-3bddb492dfd9-utilities\") pod \"14a64b2c-fbfa-4148-9ecb-3bddb492dfd9\" (UID: \"14a64b2c-fbfa-4148-9ecb-3bddb492dfd9\") " Dec 10 16:31:51 crc kubenswrapper[4755]: I1210 16:31:51.476986 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14a64b2c-fbfa-4148-9ecb-3bddb492dfd9-utilities" (OuterVolumeSpecName: "utilities") pod "14a64b2c-fbfa-4148-9ecb-3bddb492dfd9" (UID: "14a64b2c-fbfa-4148-9ecb-3bddb492dfd9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 16:31:51 crc kubenswrapper[4755]: I1210 16:31:51.483772 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14a64b2c-fbfa-4148-9ecb-3bddb492dfd9-kube-api-access-67fkh" (OuterVolumeSpecName: "kube-api-access-67fkh") pod "14a64b2c-fbfa-4148-9ecb-3bddb492dfd9" (UID: "14a64b2c-fbfa-4148-9ecb-3bddb492dfd9"). InnerVolumeSpecName "kube-api-access-67fkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 16:31:51 crc kubenswrapper[4755]: I1210 16:31:51.496123 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14a64b2c-fbfa-4148-9ecb-3bddb492dfd9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "14a64b2c-fbfa-4148-9ecb-3bddb492dfd9" (UID: "14a64b2c-fbfa-4148-9ecb-3bddb492dfd9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 16:31:51 crc kubenswrapper[4755]: I1210 16:31:51.579057 4755 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14a64b2c-fbfa-4148-9ecb-3bddb492dfd9-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 16:31:51 crc kubenswrapper[4755]: I1210 16:31:51.579084 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67fkh\" (UniqueName: \"kubernetes.io/projected/14a64b2c-fbfa-4148-9ecb-3bddb492dfd9-kube-api-access-67fkh\") on node \"crc\" DevicePath \"\"" Dec 10 16:31:51 crc kubenswrapper[4755]: I1210 16:31:51.579109 4755 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14a64b2c-fbfa-4148-9ecb-3bddb492dfd9-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 16:31:52 crc kubenswrapper[4755]: I1210 16:31:52.118453 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gf7fs" event={"ID":"14a64b2c-fbfa-4148-9ecb-3bddb492dfd9","Type":"ContainerDied","Data":"33a1935bf71bd987714efe145cb38d497e64300f57ec895d176da0cf1e8b82af"} Dec 10 16:31:52 crc kubenswrapper[4755]: I1210 16:31:52.118536 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gf7fs" Dec 10 16:31:52 crc kubenswrapper[4755]: I1210 16:31:52.118856 4755 scope.go:117] "RemoveContainer" containerID="0379c5fb553660558c93a77c476d0bf2be62fd8d3c55fbf695183c84c01aadb8" Dec 10 16:31:52 crc kubenswrapper[4755]: I1210 16:31:52.145982 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gf7fs"] Dec 10 16:31:52 crc kubenswrapper[4755]: I1210 16:31:52.147570 4755 scope.go:117] "RemoveContainer" containerID="2f1f65391d23ad6cfd085f5a8d7bfa4c89eaf45e1c6e0e403666f8cb6027c7ed" Dec 10 16:31:52 crc kubenswrapper[4755]: I1210 16:31:52.159524 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gf7fs"] Dec 10 16:31:52 crc kubenswrapper[4755]: I1210 16:31:52.173234 4755 scope.go:117] "RemoveContainer" containerID="b2048eab7fcea81ff2abf927ecaeb2dbe3042056bce275772af10467f553652f" Dec 10 16:31:52 crc kubenswrapper[4755]: E1210 16:31:52.759601 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:31:53 crc kubenswrapper[4755]: I1210 16:31:53.780137 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14a64b2c-fbfa-4148-9ecb-3bddb492dfd9" path="/var/lib/kubelet/pods/14a64b2c-fbfa-4148-9ecb-3bddb492dfd9/volumes" Dec 10 16:31:54 crc kubenswrapper[4755]: I1210 16:31:54.646789 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xmsrv"] Dec 10 16:31:54 crc kubenswrapper[4755]: E1210 16:31:54.647691 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14a64b2c-fbfa-4148-9ecb-3bddb492dfd9" containerName="extract-content" Dec 10 16:31:54 crc kubenswrapper[4755]: I1210 16:31:54.647715 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="14a64b2c-fbfa-4148-9ecb-3bddb492dfd9" containerName="extract-content" Dec 10 16:31:54 crc kubenswrapper[4755]: E1210 16:31:54.647734 
4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14a64b2c-fbfa-4148-9ecb-3bddb492dfd9" containerName="registry-server" Dec 10 16:31:54 crc kubenswrapper[4755]: I1210 16:31:54.647742 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="14a64b2c-fbfa-4148-9ecb-3bddb492dfd9" containerName="registry-server" Dec 10 16:31:54 crc kubenswrapper[4755]: E1210 16:31:54.647752 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14a64b2c-fbfa-4148-9ecb-3bddb492dfd9" containerName="extract-utilities" Dec 10 16:31:54 crc kubenswrapper[4755]: I1210 16:31:54.647760 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="14a64b2c-fbfa-4148-9ecb-3bddb492dfd9" containerName="extract-utilities" Dec 10 16:31:54 crc kubenswrapper[4755]: I1210 16:31:54.648024 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="14a64b2c-fbfa-4148-9ecb-3bddb492dfd9" containerName="registry-server" Dec 10 16:31:54 crc kubenswrapper[4755]: I1210 16:31:54.650080 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xmsrv" Dec 10 16:31:54 crc kubenswrapper[4755]: I1210 16:31:54.657435 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xmsrv"] Dec 10 16:31:54 crc kubenswrapper[4755]: I1210 16:31:54.760844 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a750d616-bbde-40c2-93f9-520346691c75-catalog-content\") pod \"certified-operators-xmsrv\" (UID: \"a750d616-bbde-40c2-93f9-520346691c75\") " pod="openshift-marketplace/certified-operators-xmsrv" Dec 10 16:31:54 crc kubenswrapper[4755]: I1210 16:31:54.760946 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzhw6\" (UniqueName: \"kubernetes.io/projected/a750d616-bbde-40c2-93f9-520346691c75-kube-api-access-tzhw6\") pod \"certified-operators-xmsrv\" (UID: \"a750d616-bbde-40c2-93f9-520346691c75\") " pod="openshift-marketplace/certified-operators-xmsrv" Dec 10 16:31:54 crc kubenswrapper[4755]: I1210 16:31:54.761703 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a750d616-bbde-40c2-93f9-520346691c75-utilities\") pod \"certified-operators-xmsrv\" (UID: \"a750d616-bbde-40c2-93f9-520346691c75\") " pod="openshift-marketplace/certified-operators-xmsrv" Dec 10 16:31:54 crc kubenswrapper[4755]: I1210 16:31:54.863876 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a750d616-bbde-40c2-93f9-520346691c75-utilities\") pod \"certified-operators-xmsrv\" (UID: \"a750d616-bbde-40c2-93f9-520346691c75\") " pod="openshift-marketplace/certified-operators-xmsrv" Dec 10 16:31:54 crc kubenswrapper[4755]: I1210 16:31:54.864053 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a750d616-bbde-40c2-93f9-520346691c75-catalog-content\") pod \"certified-operators-xmsrv\" (UID: \"a750d616-bbde-40c2-93f9-520346691c75\") " pod="openshift-marketplace/certified-operators-xmsrv" Dec 10 16:31:54 crc kubenswrapper[4755]: I1210 16:31:54.864144 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzhw6\" (UniqueName: 
\"kubernetes.io/projected/a750d616-bbde-40c2-93f9-520346691c75-kube-api-access-tzhw6\") pod \"certified-operators-xmsrv\" (UID: \"a750d616-bbde-40c2-93f9-520346691c75\") " pod="openshift-marketplace/certified-operators-xmsrv" Dec 10 16:31:54 crc kubenswrapper[4755]: I1210 16:31:54.864597 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a750d616-bbde-40c2-93f9-520346691c75-utilities\") pod \"certified-operators-xmsrv\" (UID: \"a750d616-bbde-40c2-93f9-520346691c75\") " pod="openshift-marketplace/certified-operators-xmsrv" Dec 10 16:31:54 crc kubenswrapper[4755]: I1210 16:31:54.864609 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a750d616-bbde-40c2-93f9-520346691c75-catalog-content\") pod \"certified-operators-xmsrv\" (UID: \"a750d616-bbde-40c2-93f9-520346691c75\") " pod="openshift-marketplace/certified-operators-xmsrv" Dec 10 16:31:54 crc kubenswrapper[4755]: I1210 16:31:54.896827 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzhw6\" (UniqueName: \"kubernetes.io/projected/a750d616-bbde-40c2-93f9-520346691c75-kube-api-access-tzhw6\") pod \"certified-operators-xmsrv\" (UID: \"a750d616-bbde-40c2-93f9-520346691c75\") " pod="openshift-marketplace/certified-operators-xmsrv" Dec 10 16:31:54 crc kubenswrapper[4755]: I1210 16:31:54.973678 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xmsrv" Dec 10 16:31:55 crc kubenswrapper[4755]: I1210 16:31:55.513511 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xmsrv"] Dec 10 16:31:55 crc kubenswrapper[4755]: W1210 16:31:55.518331 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda750d616_bbde_40c2_93f9_520346691c75.slice/crio-a0b2105255b8d469c1b7fe8cdaa786b76cd8e6ac1f0c425f4548be2de28d6c33 WatchSource:0}: Error finding container a0b2105255b8d469c1b7fe8cdaa786b76cd8e6ac1f0c425f4548be2de28d6c33: Status 404 returned error can't find the container with id a0b2105255b8d469c1b7fe8cdaa786b76cd8e6ac1f0c425f4548be2de28d6c33 Dec 10 16:31:56 crc kubenswrapper[4755]: I1210 16:31:56.167090 4755 generic.go:334] "Generic (PLEG): container finished" podID="a750d616-bbde-40c2-93f9-520346691c75" containerID="6ddf29fa596e588d8230abe45f35864e73f7b622f0298f6d226ac6396a53bd35" exitCode=0 Dec 10 16:31:56 crc kubenswrapper[4755]: I1210 16:31:56.167133 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xmsrv" event={"ID":"a750d616-bbde-40c2-93f9-520346691c75","Type":"ContainerDied","Data":"6ddf29fa596e588d8230abe45f35864e73f7b622f0298f6d226ac6396a53bd35"} Dec 10 16:31:56 crc kubenswrapper[4755]: I1210 16:31:56.167159 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xmsrv" event={"ID":"a750d616-bbde-40c2-93f9-520346691c75","Type":"ContainerStarted","Data":"a0b2105255b8d469c1b7fe8cdaa786b76cd8e6ac1f0c425f4548be2de28d6c33"} Dec 10 16:32:00 crc kubenswrapper[4755]: I1210 16:32:00.204983 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xmsrv" event={"ID":"a750d616-bbde-40c2-93f9-520346691c75","Type":"ContainerStarted","Data":"5ce16760bc506ae4ffb10598c412248ccf36415c4626b6a1e3badb70135ccc0e"} Dec 10 16:32:01 crc 
kubenswrapper[4755]: E1210 16:32:01.759874 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:32:03 crc kubenswrapper[4755]: I1210 16:32:03.245784 4755 generic.go:334] "Generic (PLEG): container finished" podID="a750d616-bbde-40c2-93f9-520346691c75" containerID="5ce16760bc506ae4ffb10598c412248ccf36415c4626b6a1e3badb70135ccc0e" exitCode=0 Dec 10 16:32:03 crc kubenswrapper[4755]: I1210 16:32:03.245867 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xmsrv" event={"ID":"a750d616-bbde-40c2-93f9-520346691c75","Type":"ContainerDied","Data":"5ce16760bc506ae4ffb10598c412248ccf36415c4626b6a1e3badb70135ccc0e"} Dec 10 16:32:04 crc kubenswrapper[4755]: E1210 16:32:04.758972 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:32:05 crc kubenswrapper[4755]: I1210 16:32:05.265644 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xmsrv" event={"ID":"a750d616-bbde-40c2-93f9-520346691c75","Type":"ContainerStarted","Data":"45cee2ef9e469efe83de22d67b4b944621bf4b600e838501942e2b90f4ec58b1"} Dec 10 16:32:10 crc kubenswrapper[4755]: I1210 16:32:10.359111 4755 patch_prober.go:28] interesting pod/machine-config-daemon-ggt8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 16:32:10 crc kubenswrapper[4755]: I1210 16:32:10.359734 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 16:32:14 crc kubenswrapper[4755]: E1210 16:32:14.758892 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:32:14 crc kubenswrapper[4755]: I1210 16:32:14.974159 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xmsrv" Dec 10 16:32:14 crc kubenswrapper[4755]: I1210 16:32:14.974234 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xmsrv" Dec 10 16:32:15 crc kubenswrapper[4755]: I1210 16:32:15.023996 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xmsrv" Dec 10 16:32:15 crc kubenswrapper[4755]: I1210 16:32:15.046829 4755 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-marketplace/certified-operators-xmsrv" podStartSLOduration=13.101269176 podStartE2EDuration="21.046808757s" podCreationTimestamp="2025-12-10 16:31:54 +0000 UTC" firstStartedPulling="2025-12-10 16:31:56.169653583 +0000 UTC m=+4112.770537215" lastFinishedPulling="2025-12-10 16:32:04.115193164 +0000 UTC m=+4120.716076796" observedRunningTime="2025-12-10 16:32:05.285747554 +0000 UTC m=+4121.886631206" watchObservedRunningTime="2025-12-10 16:32:15.046808757 +0000 UTC m=+4131.647692389" Dec 10 16:32:15 crc kubenswrapper[4755]: I1210 16:32:15.415093 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xmsrv" Dec 10 16:32:15 crc kubenswrapper[4755]: I1210 16:32:15.474525 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xmsrv"] Dec 10 16:32:17 crc kubenswrapper[4755]: I1210 16:32:17.394054 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xmsrv" podUID="a750d616-bbde-40c2-93f9-520346691c75" containerName="registry-server" containerID="cri-o://45cee2ef9e469efe83de22d67b4b944621bf4b600e838501942e2b90f4ec58b1" gracePeriod=2 Dec 10 16:32:18 crc kubenswrapper[4755]: I1210 16:32:18.404110 4755 generic.go:334] "Generic (PLEG): container finished" podID="a750d616-bbde-40c2-93f9-520346691c75" containerID="45cee2ef9e469efe83de22d67b4b944621bf4b600e838501942e2b90f4ec58b1" exitCode=0 Dec 10 16:32:18 crc kubenswrapper[4755]: I1210 16:32:18.404317 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xmsrv" event={"ID":"a750d616-bbde-40c2-93f9-520346691c75","Type":"ContainerDied","Data":"45cee2ef9e469efe83de22d67b4b944621bf4b600e838501942e2b90f4ec58b1"} Dec 10 16:32:18 crc kubenswrapper[4755]: I1210 16:32:18.404534 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xmsrv" event={"ID":"a750d616-bbde-40c2-93f9-520346691c75","Type":"ContainerDied","Data":"a0b2105255b8d469c1b7fe8cdaa786b76cd8e6ac1f0c425f4548be2de28d6c33"} Dec 10 16:32:18 crc kubenswrapper[4755]: I1210 16:32:18.404549 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0b2105255b8d469c1b7fe8cdaa786b76cd8e6ac1f0c425f4548be2de28d6c33" Dec 10 16:32:18 crc kubenswrapper[4755]: I1210 16:32:18.479961 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xmsrv" Dec 10 16:32:18 crc kubenswrapper[4755]: I1210 16:32:18.604947 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a750d616-bbde-40c2-93f9-520346691c75-utilities\") pod \"a750d616-bbde-40c2-93f9-520346691c75\" (UID: \"a750d616-bbde-40c2-93f9-520346691c75\") " Dec 10 16:32:18 crc kubenswrapper[4755]: I1210 16:32:18.605004 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a750d616-bbde-40c2-93f9-520346691c75-catalog-content\") pod \"a750d616-bbde-40c2-93f9-520346691c75\" (UID: \"a750d616-bbde-40c2-93f9-520346691c75\") " Dec 10 16:32:18 crc kubenswrapper[4755]: I1210 16:32:18.605136 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzhw6\" (UniqueName: \"kubernetes.io/projected/a750d616-bbde-40c2-93f9-520346691c75-kube-api-access-tzhw6\") pod \"a750d616-bbde-40c2-93f9-520346691c75\" (UID: \"a750d616-bbde-40c2-93f9-520346691c75\") " Dec 10 16:32:18 crc kubenswrapper[4755]: I1210 16:32:18.605726 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a750d616-bbde-40c2-93f9-520346691c75-utilities" (OuterVolumeSpecName: "utilities") pod "a750d616-bbde-40c2-93f9-520346691c75" (UID: "a750d616-bbde-40c2-93f9-520346691c75"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 16:32:18 crc kubenswrapper[4755]: I1210 16:32:18.611561 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a750d616-bbde-40c2-93f9-520346691c75-kube-api-access-tzhw6" (OuterVolumeSpecName: "kube-api-access-tzhw6") pod "a750d616-bbde-40c2-93f9-520346691c75" (UID: "a750d616-bbde-40c2-93f9-520346691c75"). InnerVolumeSpecName "kube-api-access-tzhw6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 16:32:18 crc kubenswrapper[4755]: I1210 16:32:18.654921 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a750d616-bbde-40c2-93f9-520346691c75-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a750d616-bbde-40c2-93f9-520346691c75" (UID: "a750d616-bbde-40c2-93f9-520346691c75"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 16:32:18 crc kubenswrapper[4755]: I1210 16:32:18.707906 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzhw6\" (UniqueName: \"kubernetes.io/projected/a750d616-bbde-40c2-93f9-520346691c75-kube-api-access-tzhw6\") on node \"crc\" DevicePath \"\"" Dec 10 16:32:18 crc kubenswrapper[4755]: I1210 16:32:18.707949 4755 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a750d616-bbde-40c2-93f9-520346691c75-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 16:32:18 crc kubenswrapper[4755]: I1210 16:32:18.707959 4755 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a750d616-bbde-40c2-93f9-520346691c75-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 16:32:19 crc kubenswrapper[4755]: I1210 16:32:19.415530 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xmsrv" Dec 10 16:32:19 crc kubenswrapper[4755]: I1210 16:32:19.448699 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xmsrv"] Dec 10 16:32:19 crc kubenswrapper[4755]: I1210 16:32:19.466818 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xmsrv"] Dec 10 16:32:19 crc kubenswrapper[4755]: E1210 16:32:19.761072 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:32:19 crc kubenswrapper[4755]: I1210 16:32:19.772369 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a750d616-bbde-40c2-93f9-520346691c75" path="/var/lib/kubelet/pods/a750d616-bbde-40c2-93f9-520346691c75/volumes" Dec 10 16:32:27 crc kubenswrapper[4755]: E1210 16:32:27.759967 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:32:31 crc kubenswrapper[4755]: E1210 16:32:31.760250 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:32:40 crc kubenswrapper[4755]: I1210 16:32:40.358766 4755 patch_prober.go:28] interesting pod/machine-config-daemon-ggt8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 16:32:40 crc kubenswrapper[4755]: I1210 16:32:40.359384 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 16:32:40 crc kubenswrapper[4755]: E1210 16:32:40.760637 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:32:42 crc kubenswrapper[4755]: E1210 16:32:42.760781 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:32:52 crc kubenswrapper[4755]: E1210 16:32:52.761859 4755 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:32:55 crc kubenswrapper[4755]: E1210 16:32:55.763972 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:33:06 crc kubenswrapper[4755]: E1210 16:33:06.761125 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:33:10 crc kubenswrapper[4755]: I1210 16:33:10.358850 4755 patch_prober.go:28] interesting pod/machine-config-daemon-ggt8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 16:33:10 crc kubenswrapper[4755]: I1210 16:33:10.359377 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 16:33:10 crc kubenswrapper[4755]: I1210 16:33:10.359417 4755 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" Dec 10 16:33:10 crc kubenswrapper[4755]: I1210 16:33:10.360224 4755 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2a9a1a5649a7241f53a4cb79d7a0bc610b813c7bcddded9cc76a86dcecf742ad"} pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 10 16:33:10 crc kubenswrapper[4755]: I1210 16:33:10.360299 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" containerName="machine-config-daemon" containerID="cri-o://2a9a1a5649a7241f53a4cb79d7a0bc610b813c7bcddded9cc76a86dcecf742ad" gracePeriod=600 Dec 10 16:33:10 crc kubenswrapper[4755]: E1210 16:33:10.760170 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:33:10 crc kubenswrapper[4755]: I1210 16:33:10.948108 4755 generic.go:334] "Generic (PLEG): container finished" podID="b132a8b9-1c99-414d-8773-229bf36b305d" 
containerID="2a9a1a5649a7241f53a4cb79d7a0bc610b813c7bcddded9cc76a86dcecf742ad" exitCode=0 Dec 10 16:33:10 crc kubenswrapper[4755]: I1210 16:33:10.948152 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" event={"ID":"b132a8b9-1c99-414d-8773-229bf36b305d","Type":"ContainerDied","Data":"2a9a1a5649a7241f53a4cb79d7a0bc610b813c7bcddded9cc76a86dcecf742ad"} Dec 10 16:33:10 crc kubenswrapper[4755]: I1210 16:33:10.948447 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" event={"ID":"b132a8b9-1c99-414d-8773-229bf36b305d","Type":"ContainerStarted","Data":"a0141776b31c2a932cec3bfb5dc79420825b6de67ef3162cf9228f8d7fa8df5a"} Dec 10 16:33:10 crc kubenswrapper[4755]: I1210 16:33:10.948525 4755 scope.go:117] "RemoveContainer" containerID="3a32840bbc4b33bf990552b48c16e0d53e66710a5880db985795b1072a3ba36c" Dec 10 16:33:21 crc kubenswrapper[4755]: E1210 16:33:21.760027 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:33:24 crc kubenswrapper[4755]: E1210 16:33:24.760046 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:33:33 crc kubenswrapper[4755]: E1210 16:33:33.765928 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:33:37 crc kubenswrapper[4755]: E1210 16:33:37.760247 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:33:45 crc kubenswrapper[4755]: E1210 16:33:45.759113 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:33:51 crc kubenswrapper[4755]: E1210 16:33:51.760740 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:33:59 crc kubenswrapper[4755]: E1210 16:33:59.760603 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:34:04 crc kubenswrapper[4755]: E1210 16:34:04.760207 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:34:10 crc kubenswrapper[4755]: E1210 16:34:10.759408 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:34:16 crc kubenswrapper[4755]: E1210 16:34:16.759923 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:34:22 crc kubenswrapper[4755]: E1210 16:34:22.759915 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:34:29 crc kubenswrapper[4755]: E1210 16:34:29.760053 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:34:33 crc kubenswrapper[4755]: E1210 16:34:33.767155 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:34:43 crc kubenswrapper[4755]: E1210 16:34:43.766747 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:34:46 crc kubenswrapper[4755]: E1210 16:34:46.876530 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has 
expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 10 16:34:46 crc kubenswrapper[4755]: E1210 16:34:46.876911 4755 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 10 16:34:46 crc kubenswrapper[4755]: E1210 16:34:46.877107 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mz4t5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-jfc28_openstack(998863b6-4f48-4c8b-8011-a40377686b99): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Dec 10 16:34:46 crc kubenswrapper[4755]: E1210 16:34:46.878356 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:34:55 crc kubenswrapper[4755]: E1210 16:34:55.760277 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:35:00 crc kubenswrapper[4755]: E1210 16:35:00.760984 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:35:09 crc kubenswrapper[4755]: E1210 16:35:09.760405 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:35:10 crc kubenswrapper[4755]: I1210 16:35:10.359730 4755 patch_prober.go:28] interesting pod/machine-config-daemon-ggt8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 16:35:10 crc kubenswrapper[4755]: I1210 16:35:10.359791 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 16:35:13 crc kubenswrapper[4755]: E1210 16:35:13.769864 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:35:24 crc kubenswrapper[4755]: I1210 16:35:24.762437 4755 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 10 16:35:24 crc kubenswrapper[4755]: E1210 16:35:24.872190 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in 
quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 10 16:35:24 crc kubenswrapper[4755]: E1210 16:35:24.872300 4755 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 10 16:35:24 crc kubenswrapper[4755]: E1210 16:35:24.872550 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5d4h5b7hfbh5ddh688h9ch55bh7chf6h5ddh68ch94h69h5c5h596h59bh569hfchc4h676hcbh64dhdbh57fh75h5c9h98h59ch679h566h77h9cq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hw9gj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(6d104bea-ecdc-4fe1-9861-fb1a19fce845): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in 
quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 10 16:35:24 crc kubenswrapper[4755]: E1210 16:35:24.873795 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:35:28 crc kubenswrapper[4755]: E1210 16:35:28.760523 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:35:39 crc kubenswrapper[4755]: E1210 16:35:39.759831 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:35:40 crc kubenswrapper[4755]: I1210 16:35:40.360057 4755 patch_prober.go:28] interesting pod/machine-config-daemon-ggt8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 16:35:40 crc kubenswrapper[4755]: I1210 16:35:40.360140 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 16:35:40 crc kubenswrapper[4755]: E1210 16:35:40.759135 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:35:51 crc kubenswrapper[4755]: E1210 16:35:51.759810 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:35:51 crc kubenswrapper[4755]: E1210 16:35:51.759832 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:36:02 crc kubenswrapper[4755]: E1210 16:36:02.760203 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:36:06 crc kubenswrapper[4755]: E1210 16:36:06.759890 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:36:10 crc kubenswrapper[4755]: I1210 16:36:10.358904 4755 patch_prober.go:28] interesting pod/machine-config-daemon-ggt8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 16:36:10 crc kubenswrapper[4755]: I1210 16:36:10.359203 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 16:36:10 crc kubenswrapper[4755]: I1210 16:36:10.359245 4755 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" Dec 10 16:36:10 crc kubenswrapper[4755]: I1210 16:36:10.360079 4755 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a0141776b31c2a932cec3bfb5dc79420825b6de67ef3162cf9228f8d7fa8df5a"} pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 10 16:36:10 crc kubenswrapper[4755]: I1210 16:36:10.360176 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" containerName="machine-config-daemon" containerID="cri-o://a0141776b31c2a932cec3bfb5dc79420825b6de67ef3162cf9228f8d7fa8df5a" gracePeriod=600 Dec 10 16:36:10 crc kubenswrapper[4755]: E1210 16:36:10.483045 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:36:10 crc kubenswrapper[4755]: I1210 16:36:10.817984 4755 generic.go:334] "Generic (PLEG): container finished" podID="b132a8b9-1c99-414d-8773-229bf36b305d" containerID="a0141776b31c2a932cec3bfb5dc79420825b6de67ef3162cf9228f8d7fa8df5a" exitCode=0 Dec 10 16:36:10 crc kubenswrapper[4755]: I1210 16:36:10.818052 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" event={"ID":"b132a8b9-1c99-414d-8773-229bf36b305d","Type":"ContainerDied","Data":"a0141776b31c2a932cec3bfb5dc79420825b6de67ef3162cf9228f8d7fa8df5a"} Dec 10 16:36:10 crc kubenswrapper[4755]: I1210 16:36:10.818169 4755 scope.go:117] "RemoveContainer" containerID="2a9a1a5649a7241f53a4cb79d7a0bc610b813c7bcddded9cc76a86dcecf742ad" Dec 10 16:36:10 crc kubenswrapper[4755]: I1210 16:36:10.819101 4755 scope.go:117] "RemoveContainer" containerID="a0141776b31c2a932cec3bfb5dc79420825b6de67ef3162cf9228f8d7fa8df5a" Dec 10 16:36:10 crc kubenswrapper[4755]: E1210 16:36:10.819423 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:36:13 crc kubenswrapper[4755]: E1210 16:36:13.769402 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:36:16 crc kubenswrapper[4755]: I1210 16:36:16.877301 4755 generic.go:334] "Generic (PLEG): container finished" podID="48fe9944-e282-45c9-b9b2-6716af358188" containerID="3115dafb5df3819821b3e9e18a98268295e89c720cd1771241373d6bc211363a" exitCode=2 Dec 10 16:36:16 crc kubenswrapper[4755]: I1210 16:36:16.877424 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pq5sx" event={"ID":"48fe9944-e282-45c9-b9b2-6716af358188","Type":"ContainerDied","Data":"3115dafb5df3819821b3e9e18a98268295e89c720cd1771241373d6bc211363a"} Dec 10 16:36:17 crc kubenswrapper[4755]: E1210 16:36:17.759432 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:36:18 crc kubenswrapper[4755]: I1210 16:36:18.897825 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pq5sx" event={"ID":"48fe9944-e282-45c9-b9b2-6716af358188","Type":"ContainerDied","Data":"d481b0c8c9d5cb766b918c910c2766511d4b67fad078f3be518bfc03e340fa11"} Dec 10 16:36:18 crc kubenswrapper[4755]: I1210 16:36:18.898181 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d481b0c8c9d5cb766b918c910c2766511d4b67fad078f3be518bfc03e340fa11" Dec 10 16:36:18 crc kubenswrapper[4755]: I1210 16:36:18.916935 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pq5sx" Dec 10 16:36:19 crc kubenswrapper[4755]: I1210 16:36:19.014043 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwhm8\" (UniqueName: \"kubernetes.io/projected/48fe9944-e282-45c9-b9b2-6716af358188-kube-api-access-nwhm8\") pod \"48fe9944-e282-45c9-b9b2-6716af358188\" (UID: \"48fe9944-e282-45c9-b9b2-6716af358188\") " Dec 10 16:36:19 crc kubenswrapper[4755]: I1210 16:36:19.014143 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/48fe9944-e282-45c9-b9b2-6716af358188-inventory\") pod \"48fe9944-e282-45c9-b9b2-6716af358188\" (UID: \"48fe9944-e282-45c9-b9b2-6716af358188\") " Dec 10 16:36:19 crc kubenswrapper[4755]: I1210 16:36:19.014207 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/48fe9944-e282-45c9-b9b2-6716af358188-ssh-key\") pod \"48fe9944-e282-45c9-b9b2-6716af358188\" (UID: \"48fe9944-e282-45c9-b9b2-6716af358188\") " Dec 10 16:36:19 crc kubenswrapper[4755]: I1210 16:36:19.019292 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48fe9944-e282-45c9-b9b2-6716af358188-kube-api-access-nwhm8" (OuterVolumeSpecName: "kube-api-access-nwhm8") pod "48fe9944-e282-45c9-b9b2-6716af358188" (UID: "48fe9944-e282-45c9-b9b2-6716af358188"). InnerVolumeSpecName "kube-api-access-nwhm8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 16:36:19 crc kubenswrapper[4755]: I1210 16:36:19.043144 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48fe9944-e282-45c9-b9b2-6716af358188-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "48fe9944-e282-45c9-b9b2-6716af358188" (UID: "48fe9944-e282-45c9-b9b2-6716af358188"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 16:36:19 crc kubenswrapper[4755]: I1210 16:36:19.043629 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48fe9944-e282-45c9-b9b2-6716af358188-inventory" (OuterVolumeSpecName: "inventory") pod "48fe9944-e282-45c9-b9b2-6716af358188" (UID: "48fe9944-e282-45c9-b9b2-6716af358188"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 16:36:19 crc kubenswrapper[4755]: I1210 16:36:19.115805 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwhm8\" (UniqueName: \"kubernetes.io/projected/48fe9944-e282-45c9-b9b2-6716af358188-kube-api-access-nwhm8\") on node \"crc\" DevicePath \"\"" Dec 10 16:36:19 crc kubenswrapper[4755]: I1210 16:36:19.115837 4755 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/48fe9944-e282-45c9-b9b2-6716af358188-inventory\") on node \"crc\" DevicePath \"\"" Dec 10 16:36:19 crc kubenswrapper[4755]: I1210 16:36:19.115846 4755 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/48fe9944-e282-45c9-b9b2-6716af358188-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 10 16:36:19 crc kubenswrapper[4755]: I1210 16:36:19.906395 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pq5sx" Dec 10 16:36:24 crc kubenswrapper[4755]: E1210 16:36:24.761517 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:36:25 crc kubenswrapper[4755]: I1210 16:36:25.758323 4755 scope.go:117] "RemoveContainer" containerID="a0141776b31c2a932cec3bfb5dc79420825b6de67ef3162cf9228f8d7fa8df5a" Dec 10 16:36:25 crc kubenswrapper[4755]: E1210 16:36:25.758698 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:36:28 crc kubenswrapper[4755]: E1210 16:36:28.759664 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:36:39 crc kubenswrapper[4755]: I1210 16:36:39.758033 4755 scope.go:117] "RemoveContainer" containerID="a0141776b31c2a932cec3bfb5dc79420825b6de67ef3162cf9228f8d7fa8df5a" Dec 10 16:36:39 crc kubenswrapper[4755]: E1210 16:36:39.759077 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:36:39 crc kubenswrapper[4755]: E1210 16:36:39.762650 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:36:40 crc kubenswrapper[4755]: E1210 16:36:40.759963 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:36:51 crc kubenswrapper[4755]: E1210 16:36:51.759596 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:36:52 crc kubenswrapper[4755]: E1210 
16:36:52.760498 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:36:54 crc kubenswrapper[4755]: I1210 16:36:54.758355 4755 scope.go:117] "RemoveContainer" containerID="a0141776b31c2a932cec3bfb5dc79420825b6de67ef3162cf9228f8d7fa8df5a" Dec 10 16:36:54 crc kubenswrapper[4755]: E1210 16:36:54.758977 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:37:04 crc kubenswrapper[4755]: E1210 16:37:04.759516 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:37:06 crc kubenswrapper[4755]: I1210 16:37:06.758409 4755 scope.go:117] "RemoveContainer" containerID="a0141776b31c2a932cec3bfb5dc79420825b6de67ef3162cf9228f8d7fa8df5a" Dec 10 16:37:06 crc kubenswrapper[4755]: E1210 16:37:06.758942 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:37:06 crc kubenswrapper[4755]: E1210 16:37:06.760626 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:37:15 crc kubenswrapper[4755]: E1210 16:37:15.759769 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:37:19 crc kubenswrapper[4755]: I1210 16:37:19.758411 4755 scope.go:117] "RemoveContainer" containerID="a0141776b31c2a932cec3bfb5dc79420825b6de67ef3162cf9228f8d7fa8df5a" Dec 10 16:37:19 crc kubenswrapper[4755]: E1210 16:37:19.759383 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:37:21 crc kubenswrapper[4755]: E1210 16:37:21.763751 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:37:27 crc kubenswrapper[4755]: E1210 16:37:27.760687 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:37:32 crc kubenswrapper[4755]: I1210 16:37:32.758290 4755 scope.go:117] "RemoveContainer" containerID="a0141776b31c2a932cec3bfb5dc79420825b6de67ef3162cf9228f8d7fa8df5a" Dec 10 16:37:32 crc kubenswrapper[4755]: E1210 16:37:32.760834 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:37:35 crc kubenswrapper[4755]: E1210 16:37:35.762818 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:37:41 crc kubenswrapper[4755]: E1210 16:37:41.760458 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:37:44 crc kubenswrapper[4755]: I1210 16:37:44.758110 4755 scope.go:117] "RemoveContainer" containerID="a0141776b31c2a932cec3bfb5dc79420825b6de67ef3162cf9228f8d7fa8df5a" Dec 10 16:37:44 crc kubenswrapper[4755]: E1210 16:37:44.759152 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:37:46 crc kubenswrapper[4755]: E1210 16:37:46.759848 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:37:53 crc kubenswrapper[4755]: 
E1210 16:37:53.788798 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:37:58 crc kubenswrapper[4755]: I1210 16:37:58.758179 4755 scope.go:117] "RemoveContainer" containerID="a0141776b31c2a932cec3bfb5dc79420825b6de67ef3162cf9228f8d7fa8df5a" Dec 10 16:37:58 crc kubenswrapper[4755]: E1210 16:37:58.759030 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:38:00 crc kubenswrapper[4755]: E1210 16:38:00.760448 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:38:05 crc kubenswrapper[4755]: E1210 16:38:05.760695 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:38:08 crc kubenswrapper[4755]: I1210 16:38:08.154157 4755 scope.go:117] "RemoveContainer" containerID="45cee2ef9e469efe83de22d67b4b944621bf4b600e838501942e2b90f4ec58b1" Dec 10 16:38:08 crc kubenswrapper[4755]: I1210 16:38:08.178095 4755 scope.go:117] "RemoveContainer" containerID="6ddf29fa596e588d8230abe45f35864e73f7b622f0298f6d226ac6396a53bd35" Dec 10 16:38:08 crc kubenswrapper[4755]: I1210 16:38:08.206297 4755 scope.go:117] "RemoveContainer" containerID="5ce16760bc506ae4ffb10598c412248ccf36415c4626b6a1e3badb70135ccc0e" Dec 10 16:38:09 crc kubenswrapper[4755]: I1210 16:38:09.770150 4755 scope.go:117] "RemoveContainer" containerID="a0141776b31c2a932cec3bfb5dc79420825b6de67ef3162cf9228f8d7fa8df5a" Dec 10 16:38:09 crc kubenswrapper[4755]: E1210 16:38:09.771147 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:38:12 crc kubenswrapper[4755]: E1210 16:38:12.760798 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:38:16 crc kubenswrapper[4755]: E1210 16:38:16.759778 4755 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:38:17 crc kubenswrapper[4755]: I1210 16:38:17.924531 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bscxc"] Dec 10 16:38:17 crc kubenswrapper[4755]: E1210 16:38:17.924991 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48fe9944-e282-45c9-b9b2-6716af358188" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 10 16:38:17 crc kubenswrapper[4755]: I1210 16:38:17.925005 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="48fe9944-e282-45c9-b9b2-6716af358188" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 10 16:38:17 crc kubenswrapper[4755]: E1210 16:38:17.925037 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a750d616-bbde-40c2-93f9-520346691c75" containerName="extract-content" Dec 10 16:38:17 crc kubenswrapper[4755]: I1210 16:38:17.925043 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="a750d616-bbde-40c2-93f9-520346691c75" containerName="extract-content" Dec 10 16:38:17 crc kubenswrapper[4755]: E1210 16:38:17.925063 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a750d616-bbde-40c2-93f9-520346691c75" containerName="extract-utilities" Dec 10 16:38:17 crc kubenswrapper[4755]: I1210 16:38:17.925070 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="a750d616-bbde-40c2-93f9-520346691c75" containerName="extract-utilities" Dec 10 16:38:17 crc kubenswrapper[4755]: E1210 16:38:17.925080 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a750d616-bbde-40c2-93f9-520346691c75" containerName="registry-server" Dec 10 16:38:17 crc kubenswrapper[4755]: I1210 16:38:17.925086 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="a750d616-bbde-40c2-93f9-520346691c75" containerName="registry-server" Dec 10 16:38:17 crc kubenswrapper[4755]: I1210 16:38:17.925289 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="a750d616-bbde-40c2-93f9-520346691c75" containerName="registry-server" Dec 10 16:38:17 crc kubenswrapper[4755]: I1210 16:38:17.925327 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="48fe9944-e282-45c9-b9b2-6716af358188" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 10 16:38:17 crc kubenswrapper[4755]: I1210 16:38:17.927202 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bscxc" Dec 10 16:38:17 crc kubenswrapper[4755]: I1210 16:38:17.943286 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bscxc"] Dec 10 16:38:18 crc kubenswrapper[4755]: I1210 16:38:18.022544 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/861092a4-bdf5-4b5a-ae80-d63b63d88818-catalog-content\") pod \"redhat-operators-bscxc\" (UID: \"861092a4-bdf5-4b5a-ae80-d63b63d88818\") " pod="openshift-marketplace/redhat-operators-bscxc" Dec 10 16:38:18 crc kubenswrapper[4755]: I1210 16:38:18.022851 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/861092a4-bdf5-4b5a-ae80-d63b63d88818-utilities\") pod \"redhat-operators-bscxc\" (UID: \"861092a4-bdf5-4b5a-ae80-d63b63d88818\") " pod="openshift-marketplace/redhat-operators-bscxc" Dec 10 16:38:18 crc kubenswrapper[4755]: I1210 16:38:18.023103 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q68xz\" (UniqueName: \"kubernetes.io/projected/861092a4-bdf5-4b5a-ae80-d63b63d88818-kube-api-access-q68xz\") pod \"redhat-operators-bscxc\" (UID: \"861092a4-bdf5-4b5a-ae80-d63b63d88818\") " pod="openshift-marketplace/redhat-operators-bscxc" Dec 10 16:38:18 crc kubenswrapper[4755]: I1210 16:38:18.125185 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q68xz\" (UniqueName: \"kubernetes.io/projected/861092a4-bdf5-4b5a-ae80-d63b63d88818-kube-api-access-q68xz\") pod \"redhat-operators-bscxc\" (UID: \"861092a4-bdf5-4b5a-ae80-d63b63d88818\") " pod="openshift-marketplace/redhat-operators-bscxc" Dec 10 16:38:18 crc kubenswrapper[4755]: I1210 16:38:18.125564 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/861092a4-bdf5-4b5a-ae80-d63b63d88818-catalog-content\") pod \"redhat-operators-bscxc\" (UID: \"861092a4-bdf5-4b5a-ae80-d63b63d88818\") " pod="openshift-marketplace/redhat-operators-bscxc" Dec 10 16:38:18 crc kubenswrapper[4755]: I1210 16:38:18.125710 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/861092a4-bdf5-4b5a-ae80-d63b63d88818-utilities\") pod \"redhat-operators-bscxc\" (UID: \"861092a4-bdf5-4b5a-ae80-d63b63d88818\") " pod="openshift-marketplace/redhat-operators-bscxc" Dec 10 16:38:18 crc kubenswrapper[4755]: I1210 16:38:18.126114 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/861092a4-bdf5-4b5a-ae80-d63b63d88818-utilities\") pod \"redhat-operators-bscxc\" (UID: \"861092a4-bdf5-4b5a-ae80-d63b63d88818\") " pod="openshift-marketplace/redhat-operators-bscxc" Dec 10 16:38:18 crc kubenswrapper[4755]: I1210 16:38:18.126145 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/861092a4-bdf5-4b5a-ae80-d63b63d88818-catalog-content\") pod \"redhat-operators-bscxc\" (UID: \"861092a4-bdf5-4b5a-ae80-d63b63d88818\") " pod="openshift-marketplace/redhat-operators-bscxc" Dec 10 16:38:18 crc kubenswrapper[4755]: I1210 16:38:18.148685 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-q68xz\" (UniqueName: \"kubernetes.io/projected/861092a4-bdf5-4b5a-ae80-d63b63d88818-kube-api-access-q68xz\") pod \"redhat-operators-bscxc\" (UID: \"861092a4-bdf5-4b5a-ae80-d63b63d88818\") " pod="openshift-marketplace/redhat-operators-bscxc" Dec 10 16:38:18 crc kubenswrapper[4755]: I1210 16:38:18.258956 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bscxc" Dec 10 16:38:18 crc kubenswrapper[4755]: I1210 16:38:18.710081 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bscxc"] Dec 10 16:38:19 crc kubenswrapper[4755]: I1210 16:38:19.385992 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bscxc" event={"ID":"861092a4-bdf5-4b5a-ae80-d63b63d88818","Type":"ContainerStarted","Data":"9c07520d2787d5e2f4c72a32542401c7b5a15c73ee9ccf33a34c2879ce58be44"} Dec 10 16:38:20 crc kubenswrapper[4755]: I1210 16:38:20.398391 4755 generic.go:334] "Generic (PLEG): container finished" podID="861092a4-bdf5-4b5a-ae80-d63b63d88818" containerID="fab1d2f216e4979901cb63d4f07b88cc47b644e57af7a6f34b4dfdcc91a5fbf4" exitCode=0 Dec 10 16:38:20 crc kubenswrapper[4755]: I1210 16:38:20.398482 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bscxc" event={"ID":"861092a4-bdf5-4b5a-ae80-d63b63d88818","Type":"ContainerDied","Data":"fab1d2f216e4979901cb63d4f07b88cc47b644e57af7a6f34b4dfdcc91a5fbf4"} Dec 10 16:38:22 crc kubenswrapper[4755]: I1210 16:38:22.422663 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bscxc" event={"ID":"861092a4-bdf5-4b5a-ae80-d63b63d88818","Type":"ContainerStarted","Data":"36b9e5baf8d6e553ef465be6f1ff3203f7063c7318aeb6595e12459d27ddd859"} Dec 10 16:38:23 crc kubenswrapper[4755]: I1210 16:38:23.436278 4755 generic.go:334] "Generic (PLEG): container finished" podID="861092a4-bdf5-4b5a-ae80-d63b63d88818" containerID="36b9e5baf8d6e553ef465be6f1ff3203f7063c7318aeb6595e12459d27ddd859" exitCode=0 Dec 10 16:38:23 crc kubenswrapper[4755]: I1210 16:38:23.436343 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bscxc" event={"ID":"861092a4-bdf5-4b5a-ae80-d63b63d88818","Type":"ContainerDied","Data":"36b9e5baf8d6e553ef465be6f1ff3203f7063c7318aeb6595e12459d27ddd859"} Dec 10 16:38:24 crc kubenswrapper[4755]: I1210 16:38:24.450529 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bscxc" event={"ID":"861092a4-bdf5-4b5a-ae80-d63b63d88818","Type":"ContainerStarted","Data":"a4f7f36bb50037136435a6f3782baa9e9ab2ff6d9aac5c1302236087dccdb467"} Dec 10 16:38:24 crc kubenswrapper[4755]: I1210 16:38:24.470669 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bscxc" podStartSLOduration=3.996823127 podStartE2EDuration="7.470654188s" podCreationTimestamp="2025-12-10 16:38:17 +0000 UTC" firstStartedPulling="2025-12-10 16:38:20.400439059 +0000 UTC m=+4497.001322691" lastFinishedPulling="2025-12-10 16:38:23.87427012 +0000 UTC m=+4500.475153752" observedRunningTime="2025-12-10 16:38:24.466502031 +0000 UTC m=+4501.067385693" watchObservedRunningTime="2025-12-10 16:38:24.470654188 +0000 UTC m=+4501.071537820" Dec 10 16:38:24 crc kubenswrapper[4755]: I1210 16:38:24.757744 4755 scope.go:117] "RemoveContainer" containerID="a0141776b31c2a932cec3bfb5dc79420825b6de67ef3162cf9228f8d7fa8df5a" Dec 10 
16:38:24 crc kubenswrapper[4755]: E1210 16:38:24.758021 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:38:26 crc kubenswrapper[4755]: E1210 16:38:26.759967 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:38:28 crc kubenswrapper[4755]: I1210 16:38:28.259348 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bscxc" Dec 10 16:38:28 crc kubenswrapper[4755]: I1210 16:38:28.259427 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bscxc" Dec 10 16:38:29 crc kubenswrapper[4755]: I1210 16:38:29.393876 4755 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bscxc" podUID="861092a4-bdf5-4b5a-ae80-d63b63d88818" containerName="registry-server" probeResult="failure" output=< Dec 10 16:38:29 crc kubenswrapper[4755]: timeout: failed to connect service ":50051" within 1s Dec 10 16:38:29 crc kubenswrapper[4755]: > Dec 10 16:38:31 crc kubenswrapper[4755]: E1210 16:38:31.760171 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:38:36 crc kubenswrapper[4755]: I1210 16:38:36.757448 4755 scope.go:117] "RemoveContainer" containerID="a0141776b31c2a932cec3bfb5dc79420825b6de67ef3162cf9228f8d7fa8df5a" Dec 10 16:38:36 crc kubenswrapper[4755]: E1210 16:38:36.758214 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:38:38 crc kubenswrapper[4755]: I1210 16:38:38.312210 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bscxc" Dec 10 16:38:38 crc kubenswrapper[4755]: I1210 16:38:38.365631 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bscxc" Dec 10 16:38:38 crc kubenswrapper[4755]: I1210 16:38:38.551597 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bscxc"] Dec 10 16:38:38 crc kubenswrapper[4755]: E1210 16:38:38.761822 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:38:39 crc kubenswrapper[4755]: I1210 16:38:39.596894 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bscxc" podUID="861092a4-bdf5-4b5a-ae80-d63b63d88818" containerName="registry-server" containerID="cri-o://a4f7f36bb50037136435a6f3782baa9e9ab2ff6d9aac5c1302236087dccdb467" gracePeriod=2 Dec 10 16:38:40 crc kubenswrapper[4755]: I1210 16:38:40.144992 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bscxc" Dec 10 16:38:40 crc kubenswrapper[4755]: I1210 16:38:40.247570 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/861092a4-bdf5-4b5a-ae80-d63b63d88818-utilities\") pod \"861092a4-bdf5-4b5a-ae80-d63b63d88818\" (UID: \"861092a4-bdf5-4b5a-ae80-d63b63d88818\") " Dec 10 16:38:40 crc kubenswrapper[4755]: I1210 16:38:40.247653 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q68xz\" (UniqueName: \"kubernetes.io/projected/861092a4-bdf5-4b5a-ae80-d63b63d88818-kube-api-access-q68xz\") pod \"861092a4-bdf5-4b5a-ae80-d63b63d88818\" (UID: \"861092a4-bdf5-4b5a-ae80-d63b63d88818\") " Dec 10 16:38:40 crc kubenswrapper[4755]: I1210 16:38:40.247829 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/861092a4-bdf5-4b5a-ae80-d63b63d88818-catalog-content\") pod \"861092a4-bdf5-4b5a-ae80-d63b63d88818\" (UID: \"861092a4-bdf5-4b5a-ae80-d63b63d88818\") " Dec 10 16:38:40 crc kubenswrapper[4755]: I1210 16:38:40.248774 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/861092a4-bdf5-4b5a-ae80-d63b63d88818-utilities" (OuterVolumeSpecName: "utilities") pod "861092a4-bdf5-4b5a-ae80-d63b63d88818" (UID: "861092a4-bdf5-4b5a-ae80-d63b63d88818"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 16:38:40 crc kubenswrapper[4755]: I1210 16:38:40.255710 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/861092a4-bdf5-4b5a-ae80-d63b63d88818-kube-api-access-q68xz" (OuterVolumeSpecName: "kube-api-access-q68xz") pod "861092a4-bdf5-4b5a-ae80-d63b63d88818" (UID: "861092a4-bdf5-4b5a-ae80-d63b63d88818"). InnerVolumeSpecName "kube-api-access-q68xz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 16:38:40 crc kubenswrapper[4755]: I1210 16:38:40.353975 4755 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/861092a4-bdf5-4b5a-ae80-d63b63d88818-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 16:38:40 crc kubenswrapper[4755]: I1210 16:38:40.354014 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q68xz\" (UniqueName: \"kubernetes.io/projected/861092a4-bdf5-4b5a-ae80-d63b63d88818-kube-api-access-q68xz\") on node \"crc\" DevicePath \"\"" Dec 10 16:38:40 crc kubenswrapper[4755]: I1210 16:38:40.381059 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/861092a4-bdf5-4b5a-ae80-d63b63d88818-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "861092a4-bdf5-4b5a-ae80-d63b63d88818" (UID: "861092a4-bdf5-4b5a-ae80-d63b63d88818"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 16:38:40 crc kubenswrapper[4755]: I1210 16:38:40.456293 4755 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/861092a4-bdf5-4b5a-ae80-d63b63d88818-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 16:38:40 crc kubenswrapper[4755]: I1210 16:38:40.613292 4755 generic.go:334] "Generic (PLEG): container finished" podID="861092a4-bdf5-4b5a-ae80-d63b63d88818" containerID="a4f7f36bb50037136435a6f3782baa9e9ab2ff6d9aac5c1302236087dccdb467" exitCode=0 Dec 10 16:38:40 crc kubenswrapper[4755]: I1210 16:38:40.613347 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bscxc" event={"ID":"861092a4-bdf5-4b5a-ae80-d63b63d88818","Type":"ContainerDied","Data":"a4f7f36bb50037136435a6f3782baa9e9ab2ff6d9aac5c1302236087dccdb467"} Dec 10 16:38:40 crc kubenswrapper[4755]: I1210 16:38:40.613381 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bscxc" event={"ID":"861092a4-bdf5-4b5a-ae80-d63b63d88818","Type":"ContainerDied","Data":"9c07520d2787d5e2f4c72a32542401c7b5a15c73ee9ccf33a34c2879ce58be44"} Dec 10 16:38:40 crc kubenswrapper[4755]: I1210 16:38:40.613391 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bscxc" Dec 10 16:38:40 crc kubenswrapper[4755]: I1210 16:38:40.613403 4755 scope.go:117] "RemoveContainer" containerID="a4f7f36bb50037136435a6f3782baa9e9ab2ff6d9aac5c1302236087dccdb467" Dec 10 16:38:40 crc kubenswrapper[4755]: I1210 16:38:40.668027 4755 scope.go:117] "RemoveContainer" containerID="36b9e5baf8d6e553ef465be6f1ff3203f7063c7318aeb6595e12459d27ddd859" Dec 10 16:38:40 crc kubenswrapper[4755]: I1210 16:38:40.675968 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bscxc"] Dec 10 16:38:40 crc kubenswrapper[4755]: I1210 16:38:40.695071 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bscxc"] Dec 10 16:38:40 crc kubenswrapper[4755]: I1210 16:38:40.702740 4755 scope.go:117] "RemoveContainer" containerID="fab1d2f216e4979901cb63d4f07b88cc47b644e57af7a6f34b4dfdcc91a5fbf4" Dec 10 16:38:41 crc kubenswrapper[4755]: I1210 16:38:41.252213 4755 scope.go:117] "RemoveContainer" containerID="a4f7f36bb50037136435a6f3782baa9e9ab2ff6d9aac5c1302236087dccdb467" Dec 10 16:38:41 crc kubenswrapper[4755]: E1210 16:38:41.253061 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4f7f36bb50037136435a6f3782baa9e9ab2ff6d9aac5c1302236087dccdb467\": container with ID starting with a4f7f36bb50037136435a6f3782baa9e9ab2ff6d9aac5c1302236087dccdb467 not found: ID does not exist" containerID="a4f7f36bb50037136435a6f3782baa9e9ab2ff6d9aac5c1302236087dccdb467" Dec 10 16:38:41 crc kubenswrapper[4755]: I1210 16:38:41.253115 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4f7f36bb50037136435a6f3782baa9e9ab2ff6d9aac5c1302236087dccdb467"} err="failed to get container status \"a4f7f36bb50037136435a6f3782baa9e9ab2ff6d9aac5c1302236087dccdb467\": rpc error: code = NotFound desc = could not find container \"a4f7f36bb50037136435a6f3782baa9e9ab2ff6d9aac5c1302236087dccdb467\": container with ID starting with a4f7f36bb50037136435a6f3782baa9e9ab2ff6d9aac5c1302236087dccdb467 not found: ID does not exist" Dec 10 16:38:41 crc kubenswrapper[4755]: I1210 16:38:41.253145 4755 scope.go:117] "RemoveContainer" containerID="36b9e5baf8d6e553ef465be6f1ff3203f7063c7318aeb6595e12459d27ddd859" Dec 10 16:38:41 crc kubenswrapper[4755]: E1210 16:38:41.253430 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36b9e5baf8d6e553ef465be6f1ff3203f7063c7318aeb6595e12459d27ddd859\": container with ID starting with 36b9e5baf8d6e553ef465be6f1ff3203f7063c7318aeb6595e12459d27ddd859 not found: ID does not exist" containerID="36b9e5baf8d6e553ef465be6f1ff3203f7063c7318aeb6595e12459d27ddd859" Dec 10 16:38:41 crc kubenswrapper[4755]: I1210 16:38:41.253480 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36b9e5baf8d6e553ef465be6f1ff3203f7063c7318aeb6595e12459d27ddd859"} err="failed to get container status \"36b9e5baf8d6e553ef465be6f1ff3203f7063c7318aeb6595e12459d27ddd859\": rpc error: code = NotFound desc = could not find container \"36b9e5baf8d6e553ef465be6f1ff3203f7063c7318aeb6595e12459d27ddd859\": container with ID starting with 36b9e5baf8d6e553ef465be6f1ff3203f7063c7318aeb6595e12459d27ddd859 not found: ID does not exist" Dec 10 16:38:41 crc kubenswrapper[4755]: I1210 16:38:41.253501 4755 scope.go:117] "RemoveContainer" 
containerID="fab1d2f216e4979901cb63d4f07b88cc47b644e57af7a6f34b4dfdcc91a5fbf4" Dec 10 16:38:41 crc kubenswrapper[4755]: E1210 16:38:41.253717 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fab1d2f216e4979901cb63d4f07b88cc47b644e57af7a6f34b4dfdcc91a5fbf4\": container with ID starting with fab1d2f216e4979901cb63d4f07b88cc47b644e57af7a6f34b4dfdcc91a5fbf4 not found: ID does not exist" containerID="fab1d2f216e4979901cb63d4f07b88cc47b644e57af7a6f34b4dfdcc91a5fbf4" Dec 10 16:38:41 crc kubenswrapper[4755]: I1210 16:38:41.253745 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fab1d2f216e4979901cb63d4f07b88cc47b644e57af7a6f34b4dfdcc91a5fbf4"} err="failed to get container status \"fab1d2f216e4979901cb63d4f07b88cc47b644e57af7a6f34b4dfdcc91a5fbf4\": rpc error: code = NotFound desc = could not find container \"fab1d2f216e4979901cb63d4f07b88cc47b644e57af7a6f34b4dfdcc91a5fbf4\": container with ID starting with fab1d2f216e4979901cb63d4f07b88cc47b644e57af7a6f34b4dfdcc91a5fbf4 not found: ID does not exist" Dec 10 16:38:41 crc kubenswrapper[4755]: I1210 16:38:41.769764 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="861092a4-bdf5-4b5a-ae80-d63b63d88818" path="/var/lib/kubelet/pods/861092a4-bdf5-4b5a-ae80-d63b63d88818/volumes" Dec 10 16:38:46 crc kubenswrapper[4755]: E1210 16:38:46.759533 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:38:47 crc kubenswrapper[4755]: I1210 16:38:47.758116 4755 scope.go:117] "RemoveContainer" containerID="a0141776b31c2a932cec3bfb5dc79420825b6de67ef3162cf9228f8d7fa8df5a" Dec 10 16:38:47 crc kubenswrapper[4755]: E1210 16:38:47.758560 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:38:49 crc kubenswrapper[4755]: E1210 16:38:49.760590 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:38:59 crc kubenswrapper[4755]: E1210 16:38:59.759687 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:39:00 crc kubenswrapper[4755]: E1210 16:39:00.760268 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:39:01 crc kubenswrapper[4755]: I1210 16:39:01.757722 4755 scope.go:117] "RemoveContainer" containerID="a0141776b31c2a932cec3bfb5dc79420825b6de67ef3162cf9228f8d7fa8df5a" Dec 10 16:39:01 crc kubenswrapper[4755]: E1210 16:39:01.758045 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:39:11 crc kubenswrapper[4755]: E1210 16:39:11.760220 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:39:15 crc kubenswrapper[4755]: I1210 16:39:15.758639 4755 scope.go:117] "RemoveContainer" containerID="a0141776b31c2a932cec3bfb5dc79420825b6de67ef3162cf9228f8d7fa8df5a" Dec 10 16:39:15 crc kubenswrapper[4755]: E1210 16:39:15.759405 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:39:15 crc kubenswrapper[4755]: E1210 16:39:15.761636 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:39:25 crc kubenswrapper[4755]: E1210 16:39:25.760934 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:39:27 crc kubenswrapper[4755]: I1210 16:39:27.758186 4755 scope.go:117] "RemoveContainer" containerID="a0141776b31c2a932cec3bfb5dc79420825b6de67ef3162cf9228f8d7fa8df5a" Dec 10 16:39:27 crc kubenswrapper[4755]: E1210 16:39:27.758824 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:39:28 crc kubenswrapper[4755]: E1210 16:39:28.760963 4755 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:39:36 crc kubenswrapper[4755]: E1210 16:39:36.759915 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:39:40 crc kubenswrapper[4755]: I1210 16:39:40.758514 4755 scope.go:117] "RemoveContainer" containerID="a0141776b31c2a932cec3bfb5dc79420825b6de67ef3162cf9228f8d7fa8df5a" Dec 10 16:39:40 crc kubenswrapper[4755]: E1210 16:39:40.760357 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:39:41 crc kubenswrapper[4755]: E1210 16:39:41.761084 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:39:51 crc kubenswrapper[4755]: E1210 16:39:51.760352 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:39:54 crc kubenswrapper[4755]: I1210 16:39:54.758343 4755 scope.go:117] "RemoveContainer" containerID="a0141776b31c2a932cec3bfb5dc79420825b6de67ef3162cf9228f8d7fa8df5a" Dec 10 16:39:54 crc kubenswrapper[4755]: E1210 16:39:54.759124 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:39:54 crc kubenswrapper[4755]: E1210 16:39:54.852647 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 10 16:39:54 crc kubenswrapper[4755]: E1210 16:39:54.852734 4755 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 10 16:39:54 crc kubenswrapper[4755]: E1210 16:39:54.852915 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mz4t5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-jfc28_openstack(998863b6-4f48-4c8b-8011-a40377686b99): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Dec 10 16:39:54 crc kubenswrapper[4755]: E1210 16:39:54.854814 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:40:02 crc kubenswrapper[4755]: E1210 16:40:02.761251 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:40:06 crc kubenswrapper[4755]: E1210 16:40:06.760847 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:40:09 crc kubenswrapper[4755]: I1210 16:40:09.758203 4755 scope.go:117] "RemoveContainer" containerID="a0141776b31c2a932cec3bfb5dc79420825b6de67ef3162cf9228f8d7fa8df5a" Dec 10 16:40:09 crc kubenswrapper[4755]: E1210 16:40:09.759995 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:40:17 crc kubenswrapper[4755]: E1210 16:40:17.761389 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:40:20 crc kubenswrapper[4755]: E1210 16:40:20.760564 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:40:20 crc kubenswrapper[4755]: I1210 16:40:20.810513 4755 trace.go:236] Trace[1792299211]: "Calculate volume metrics of prometheus-metric-storage-db for pod openstack/prometheus-metric-storage-0" (10-Dec-2025 16:40:19.742) (total time: 1068ms): Dec 10 16:40:20 crc kubenswrapper[4755]: Trace[1792299211]: [1.068144898s] [1.068144898s] END Dec 10 16:40:23 crc kubenswrapper[4755]: I1210 16:40:23.776945 4755 scope.go:117] "RemoveContainer" containerID="a0141776b31c2a932cec3bfb5dc79420825b6de67ef3162cf9228f8d7fa8df5a" Dec 10 16:40:23 crc kubenswrapper[4755]: E1210 
16:40:23.778124 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:40:31 crc kubenswrapper[4755]: I1210 16:40:31.761045 4755 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 10 16:40:31 crc kubenswrapper[4755]: E1210 16:40:31.870292 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 10 16:40:31 crc kubenswrapper[4755]: E1210 16:40:31.870397 4755 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 10 16:40:31 crc kubenswrapper[4755]: E1210 16:40:31.870557 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5d4h5b7hfbh5ddh688h9ch55bh7chf6h5ddh68ch94h69h5c5h596h59bh569hfchc4h676hcbh64dhdbh57fh75h5c9h98h59ch679h566h77h9cq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hw9gj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(6d104bea-ecdc-4fe1-9861-fb1a19fce845): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 10 16:40:31 crc kubenswrapper[4755]: E1210 16:40:31.871806 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:40:32 crc kubenswrapper[4755]: E1210 16:40:32.760668 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:40:37 crc kubenswrapper[4755]: I1210 16:40:37.758083 4755 scope.go:117] "RemoveContainer" containerID="a0141776b31c2a932cec3bfb5dc79420825b6de67ef3162cf9228f8d7fa8df5a" Dec 10 16:40:37 crc kubenswrapper[4755]: E1210 16:40:37.758832 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:40:45 crc kubenswrapper[4755]: E1210 16:40:45.760646 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:40:46 crc kubenswrapper[4755]: E1210 16:40:46.759924 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off 
pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:40:51 crc kubenswrapper[4755]: I1210 16:40:51.758312 4755 scope.go:117] "RemoveContainer" containerID="a0141776b31c2a932cec3bfb5dc79420825b6de67ef3162cf9228f8d7fa8df5a" Dec 10 16:40:51 crc kubenswrapper[4755]: E1210 16:40:51.761227 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:40:56 crc kubenswrapper[4755]: E1210 16:40:56.761372 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:40:59 crc kubenswrapper[4755]: E1210 16:40:59.759791 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:41:05 crc kubenswrapper[4755]: I1210 16:41:05.760459 4755 scope.go:117] "RemoveContainer" containerID="a0141776b31c2a932cec3bfb5dc79420825b6de67ef3162cf9228f8d7fa8df5a" Dec 10 16:41:05 crc kubenswrapper[4755]: E1210 16:41:05.761340 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:41:06 crc kubenswrapper[4755]: I1210 16:41:06.434795 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bddqv"] Dec 10 16:41:06 crc kubenswrapper[4755]: E1210 16:41:06.435614 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="861092a4-bdf5-4b5a-ae80-d63b63d88818" containerName="registry-server" Dec 10 16:41:06 crc kubenswrapper[4755]: I1210 16:41:06.435712 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="861092a4-bdf5-4b5a-ae80-d63b63d88818" containerName="registry-server" Dec 10 16:41:06 crc kubenswrapper[4755]: E1210 16:41:06.435799 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="861092a4-bdf5-4b5a-ae80-d63b63d88818" containerName="extract-utilities" Dec 10 16:41:06 crc kubenswrapper[4755]: I1210 16:41:06.435865 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="861092a4-bdf5-4b5a-ae80-d63b63d88818" containerName="extract-utilities" Dec 10 16:41:06 crc kubenswrapper[4755]: E1210 16:41:06.435947 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="861092a4-bdf5-4b5a-ae80-d63b63d88818" containerName="extract-content" Dec 10 16:41:06 crc 
kubenswrapper[4755]: I1210 16:41:06.436009 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="861092a4-bdf5-4b5a-ae80-d63b63d88818" containerName="extract-content" Dec 10 16:41:06 crc kubenswrapper[4755]: I1210 16:41:06.436345 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="861092a4-bdf5-4b5a-ae80-d63b63d88818" containerName="registry-server" Dec 10 16:41:06 crc kubenswrapper[4755]: I1210 16:41:06.440598 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bddqv" Dec 10 16:41:06 crc kubenswrapper[4755]: I1210 16:41:06.452579 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg85q\" (UniqueName: \"kubernetes.io/projected/98434dbb-7819-46d6-ac5f-c34c5ec3f1fd-kube-api-access-xg85q\") pod \"community-operators-bddqv\" (UID: \"98434dbb-7819-46d6-ac5f-c34c5ec3f1fd\") " pod="openshift-marketplace/community-operators-bddqv" Dec 10 16:41:06 crc kubenswrapper[4755]: I1210 16:41:06.452873 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98434dbb-7819-46d6-ac5f-c34c5ec3f1fd-catalog-content\") pod \"community-operators-bddqv\" (UID: \"98434dbb-7819-46d6-ac5f-c34c5ec3f1fd\") " pod="openshift-marketplace/community-operators-bddqv" Dec 10 16:41:06 crc kubenswrapper[4755]: I1210 16:41:06.453408 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98434dbb-7819-46d6-ac5f-c34c5ec3f1fd-utilities\") pod \"community-operators-bddqv\" (UID: \"98434dbb-7819-46d6-ac5f-c34c5ec3f1fd\") " pod="openshift-marketplace/community-operators-bddqv" Dec 10 16:41:06 crc kubenswrapper[4755]: I1210 16:41:06.464922 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bddqv"] Dec 10 16:41:06 crc kubenswrapper[4755]: I1210 16:41:06.555751 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98434dbb-7819-46d6-ac5f-c34c5ec3f1fd-catalog-content\") pod \"community-operators-bddqv\" (UID: \"98434dbb-7819-46d6-ac5f-c34c5ec3f1fd\") " pod="openshift-marketplace/community-operators-bddqv" Dec 10 16:41:06 crc kubenswrapper[4755]: I1210 16:41:06.555990 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98434dbb-7819-46d6-ac5f-c34c5ec3f1fd-utilities\") pod \"community-operators-bddqv\" (UID: \"98434dbb-7819-46d6-ac5f-c34c5ec3f1fd\") " pod="openshift-marketplace/community-operators-bddqv" Dec 10 16:41:06 crc kubenswrapper[4755]: I1210 16:41:06.556165 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xg85q\" (UniqueName: \"kubernetes.io/projected/98434dbb-7819-46d6-ac5f-c34c5ec3f1fd-kube-api-access-xg85q\") pod \"community-operators-bddqv\" (UID: \"98434dbb-7819-46d6-ac5f-c34c5ec3f1fd\") " pod="openshift-marketplace/community-operators-bddqv" Dec 10 16:41:06 crc kubenswrapper[4755]: I1210 16:41:06.556647 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98434dbb-7819-46d6-ac5f-c34c5ec3f1fd-catalog-content\") pod \"community-operators-bddqv\" (UID: \"98434dbb-7819-46d6-ac5f-c34c5ec3f1fd\") " 
pod="openshift-marketplace/community-operators-bddqv" Dec 10 16:41:06 crc kubenswrapper[4755]: I1210 16:41:06.557083 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98434dbb-7819-46d6-ac5f-c34c5ec3f1fd-utilities\") pod \"community-operators-bddqv\" (UID: \"98434dbb-7819-46d6-ac5f-c34c5ec3f1fd\") " pod="openshift-marketplace/community-operators-bddqv" Dec 10 16:41:06 crc kubenswrapper[4755]: I1210 16:41:06.579674 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xg85q\" (UniqueName: \"kubernetes.io/projected/98434dbb-7819-46d6-ac5f-c34c5ec3f1fd-kube-api-access-xg85q\") pod \"community-operators-bddqv\" (UID: \"98434dbb-7819-46d6-ac5f-c34c5ec3f1fd\") " pod="openshift-marketplace/community-operators-bddqv" Dec 10 16:41:06 crc kubenswrapper[4755]: I1210 16:41:06.771541 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bddqv" Dec 10 16:41:07 crc kubenswrapper[4755]: I1210 16:41:07.337574 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bddqv"] Dec 10 16:41:07 crc kubenswrapper[4755]: I1210 16:41:07.434112 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bddqv" event={"ID":"98434dbb-7819-46d6-ac5f-c34c5ec3f1fd","Type":"ContainerStarted","Data":"7b5428f50e5f8b583098986ed2213c8f4654c298cbc3c0f8a0b2fa23851c2e8d"} Dec 10 16:41:08 crc kubenswrapper[4755]: I1210 16:41:08.445565 4755 generic.go:334] "Generic (PLEG): container finished" podID="98434dbb-7819-46d6-ac5f-c34c5ec3f1fd" containerID="3e922be820cb93212cc653897143324fd0ea733ce67cc9e55fc51a19d9a1aa35" exitCode=0 Dec 10 16:41:08 crc kubenswrapper[4755]: I1210 16:41:08.445693 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bddqv" event={"ID":"98434dbb-7819-46d6-ac5f-c34c5ec3f1fd","Type":"ContainerDied","Data":"3e922be820cb93212cc653897143324fd0ea733ce67cc9e55fc51a19d9a1aa35"} Dec 10 16:41:09 crc kubenswrapper[4755]: I1210 16:41:09.458128 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bddqv" event={"ID":"98434dbb-7819-46d6-ac5f-c34c5ec3f1fd","Type":"ContainerStarted","Data":"b953a07947f417034fdba9843f27a5aa183e3c9d36bcd8e7fe7fcd1d86b5e24a"} Dec 10 16:41:09 crc kubenswrapper[4755]: E1210 16:41:09.761994 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:41:10 crc kubenswrapper[4755]: I1210 16:41:10.468130 4755 generic.go:334] "Generic (PLEG): container finished" podID="98434dbb-7819-46d6-ac5f-c34c5ec3f1fd" containerID="b953a07947f417034fdba9843f27a5aa183e3c9d36bcd8e7fe7fcd1d86b5e24a" exitCode=0 Dec 10 16:41:10 crc kubenswrapper[4755]: I1210 16:41:10.468175 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bddqv" event={"ID":"98434dbb-7819-46d6-ac5f-c34c5ec3f1fd","Type":"ContainerDied","Data":"b953a07947f417034fdba9843f27a5aa183e3c9d36bcd8e7fe7fcd1d86b5e24a"} Dec 10 16:41:10 crc kubenswrapper[4755]: E1210 16:41:10.758844 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:41:11 crc kubenswrapper[4755]: I1210 16:41:11.482742 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bddqv" event={"ID":"98434dbb-7819-46d6-ac5f-c34c5ec3f1fd","Type":"ContainerStarted","Data":"391cc33493c333a660ac57fa60231394bfc2552e298cc070d8535022cfa5aa3b"} Dec 10 16:41:11 crc kubenswrapper[4755]: I1210 16:41:11.505488 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bddqv" podStartSLOduration=2.978363131 podStartE2EDuration="5.505452702s" podCreationTimestamp="2025-12-10 16:41:06 +0000 UTC" firstStartedPulling="2025-12-10 16:41:08.447785985 +0000 UTC m=+4665.048669637" lastFinishedPulling="2025-12-10 16:41:10.974875566 +0000 UTC m=+4667.575759208" observedRunningTime="2025-12-10 16:41:11.504372332 +0000 UTC m=+4668.105255974" watchObservedRunningTime="2025-12-10 16:41:11.505452702 +0000 UTC m=+4668.106336334" Dec 10 16:41:16 crc kubenswrapper[4755]: I1210 16:41:16.772367 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bddqv" Dec 10 16:41:16 crc kubenswrapper[4755]: I1210 16:41:16.773200 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bddqv" Dec 10 16:41:16 crc kubenswrapper[4755]: I1210 16:41:16.847894 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bddqv" Dec 10 16:41:17 crc kubenswrapper[4755]: I1210 16:41:17.604963 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bddqv" Dec 10 16:41:17 crc kubenswrapper[4755]: I1210 16:41:17.666773 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bddqv"] Dec 10 16:41:18 crc kubenswrapper[4755]: I1210 16:41:18.757251 4755 scope.go:117] "RemoveContainer" containerID="a0141776b31c2a932cec3bfb5dc79420825b6de67ef3162cf9228f8d7fa8df5a" Dec 10 16:41:19 crc kubenswrapper[4755]: I1210 16:41:19.573161 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" event={"ID":"b132a8b9-1c99-414d-8773-229bf36b305d","Type":"ContainerStarted","Data":"079523fb0e18a86eddd800cf00287e3feddaac93af0f36614f19e6b4a4894ff7"} Dec 10 16:41:19 crc kubenswrapper[4755]: I1210 16:41:19.574624 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bddqv" podUID="98434dbb-7819-46d6-ac5f-c34c5ec3f1fd" containerName="registry-server" containerID="cri-o://391cc33493c333a660ac57fa60231394bfc2552e298cc070d8535022cfa5aa3b" gracePeriod=2 Dec 10 16:41:20 crc kubenswrapper[4755]: I1210 16:41:20.586819 4755 generic.go:334] "Generic (PLEG): container finished" podID="98434dbb-7819-46d6-ac5f-c34c5ec3f1fd" containerID="391cc33493c333a660ac57fa60231394bfc2552e298cc070d8535022cfa5aa3b" exitCode=0 Dec 10 16:41:20 crc kubenswrapper[4755]: I1210 16:41:20.586891 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bddqv" 
event={"ID":"98434dbb-7819-46d6-ac5f-c34c5ec3f1fd","Type":"ContainerDied","Data":"391cc33493c333a660ac57fa60231394bfc2552e298cc070d8535022cfa5aa3b"} Dec 10 16:41:20 crc kubenswrapper[4755]: I1210 16:41:20.587332 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bddqv" event={"ID":"98434dbb-7819-46d6-ac5f-c34c5ec3f1fd","Type":"ContainerDied","Data":"7b5428f50e5f8b583098986ed2213c8f4654c298cbc3c0f8a0b2fa23851c2e8d"} Dec 10 16:41:20 crc kubenswrapper[4755]: I1210 16:41:20.587355 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b5428f50e5f8b583098986ed2213c8f4654c298cbc3c0f8a0b2fa23851c2e8d" Dec 10 16:41:20 crc kubenswrapper[4755]: I1210 16:41:20.607516 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bddqv" Dec 10 16:41:20 crc kubenswrapper[4755]: I1210 16:41:20.778328 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98434dbb-7819-46d6-ac5f-c34c5ec3f1fd-catalog-content\") pod \"98434dbb-7819-46d6-ac5f-c34c5ec3f1fd\" (UID: \"98434dbb-7819-46d6-ac5f-c34c5ec3f1fd\") " Dec 10 16:41:20 crc kubenswrapper[4755]: I1210 16:41:20.778402 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98434dbb-7819-46d6-ac5f-c34c5ec3f1fd-utilities\") pod \"98434dbb-7819-46d6-ac5f-c34c5ec3f1fd\" (UID: \"98434dbb-7819-46d6-ac5f-c34c5ec3f1fd\") " Dec 10 16:41:20 crc kubenswrapper[4755]: I1210 16:41:20.779836 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98434dbb-7819-46d6-ac5f-c34c5ec3f1fd-utilities" (OuterVolumeSpecName: "utilities") pod "98434dbb-7819-46d6-ac5f-c34c5ec3f1fd" (UID: "98434dbb-7819-46d6-ac5f-c34c5ec3f1fd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 16:41:20 crc kubenswrapper[4755]: I1210 16:41:20.779923 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xg85q\" (UniqueName: \"kubernetes.io/projected/98434dbb-7819-46d6-ac5f-c34c5ec3f1fd-kube-api-access-xg85q\") pod \"98434dbb-7819-46d6-ac5f-c34c5ec3f1fd\" (UID: \"98434dbb-7819-46d6-ac5f-c34c5ec3f1fd\") " Dec 10 16:41:20 crc kubenswrapper[4755]: I1210 16:41:20.781978 4755 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98434dbb-7819-46d6-ac5f-c34c5ec3f1fd-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 16:41:20 crc kubenswrapper[4755]: I1210 16:41:20.788804 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98434dbb-7819-46d6-ac5f-c34c5ec3f1fd-kube-api-access-xg85q" (OuterVolumeSpecName: "kube-api-access-xg85q") pod "98434dbb-7819-46d6-ac5f-c34c5ec3f1fd" (UID: "98434dbb-7819-46d6-ac5f-c34c5ec3f1fd"). InnerVolumeSpecName "kube-api-access-xg85q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 16:41:20 crc kubenswrapper[4755]: I1210 16:41:20.883657 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xg85q\" (UniqueName: \"kubernetes.io/projected/98434dbb-7819-46d6-ac5f-c34c5ec3f1fd-kube-api-access-xg85q\") on node \"crc\" DevicePath \"\"" Dec 10 16:41:20 crc kubenswrapper[4755]: I1210 16:41:20.886024 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98434dbb-7819-46d6-ac5f-c34c5ec3f1fd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "98434dbb-7819-46d6-ac5f-c34c5ec3f1fd" (UID: "98434dbb-7819-46d6-ac5f-c34c5ec3f1fd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 16:41:20 crc kubenswrapper[4755]: I1210 16:41:20.985683 4755 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98434dbb-7819-46d6-ac5f-c34c5ec3f1fd-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 16:41:21 crc kubenswrapper[4755]: I1210 16:41:21.598807 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bddqv" Dec 10 16:41:21 crc kubenswrapper[4755]: I1210 16:41:21.638952 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bddqv"] Dec 10 16:41:21 crc kubenswrapper[4755]: I1210 16:41:21.651885 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bddqv"] Dec 10 16:41:21 crc kubenswrapper[4755]: I1210 16:41:21.776932 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98434dbb-7819-46d6-ac5f-c34c5ec3f1fd" path="/var/lib/kubelet/pods/98434dbb-7819-46d6-ac5f-c34c5ec3f1fd/volumes" Dec 10 16:41:23 crc kubenswrapper[4755]: E1210 16:41:23.770695 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:41:23 crc kubenswrapper[4755]: E1210 16:41:23.770997 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:41:36 crc kubenswrapper[4755]: I1210 16:41:36.040941 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mhhqh"] Dec 10 16:41:36 crc kubenswrapper[4755]: E1210 16:41:36.042583 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98434dbb-7819-46d6-ac5f-c34c5ec3f1fd" containerName="extract-utilities" Dec 10 16:41:36 crc kubenswrapper[4755]: I1210 16:41:36.042614 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="98434dbb-7819-46d6-ac5f-c34c5ec3f1fd" containerName="extract-utilities" Dec 10 16:41:36 crc kubenswrapper[4755]: E1210 16:41:36.042678 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98434dbb-7819-46d6-ac5f-c34c5ec3f1fd" containerName="extract-content" Dec 10 16:41:36 crc kubenswrapper[4755]: I1210 16:41:36.042702 4755 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="98434dbb-7819-46d6-ac5f-c34c5ec3f1fd" containerName="extract-content" Dec 10 16:41:36 crc kubenswrapper[4755]: E1210 16:41:36.042751 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98434dbb-7819-46d6-ac5f-c34c5ec3f1fd" containerName="registry-server" Dec 10 16:41:36 crc kubenswrapper[4755]: I1210 16:41:36.042765 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="98434dbb-7819-46d6-ac5f-c34c5ec3f1fd" containerName="registry-server" Dec 10 16:41:36 crc kubenswrapper[4755]: I1210 16:41:36.043209 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="98434dbb-7819-46d6-ac5f-c34c5ec3f1fd" containerName="registry-server" Dec 10 16:41:36 crc kubenswrapper[4755]: I1210 16:41:36.045232 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mhhqh" Dec 10 16:41:36 crc kubenswrapper[4755]: I1210 16:41:36.049407 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-74mg7" Dec 10 16:41:36 crc kubenswrapper[4755]: I1210 16:41:36.049567 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 10 16:41:36 crc kubenswrapper[4755]: I1210 16:41:36.049764 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 10 16:41:36 crc kubenswrapper[4755]: I1210 16:41:36.053582 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 10 16:41:36 crc kubenswrapper[4755]: I1210 16:41:36.054806 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mhhqh"] Dec 10 16:41:36 crc kubenswrapper[4755]: I1210 16:41:36.081522 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7v4f5\" (UniqueName: \"kubernetes.io/projected/72c64052-d330-4d83-a2b5-37e7c7233934-kube-api-access-7v4f5\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mhhqh\" (UID: \"72c64052-d330-4d83-a2b5-37e7c7233934\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mhhqh" Dec 10 16:41:36 crc kubenswrapper[4755]: I1210 16:41:36.081642 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/72c64052-d330-4d83-a2b5-37e7c7233934-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mhhqh\" (UID: \"72c64052-d330-4d83-a2b5-37e7c7233934\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mhhqh" Dec 10 16:41:36 crc kubenswrapper[4755]: I1210 16:41:36.081746 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72c64052-d330-4d83-a2b5-37e7c7233934-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mhhqh\" (UID: \"72c64052-d330-4d83-a2b5-37e7c7233934\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mhhqh" Dec 10 16:41:36 crc kubenswrapper[4755]: I1210 16:41:36.184125 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72c64052-d330-4d83-a2b5-37e7c7233934-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mhhqh\" (UID: 
\"72c64052-d330-4d83-a2b5-37e7c7233934\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mhhqh" Dec 10 16:41:36 crc kubenswrapper[4755]: I1210 16:41:36.184515 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7v4f5\" (UniqueName: \"kubernetes.io/projected/72c64052-d330-4d83-a2b5-37e7c7233934-kube-api-access-7v4f5\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mhhqh\" (UID: \"72c64052-d330-4d83-a2b5-37e7c7233934\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mhhqh" Dec 10 16:41:36 crc kubenswrapper[4755]: I1210 16:41:36.184726 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/72c64052-d330-4d83-a2b5-37e7c7233934-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mhhqh\" (UID: \"72c64052-d330-4d83-a2b5-37e7c7233934\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mhhqh" Dec 10 16:41:36 crc kubenswrapper[4755]: I1210 16:41:36.190800 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72c64052-d330-4d83-a2b5-37e7c7233934-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mhhqh\" (UID: \"72c64052-d330-4d83-a2b5-37e7c7233934\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mhhqh" Dec 10 16:41:36 crc kubenswrapper[4755]: I1210 16:41:36.199508 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/72c64052-d330-4d83-a2b5-37e7c7233934-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mhhqh\" (UID: \"72c64052-d330-4d83-a2b5-37e7c7233934\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mhhqh" Dec 10 16:41:36 crc kubenswrapper[4755]: I1210 16:41:36.203875 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7v4f5\" (UniqueName: \"kubernetes.io/projected/72c64052-d330-4d83-a2b5-37e7c7233934-kube-api-access-7v4f5\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mhhqh\" (UID: \"72c64052-d330-4d83-a2b5-37e7c7233934\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mhhqh" Dec 10 16:41:36 crc kubenswrapper[4755]: I1210 16:41:36.376646 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mhhqh" Dec 10 16:41:36 crc kubenswrapper[4755]: E1210 16:41:36.760077 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:41:36 crc kubenswrapper[4755]: I1210 16:41:36.982090 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mhhqh"] Dec 10 16:41:37 crc kubenswrapper[4755]: I1210 16:41:37.784759 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mhhqh" event={"ID":"72c64052-d330-4d83-a2b5-37e7c7233934","Type":"ContainerStarted","Data":"2705c5195afaf9387114de92846d6b0b2a8456fa63c2a3df63c4b1bf6309a8f6"} Dec 10 16:41:38 crc kubenswrapper[4755]: E1210 16:41:38.760420 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:41:40 crc kubenswrapper[4755]: I1210 16:41:40.814922 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mhhqh" event={"ID":"72c64052-d330-4d83-a2b5-37e7c7233934","Type":"ContainerStarted","Data":"5dc7fe05696e3fa363a1b1a40a69b258947ed8e306aee99ce19a00a9438ed032"} Dec 10 16:41:40 crc kubenswrapper[4755]: I1210 16:41:40.832897 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mhhqh" podStartSLOduration=1.633914051 podStartE2EDuration="4.832876512s" podCreationTimestamp="2025-12-10 16:41:36 +0000 UTC" firstStartedPulling="2025-12-10 16:41:36.980016155 +0000 UTC m=+4693.580899807" lastFinishedPulling="2025-12-10 16:41:40.178978636 +0000 UTC m=+4696.779862268" observedRunningTime="2025-12-10 16:41:40.828103738 +0000 UTC m=+4697.428987370" watchObservedRunningTime="2025-12-10 16:41:40.832876512 +0000 UTC m=+4697.433760144" Dec 10 16:41:47 crc kubenswrapper[4755]: E1210 16:41:47.762550 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:41:49 crc kubenswrapper[4755]: E1210 16:41:49.761595 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:41:59 crc kubenswrapper[4755]: E1210 16:41:59.759633 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:42:01 crc kubenswrapper[4755]: E1210 16:42:01.761306 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:42:11 crc kubenswrapper[4755]: E1210 16:42:11.758961 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:42:16 crc kubenswrapper[4755]: E1210 16:42:16.760003 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:42:23 crc kubenswrapper[4755]: E1210 16:42:23.785748 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:42:27 crc kubenswrapper[4755]: E1210 16:42:27.762397 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:42:37 crc kubenswrapper[4755]: E1210 16:42:37.761131 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:42:41 crc kubenswrapper[4755]: E1210 16:42:41.760537 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:42:47 crc kubenswrapper[4755]: I1210 16:42:47.520537 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-v9jnp"] Dec 10 16:42:47 crc kubenswrapper[4755]: I1210 16:42:47.524046 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v9jnp" Dec 10 16:42:47 crc kubenswrapper[4755]: I1210 16:42:47.550246 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v9jnp"] Dec 10 16:42:47 crc kubenswrapper[4755]: I1210 16:42:47.696734 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndfxw\" (UniqueName: \"kubernetes.io/projected/812a5533-62ee-43d6-88e9-8f004b78f7f1-kube-api-access-ndfxw\") pod \"redhat-marketplace-v9jnp\" (UID: \"812a5533-62ee-43d6-88e9-8f004b78f7f1\") " pod="openshift-marketplace/redhat-marketplace-v9jnp" Dec 10 16:42:47 crc kubenswrapper[4755]: I1210 16:42:47.697051 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/812a5533-62ee-43d6-88e9-8f004b78f7f1-catalog-content\") pod \"redhat-marketplace-v9jnp\" (UID: \"812a5533-62ee-43d6-88e9-8f004b78f7f1\") " pod="openshift-marketplace/redhat-marketplace-v9jnp" Dec 10 16:42:47 crc kubenswrapper[4755]: I1210 16:42:47.697181 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/812a5533-62ee-43d6-88e9-8f004b78f7f1-utilities\") pod \"redhat-marketplace-v9jnp\" (UID: \"812a5533-62ee-43d6-88e9-8f004b78f7f1\") " pod="openshift-marketplace/redhat-marketplace-v9jnp" Dec 10 16:42:47 crc kubenswrapper[4755]: I1210 16:42:47.798545 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/812a5533-62ee-43d6-88e9-8f004b78f7f1-utilities\") pod \"redhat-marketplace-v9jnp\" (UID: \"812a5533-62ee-43d6-88e9-8f004b78f7f1\") " pod="openshift-marketplace/redhat-marketplace-v9jnp" Dec 10 16:42:47 crc kubenswrapper[4755]: I1210 16:42:47.798629 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndfxw\" (UniqueName: \"kubernetes.io/projected/812a5533-62ee-43d6-88e9-8f004b78f7f1-kube-api-access-ndfxw\") pod \"redhat-marketplace-v9jnp\" (UID: \"812a5533-62ee-43d6-88e9-8f004b78f7f1\") " pod="openshift-marketplace/redhat-marketplace-v9jnp" Dec 10 16:42:47 crc kubenswrapper[4755]: I1210 16:42:47.798793 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/812a5533-62ee-43d6-88e9-8f004b78f7f1-catalog-content\") pod \"redhat-marketplace-v9jnp\" (UID: \"812a5533-62ee-43d6-88e9-8f004b78f7f1\") " pod="openshift-marketplace/redhat-marketplace-v9jnp" Dec 10 16:42:47 crc kubenswrapper[4755]: I1210 16:42:47.799103 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/812a5533-62ee-43d6-88e9-8f004b78f7f1-utilities\") pod \"redhat-marketplace-v9jnp\" (UID: \"812a5533-62ee-43d6-88e9-8f004b78f7f1\") " pod="openshift-marketplace/redhat-marketplace-v9jnp" Dec 10 16:42:47 crc kubenswrapper[4755]: I1210 16:42:47.799193 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/812a5533-62ee-43d6-88e9-8f004b78f7f1-catalog-content\") pod \"redhat-marketplace-v9jnp\" (UID: \"812a5533-62ee-43d6-88e9-8f004b78f7f1\") " pod="openshift-marketplace/redhat-marketplace-v9jnp" Dec 10 16:42:47 crc kubenswrapper[4755]: I1210 16:42:47.828264 4755 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-ndfxw\" (UniqueName: \"kubernetes.io/projected/812a5533-62ee-43d6-88e9-8f004b78f7f1-kube-api-access-ndfxw\") pod \"redhat-marketplace-v9jnp\" (UID: \"812a5533-62ee-43d6-88e9-8f004b78f7f1\") " pod="openshift-marketplace/redhat-marketplace-v9jnp" Dec 10 16:42:47 crc kubenswrapper[4755]: I1210 16:42:47.873608 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v9jnp" Dec 10 16:42:48 crc kubenswrapper[4755]: I1210 16:42:48.358233 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v9jnp"] Dec 10 16:42:48 crc kubenswrapper[4755]: W1210 16:42:48.368881 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod812a5533_62ee_43d6_88e9_8f004b78f7f1.slice/crio-39e31cf6018f24517bcad5c19fa0843363cc997b627f4d9198cf352c2cb59807 WatchSource:0}: Error finding container 39e31cf6018f24517bcad5c19fa0843363cc997b627f4d9198cf352c2cb59807: Status 404 returned error can't find the container with id 39e31cf6018f24517bcad5c19fa0843363cc997b627f4d9198cf352c2cb59807 Dec 10 16:42:48 crc kubenswrapper[4755]: I1210 16:42:48.537968 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v9jnp" event={"ID":"812a5533-62ee-43d6-88e9-8f004b78f7f1","Type":"ContainerStarted","Data":"39e31cf6018f24517bcad5c19fa0843363cc997b627f4d9198cf352c2cb59807"} Dec 10 16:42:49 crc kubenswrapper[4755]: I1210 16:42:49.552214 4755 generic.go:334] "Generic (PLEG): container finished" podID="812a5533-62ee-43d6-88e9-8f004b78f7f1" containerID="f79364777c9be3479a9b0a849763ab341a29dcc43c620b31e5c1f12ecf3bd214" exitCode=0 Dec 10 16:42:49 crc kubenswrapper[4755]: I1210 16:42:49.552289 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v9jnp" event={"ID":"812a5533-62ee-43d6-88e9-8f004b78f7f1","Type":"ContainerDied","Data":"f79364777c9be3479a9b0a849763ab341a29dcc43c620b31e5c1f12ecf3bd214"} Dec 10 16:42:50 crc kubenswrapper[4755]: I1210 16:42:50.641007 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-t8p57"] Dec 10 16:42:50 crc kubenswrapper[4755]: I1210 16:42:50.644857 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t8p57" Dec 10 16:42:50 crc kubenswrapper[4755]: I1210 16:42:50.661559 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t8p57"] Dec 10 16:42:50 crc kubenswrapper[4755]: I1210 16:42:50.689334 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a9bc23c-d723-428d-bfeb-79cdc46ed52a-catalog-content\") pod \"certified-operators-t8p57\" (UID: \"9a9bc23c-d723-428d-bfeb-79cdc46ed52a\") " pod="openshift-marketplace/certified-operators-t8p57" Dec 10 16:42:50 crc kubenswrapper[4755]: I1210 16:42:50.689728 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ld9f9\" (UniqueName: \"kubernetes.io/projected/9a9bc23c-d723-428d-bfeb-79cdc46ed52a-kube-api-access-ld9f9\") pod \"certified-operators-t8p57\" (UID: \"9a9bc23c-d723-428d-bfeb-79cdc46ed52a\") " pod="openshift-marketplace/certified-operators-t8p57" Dec 10 16:42:50 crc kubenswrapper[4755]: I1210 16:42:50.689936 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a9bc23c-d723-428d-bfeb-79cdc46ed52a-utilities\") pod \"certified-operators-t8p57\" (UID: \"9a9bc23c-d723-428d-bfeb-79cdc46ed52a\") " pod="openshift-marketplace/certified-operators-t8p57" Dec 10 16:42:50 crc kubenswrapper[4755]: I1210 16:42:50.792212 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a9bc23c-d723-428d-bfeb-79cdc46ed52a-utilities\") pod \"certified-operators-t8p57\" (UID: \"9a9bc23c-d723-428d-bfeb-79cdc46ed52a\") " pod="openshift-marketplace/certified-operators-t8p57" Dec 10 16:42:50 crc kubenswrapper[4755]: I1210 16:42:50.792374 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a9bc23c-d723-428d-bfeb-79cdc46ed52a-catalog-content\") pod \"certified-operators-t8p57\" (UID: \"9a9bc23c-d723-428d-bfeb-79cdc46ed52a\") " pod="openshift-marketplace/certified-operators-t8p57" Dec 10 16:42:50 crc kubenswrapper[4755]: I1210 16:42:50.792444 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ld9f9\" (UniqueName: \"kubernetes.io/projected/9a9bc23c-d723-428d-bfeb-79cdc46ed52a-kube-api-access-ld9f9\") pod \"certified-operators-t8p57\" (UID: \"9a9bc23c-d723-428d-bfeb-79cdc46ed52a\") " pod="openshift-marketplace/certified-operators-t8p57" Dec 10 16:42:50 crc kubenswrapper[4755]: I1210 16:42:50.793155 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a9bc23c-d723-428d-bfeb-79cdc46ed52a-catalog-content\") pod \"certified-operators-t8p57\" (UID: \"9a9bc23c-d723-428d-bfeb-79cdc46ed52a\") " pod="openshift-marketplace/certified-operators-t8p57" Dec 10 16:42:50 crc kubenswrapper[4755]: I1210 16:42:50.793208 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a9bc23c-d723-428d-bfeb-79cdc46ed52a-utilities\") pod \"certified-operators-t8p57\" (UID: \"9a9bc23c-d723-428d-bfeb-79cdc46ed52a\") " pod="openshift-marketplace/certified-operators-t8p57" Dec 10 16:42:50 crc kubenswrapper[4755]: I1210 16:42:50.824540 4755 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-ld9f9\" (UniqueName: \"kubernetes.io/projected/9a9bc23c-d723-428d-bfeb-79cdc46ed52a-kube-api-access-ld9f9\") pod \"certified-operators-t8p57\" (UID: \"9a9bc23c-d723-428d-bfeb-79cdc46ed52a\") " pod="openshift-marketplace/certified-operators-t8p57" Dec 10 16:42:50 crc kubenswrapper[4755]: I1210 16:42:50.984973 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t8p57" Dec 10 16:42:51 crc kubenswrapper[4755]: W1210 16:42:51.545157 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a9bc23c_d723_428d_bfeb_79cdc46ed52a.slice/crio-2fbc256c7eebb951771a7de3361bbd66a28de0cfb406ea402096fbbff3ae9208 WatchSource:0}: Error finding container 2fbc256c7eebb951771a7de3361bbd66a28de0cfb406ea402096fbbff3ae9208: Status 404 returned error can't find the container with id 2fbc256c7eebb951771a7de3361bbd66a28de0cfb406ea402096fbbff3ae9208 Dec 10 16:42:51 crc kubenswrapper[4755]: I1210 16:42:51.565342 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t8p57"] Dec 10 16:42:51 crc kubenswrapper[4755]: I1210 16:42:51.575400 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v9jnp" event={"ID":"812a5533-62ee-43d6-88e9-8f004b78f7f1","Type":"ContainerStarted","Data":"7c89e0c694ae15cc34f94868ea929e6f97c730586cc1e21ff979eef6d596704e"} Dec 10 16:42:51 crc kubenswrapper[4755]: I1210 16:42:51.578171 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t8p57" event={"ID":"9a9bc23c-d723-428d-bfeb-79cdc46ed52a","Type":"ContainerStarted","Data":"2fbc256c7eebb951771a7de3361bbd66a28de0cfb406ea402096fbbff3ae9208"} Dec 10 16:42:51 crc kubenswrapper[4755]: E1210 16:42:51.760147 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:42:52 crc kubenswrapper[4755]: I1210 16:42:52.591190 4755 generic.go:334] "Generic (PLEG): container finished" podID="812a5533-62ee-43d6-88e9-8f004b78f7f1" containerID="7c89e0c694ae15cc34f94868ea929e6f97c730586cc1e21ff979eef6d596704e" exitCode=0 Dec 10 16:42:52 crc kubenswrapper[4755]: I1210 16:42:52.591849 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v9jnp" event={"ID":"812a5533-62ee-43d6-88e9-8f004b78f7f1","Type":"ContainerDied","Data":"7c89e0c694ae15cc34f94868ea929e6f97c730586cc1e21ff979eef6d596704e"} Dec 10 16:42:52 crc kubenswrapper[4755]: I1210 16:42:52.596057 4755 generic.go:334] "Generic (PLEG): container finished" podID="9a9bc23c-d723-428d-bfeb-79cdc46ed52a" containerID="29e24a2f029820115d7d33490661162c7d60fd641751dc68002ab7ac6c0e8669" exitCode=0 Dec 10 16:42:52 crc kubenswrapper[4755]: I1210 16:42:52.596143 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t8p57" event={"ID":"9a9bc23c-d723-428d-bfeb-79cdc46ed52a","Type":"ContainerDied","Data":"29e24a2f029820115d7d33490661162c7d60fd641751dc68002ab7ac6c0e8669"} Dec 10 16:42:53 crc kubenswrapper[4755]: I1210 16:42:53.609296 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-v9jnp" event={"ID":"812a5533-62ee-43d6-88e9-8f004b78f7f1","Type":"ContainerStarted","Data":"7f6a5f1235ee15537cf8c29c65d6a6521be4b53a0794b2a696c6afcbe4c871d1"} Dec 10 16:42:53 crc kubenswrapper[4755]: I1210 16:42:53.632127 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-v9jnp" podStartSLOduration=3.136003847 podStartE2EDuration="6.632107922s" podCreationTimestamp="2025-12-10 16:42:47 +0000 UTC" firstStartedPulling="2025-12-10 16:42:49.555049162 +0000 UTC m=+4766.155932814" lastFinishedPulling="2025-12-10 16:42:53.051153237 +0000 UTC m=+4769.652036889" observedRunningTime="2025-12-10 16:42:53.628745398 +0000 UTC m=+4770.229629030" watchObservedRunningTime="2025-12-10 16:42:53.632107922 +0000 UTC m=+4770.232991554" Dec 10 16:42:54 crc kubenswrapper[4755]: I1210 16:42:54.621457 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t8p57" event={"ID":"9a9bc23c-d723-428d-bfeb-79cdc46ed52a","Type":"ContainerStarted","Data":"8f7fd3ab33c777256f840884b47c41436ab978ab9b6faf1b75f606a71125ebf2"} Dec 10 16:42:54 crc kubenswrapper[4755]: E1210 16:42:54.759911 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:42:55 crc kubenswrapper[4755]: I1210 16:42:55.665596 4755 generic.go:334] "Generic (PLEG): container finished" podID="9a9bc23c-d723-428d-bfeb-79cdc46ed52a" containerID="8f7fd3ab33c777256f840884b47c41436ab978ab9b6faf1b75f606a71125ebf2" exitCode=0 Dec 10 16:42:55 crc kubenswrapper[4755]: I1210 16:42:55.666853 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t8p57" event={"ID":"9a9bc23c-d723-428d-bfeb-79cdc46ed52a","Type":"ContainerDied","Data":"8f7fd3ab33c777256f840884b47c41436ab978ab9b6faf1b75f606a71125ebf2"} Dec 10 16:42:56 crc kubenswrapper[4755]: I1210 16:42:56.677634 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t8p57" event={"ID":"9a9bc23c-d723-428d-bfeb-79cdc46ed52a","Type":"ContainerStarted","Data":"a967277b6f3351c65ee183eadcf1364434c06e1e0548271a0a9181b794309a55"} Dec 10 16:42:56 crc kubenswrapper[4755]: I1210 16:42:56.705722 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-t8p57" podStartSLOduration=3.115058364 podStartE2EDuration="6.705692184s" podCreationTimestamp="2025-12-10 16:42:50 +0000 UTC" firstStartedPulling="2025-12-10 16:42:52.598620944 +0000 UTC m=+4769.199504606" lastFinishedPulling="2025-12-10 16:42:56.189254794 +0000 UTC m=+4772.790138426" observedRunningTime="2025-12-10 16:42:56.698246576 +0000 UTC m=+4773.299130208" watchObservedRunningTime="2025-12-10 16:42:56.705692184 +0000 UTC m=+4773.306575836" Dec 10 16:42:57 crc kubenswrapper[4755]: I1210 16:42:57.873964 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-v9jnp" Dec 10 16:42:57 crc kubenswrapper[4755]: I1210 16:42:57.874225 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-v9jnp" Dec 10 16:42:57 crc kubenswrapper[4755]: I1210 16:42:57.923767 4755 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-v9jnp" Dec 10 16:42:58 crc kubenswrapper[4755]: I1210 16:42:58.760228 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-v9jnp" Dec 10 16:43:00 crc kubenswrapper[4755]: I1210 16:43:00.008229 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-v9jnp"] Dec 10 16:43:00 crc kubenswrapper[4755]: I1210 16:43:00.724170 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-v9jnp" podUID="812a5533-62ee-43d6-88e9-8f004b78f7f1" containerName="registry-server" containerID="cri-o://7f6a5f1235ee15537cf8c29c65d6a6521be4b53a0794b2a696c6afcbe4c871d1" gracePeriod=2 Dec 10 16:43:00 crc kubenswrapper[4755]: I1210 16:43:00.985938 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-t8p57" Dec 10 16:43:00 crc kubenswrapper[4755]: I1210 16:43:00.992996 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-t8p57" Dec 10 16:43:01 crc kubenswrapper[4755]: I1210 16:43:01.047688 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-t8p57" Dec 10 16:43:01 crc kubenswrapper[4755]: I1210 16:43:01.735004 4755 generic.go:334] "Generic (PLEG): container finished" podID="812a5533-62ee-43d6-88e9-8f004b78f7f1" containerID="7f6a5f1235ee15537cf8c29c65d6a6521be4b53a0794b2a696c6afcbe4c871d1" exitCode=0 Dec 10 16:43:01 crc kubenswrapper[4755]: I1210 16:43:01.735058 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v9jnp" event={"ID":"812a5533-62ee-43d6-88e9-8f004b78f7f1","Type":"ContainerDied","Data":"7f6a5f1235ee15537cf8c29c65d6a6521be4b53a0794b2a696c6afcbe4c871d1"} Dec 10 16:43:01 crc kubenswrapper[4755]: I1210 16:43:01.735278 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v9jnp" event={"ID":"812a5533-62ee-43d6-88e9-8f004b78f7f1","Type":"ContainerDied","Data":"39e31cf6018f24517bcad5c19fa0843363cc997b627f4d9198cf352c2cb59807"} Dec 10 16:43:01 crc kubenswrapper[4755]: I1210 16:43:01.735293 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39e31cf6018f24517bcad5c19fa0843363cc997b627f4d9198cf352c2cb59807" Dec 10 16:43:01 crc kubenswrapper[4755]: I1210 16:43:01.840443 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-t8p57" Dec 10 16:43:01 crc kubenswrapper[4755]: I1210 16:43:01.881610 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v9jnp" Dec 10 16:43:02 crc kubenswrapper[4755]: I1210 16:43:02.037716 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndfxw\" (UniqueName: \"kubernetes.io/projected/812a5533-62ee-43d6-88e9-8f004b78f7f1-kube-api-access-ndfxw\") pod \"812a5533-62ee-43d6-88e9-8f004b78f7f1\" (UID: \"812a5533-62ee-43d6-88e9-8f004b78f7f1\") " Dec 10 16:43:02 crc kubenswrapper[4755]: I1210 16:43:02.038140 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/812a5533-62ee-43d6-88e9-8f004b78f7f1-catalog-content\") pod \"812a5533-62ee-43d6-88e9-8f004b78f7f1\" (UID: \"812a5533-62ee-43d6-88e9-8f004b78f7f1\") " Dec 10 16:43:02 crc kubenswrapper[4755]: I1210 16:43:02.038250 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/812a5533-62ee-43d6-88e9-8f004b78f7f1-utilities\") pod \"812a5533-62ee-43d6-88e9-8f004b78f7f1\" (UID: \"812a5533-62ee-43d6-88e9-8f004b78f7f1\") " Dec 10 16:43:02 crc kubenswrapper[4755]: I1210 16:43:02.040198 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/812a5533-62ee-43d6-88e9-8f004b78f7f1-utilities" (OuterVolumeSpecName: "utilities") pod "812a5533-62ee-43d6-88e9-8f004b78f7f1" (UID: "812a5533-62ee-43d6-88e9-8f004b78f7f1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 16:43:02 crc kubenswrapper[4755]: I1210 16:43:02.065679 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/812a5533-62ee-43d6-88e9-8f004b78f7f1-kube-api-access-ndfxw" (OuterVolumeSpecName: "kube-api-access-ndfxw") pod "812a5533-62ee-43d6-88e9-8f004b78f7f1" (UID: "812a5533-62ee-43d6-88e9-8f004b78f7f1"). InnerVolumeSpecName "kube-api-access-ndfxw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 16:43:02 crc kubenswrapper[4755]: I1210 16:43:02.084743 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/812a5533-62ee-43d6-88e9-8f004b78f7f1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "812a5533-62ee-43d6-88e9-8f004b78f7f1" (UID: "812a5533-62ee-43d6-88e9-8f004b78f7f1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 16:43:02 crc kubenswrapper[4755]: I1210 16:43:02.141570 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ndfxw\" (UniqueName: \"kubernetes.io/projected/812a5533-62ee-43d6-88e9-8f004b78f7f1-kube-api-access-ndfxw\") on node \"crc\" DevicePath \"\"" Dec 10 16:43:02 crc kubenswrapper[4755]: I1210 16:43:02.141621 4755 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/812a5533-62ee-43d6-88e9-8f004b78f7f1-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 16:43:02 crc kubenswrapper[4755]: I1210 16:43:02.141635 4755 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/812a5533-62ee-43d6-88e9-8f004b78f7f1-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 16:43:02 crc kubenswrapper[4755]: I1210 16:43:02.745965 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v9jnp" Dec 10 16:43:02 crc kubenswrapper[4755]: I1210 16:43:02.812049 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-v9jnp"] Dec 10 16:43:02 crc kubenswrapper[4755]: I1210 16:43:02.823233 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-v9jnp"] Dec 10 16:43:03 crc kubenswrapper[4755]: I1210 16:43:03.420196 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t8p57"] Dec 10 16:43:03 crc kubenswrapper[4755]: I1210 16:43:03.769399 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="812a5533-62ee-43d6-88e9-8f004b78f7f1" path="/var/lib/kubelet/pods/812a5533-62ee-43d6-88e9-8f004b78f7f1/volumes" Dec 10 16:43:04 crc kubenswrapper[4755]: E1210 16:43:04.759072 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:43:04 crc kubenswrapper[4755]: I1210 16:43:04.766860 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-t8p57" podUID="9a9bc23c-d723-428d-bfeb-79cdc46ed52a" containerName="registry-server" containerID="cri-o://a967277b6f3351c65ee183eadcf1364434c06e1e0548271a0a9181b794309a55" gracePeriod=2 Dec 10 16:43:05 crc kubenswrapper[4755]: I1210 16:43:05.226796 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t8p57" Dec 10 16:43:05 crc kubenswrapper[4755]: I1210 16:43:05.332559 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a9bc23c-d723-428d-bfeb-79cdc46ed52a-utilities\") pod \"9a9bc23c-d723-428d-bfeb-79cdc46ed52a\" (UID: \"9a9bc23c-d723-428d-bfeb-79cdc46ed52a\") " Dec 10 16:43:05 crc kubenswrapper[4755]: I1210 16:43:05.332656 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a9bc23c-d723-428d-bfeb-79cdc46ed52a-catalog-content\") pod \"9a9bc23c-d723-428d-bfeb-79cdc46ed52a\" (UID: \"9a9bc23c-d723-428d-bfeb-79cdc46ed52a\") " Dec 10 16:43:05 crc kubenswrapper[4755]: I1210 16:43:05.332834 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ld9f9\" (UniqueName: \"kubernetes.io/projected/9a9bc23c-d723-428d-bfeb-79cdc46ed52a-kube-api-access-ld9f9\") pod \"9a9bc23c-d723-428d-bfeb-79cdc46ed52a\" (UID: \"9a9bc23c-d723-428d-bfeb-79cdc46ed52a\") " Dec 10 16:43:05 crc kubenswrapper[4755]: I1210 16:43:05.333428 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a9bc23c-d723-428d-bfeb-79cdc46ed52a-utilities" (OuterVolumeSpecName: "utilities") pod "9a9bc23c-d723-428d-bfeb-79cdc46ed52a" (UID: "9a9bc23c-d723-428d-bfeb-79cdc46ed52a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 16:43:05 crc kubenswrapper[4755]: I1210 16:43:05.337990 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a9bc23c-d723-428d-bfeb-79cdc46ed52a-kube-api-access-ld9f9" (OuterVolumeSpecName: "kube-api-access-ld9f9") pod "9a9bc23c-d723-428d-bfeb-79cdc46ed52a" (UID: "9a9bc23c-d723-428d-bfeb-79cdc46ed52a"). InnerVolumeSpecName "kube-api-access-ld9f9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 16:43:05 crc kubenswrapper[4755]: I1210 16:43:05.385796 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a9bc23c-d723-428d-bfeb-79cdc46ed52a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9a9bc23c-d723-428d-bfeb-79cdc46ed52a" (UID: "9a9bc23c-d723-428d-bfeb-79cdc46ed52a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 16:43:05 crc kubenswrapper[4755]: I1210 16:43:05.435531 4755 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a9bc23c-d723-428d-bfeb-79cdc46ed52a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 16:43:05 crc kubenswrapper[4755]: I1210 16:43:05.435737 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ld9f9\" (UniqueName: \"kubernetes.io/projected/9a9bc23c-d723-428d-bfeb-79cdc46ed52a-kube-api-access-ld9f9\") on node \"crc\" DevicePath \"\"" Dec 10 16:43:05 crc kubenswrapper[4755]: I1210 16:43:05.435768 4755 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a9bc23c-d723-428d-bfeb-79cdc46ed52a-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 16:43:05 crc kubenswrapper[4755]: I1210 16:43:05.778554 4755 generic.go:334] "Generic (PLEG): container finished" podID="9a9bc23c-d723-428d-bfeb-79cdc46ed52a" containerID="a967277b6f3351c65ee183eadcf1364434c06e1e0548271a0a9181b794309a55" exitCode=0 Dec 10 16:43:05 crc kubenswrapper[4755]: I1210 16:43:05.778602 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t8p57" event={"ID":"9a9bc23c-d723-428d-bfeb-79cdc46ed52a","Type":"ContainerDied","Data":"a967277b6f3351c65ee183eadcf1364434c06e1e0548271a0a9181b794309a55"} Dec 10 16:43:05 crc kubenswrapper[4755]: I1210 16:43:05.778632 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t8p57" event={"ID":"9a9bc23c-d723-428d-bfeb-79cdc46ed52a","Type":"ContainerDied","Data":"2fbc256c7eebb951771a7de3361bbd66a28de0cfb406ea402096fbbff3ae9208"} Dec 10 16:43:05 crc kubenswrapper[4755]: I1210 16:43:05.778691 4755 scope.go:117] "RemoveContainer" containerID="a967277b6f3351c65ee183eadcf1364434c06e1e0548271a0a9181b794309a55" Dec 10 16:43:05 crc kubenswrapper[4755]: I1210 16:43:05.778831 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t8p57" Dec 10 16:43:05 crc kubenswrapper[4755]: I1210 16:43:05.807113 4755 scope.go:117] "RemoveContainer" containerID="8f7fd3ab33c777256f840884b47c41436ab978ab9b6faf1b75f606a71125ebf2" Dec 10 16:43:05 crc kubenswrapper[4755]: I1210 16:43:05.828882 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t8p57"] Dec 10 16:43:05 crc kubenswrapper[4755]: I1210 16:43:05.838919 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-t8p57"] Dec 10 16:43:05 crc kubenswrapper[4755]: I1210 16:43:05.840409 4755 scope.go:117] "RemoveContainer" containerID="29e24a2f029820115d7d33490661162c7d60fd641751dc68002ab7ac6c0e8669" Dec 10 16:43:05 crc kubenswrapper[4755]: I1210 16:43:05.881283 4755 scope.go:117] "RemoveContainer" containerID="a967277b6f3351c65ee183eadcf1364434c06e1e0548271a0a9181b794309a55" Dec 10 16:43:05 crc kubenswrapper[4755]: E1210 16:43:05.884535 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a967277b6f3351c65ee183eadcf1364434c06e1e0548271a0a9181b794309a55\": container with ID starting with a967277b6f3351c65ee183eadcf1364434c06e1e0548271a0a9181b794309a55 not found: ID does not exist" containerID="a967277b6f3351c65ee183eadcf1364434c06e1e0548271a0a9181b794309a55" Dec 10 16:43:05 crc kubenswrapper[4755]: I1210 16:43:05.884596 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a967277b6f3351c65ee183eadcf1364434c06e1e0548271a0a9181b794309a55"} err="failed to get container status \"a967277b6f3351c65ee183eadcf1364434c06e1e0548271a0a9181b794309a55\": rpc error: code = NotFound desc = could not find container \"a967277b6f3351c65ee183eadcf1364434c06e1e0548271a0a9181b794309a55\": container with ID starting with a967277b6f3351c65ee183eadcf1364434c06e1e0548271a0a9181b794309a55 not found: ID does not exist" Dec 10 16:43:05 crc kubenswrapper[4755]: I1210 16:43:05.884637 4755 scope.go:117] "RemoveContainer" containerID="8f7fd3ab33c777256f840884b47c41436ab978ab9b6faf1b75f606a71125ebf2" Dec 10 16:43:05 crc kubenswrapper[4755]: E1210 16:43:05.885224 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f7fd3ab33c777256f840884b47c41436ab978ab9b6faf1b75f606a71125ebf2\": container with ID starting with 8f7fd3ab33c777256f840884b47c41436ab978ab9b6faf1b75f606a71125ebf2 not found: ID does not exist" containerID="8f7fd3ab33c777256f840884b47c41436ab978ab9b6faf1b75f606a71125ebf2" Dec 10 16:43:05 crc kubenswrapper[4755]: I1210 16:43:05.885281 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f7fd3ab33c777256f840884b47c41436ab978ab9b6faf1b75f606a71125ebf2"} err="failed to get container status \"8f7fd3ab33c777256f840884b47c41436ab978ab9b6faf1b75f606a71125ebf2\": rpc error: code = NotFound desc = could not find container \"8f7fd3ab33c777256f840884b47c41436ab978ab9b6faf1b75f606a71125ebf2\": container with ID starting with 8f7fd3ab33c777256f840884b47c41436ab978ab9b6faf1b75f606a71125ebf2 not found: ID does not exist" Dec 10 16:43:05 crc kubenswrapper[4755]: I1210 16:43:05.885316 4755 scope.go:117] "RemoveContainer" containerID="29e24a2f029820115d7d33490661162c7d60fd641751dc68002ab7ac6c0e8669" Dec 10 16:43:05 crc kubenswrapper[4755]: E1210 16:43:05.885797 4755 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"29e24a2f029820115d7d33490661162c7d60fd641751dc68002ab7ac6c0e8669\": container with ID starting with 29e24a2f029820115d7d33490661162c7d60fd641751dc68002ab7ac6c0e8669 not found: ID does not exist" containerID="29e24a2f029820115d7d33490661162c7d60fd641751dc68002ab7ac6c0e8669" Dec 10 16:43:05 crc kubenswrapper[4755]: I1210 16:43:05.885821 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29e24a2f029820115d7d33490661162c7d60fd641751dc68002ab7ac6c0e8669"} err="failed to get container status \"29e24a2f029820115d7d33490661162c7d60fd641751dc68002ab7ac6c0e8669\": rpc error: code = NotFound desc = could not find container \"29e24a2f029820115d7d33490661162c7d60fd641751dc68002ab7ac6c0e8669\": container with ID starting with 29e24a2f029820115d7d33490661162c7d60fd641751dc68002ab7ac6c0e8669 not found: ID does not exist" Dec 10 16:43:07 crc kubenswrapper[4755]: I1210 16:43:07.768616 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a9bc23c-d723-428d-bfeb-79cdc46ed52a" path="/var/lib/kubelet/pods/9a9bc23c-d723-428d-bfeb-79cdc46ed52a/volumes" Dec 10 16:43:08 crc kubenswrapper[4755]: E1210 16:43:08.759569 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:43:18 crc kubenswrapper[4755]: E1210 16:43:18.763271 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:43:23 crc kubenswrapper[4755]: E1210 16:43:23.777731 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:43:30 crc kubenswrapper[4755]: E1210 16:43:30.759609 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:43:35 crc kubenswrapper[4755]: E1210 16:43:35.760923 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:43:40 crc kubenswrapper[4755]: I1210 16:43:40.359148 4755 patch_prober.go:28] interesting pod/machine-config-daemon-ggt8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" start-of-body= Dec 10 16:43:40 crc kubenswrapper[4755]: I1210 16:43:40.359837 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 16:43:45 crc kubenswrapper[4755]: E1210 16:43:45.760270 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:43:46 crc kubenswrapper[4755]: E1210 16:43:46.759620 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:43:57 crc kubenswrapper[4755]: E1210 16:43:57.765911 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:44:00 crc kubenswrapper[4755]: E1210 16:44:00.759568 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:44:10 crc kubenswrapper[4755]: I1210 16:44:10.359356 4755 patch_prober.go:28] interesting pod/machine-config-daemon-ggt8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 16:44:10 crc kubenswrapper[4755]: I1210 16:44:10.360669 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 16:44:12 crc kubenswrapper[4755]: E1210 16:44:12.760052 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:44:15 crc kubenswrapper[4755]: E1210 16:44:15.764594 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:44:25 crc kubenswrapper[4755]: E1210 16:44:25.761149 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:44:29 crc kubenswrapper[4755]: E1210 16:44:29.762805 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:44:37 crc kubenswrapper[4755]: E1210 16:44:37.760633 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:44:40 crc kubenswrapper[4755]: I1210 16:44:40.359028 4755 patch_prober.go:28] interesting pod/machine-config-daemon-ggt8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 16:44:40 crc kubenswrapper[4755]: I1210 16:44:40.360199 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 16:44:40 crc kubenswrapper[4755]: I1210 16:44:40.360357 4755 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" Dec 10 16:44:40 crc kubenswrapper[4755]: I1210 16:44:40.361289 4755 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"079523fb0e18a86eddd800cf00287e3feddaac93af0f36614f19e6b4a4894ff7"} pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 10 16:44:40 crc kubenswrapper[4755]: I1210 16:44:40.361539 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" containerName="machine-config-daemon" containerID="cri-o://079523fb0e18a86eddd800cf00287e3feddaac93af0f36614f19e6b4a4894ff7" gracePeriod=600 Dec 10 16:44:40 crc kubenswrapper[4755]: I1210 16:44:40.893241 4755 generic.go:334] "Generic (PLEG): container finished" podID="b132a8b9-1c99-414d-8773-229bf36b305d" containerID="079523fb0e18a86eddd800cf00287e3feddaac93af0f36614f19e6b4a4894ff7" exitCode=0 Dec 10 16:44:40 crc kubenswrapper[4755]: I1210 16:44:40.893707 4755 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" event={"ID":"b132a8b9-1c99-414d-8773-229bf36b305d","Type":"ContainerDied","Data":"079523fb0e18a86eddd800cf00287e3feddaac93af0f36614f19e6b4a4894ff7"} Dec 10 16:44:40 crc kubenswrapper[4755]: I1210 16:44:40.893748 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" event={"ID":"b132a8b9-1c99-414d-8773-229bf36b305d","Type":"ContainerStarted","Data":"c02a5ae0f2b694ce1165db44430b56590e65e71f05780d61256f730ed3b1326e"} Dec 10 16:44:40 crc kubenswrapper[4755]: I1210 16:44:40.893823 4755 scope.go:117] "RemoveContainer" containerID="a0141776b31c2a932cec3bfb5dc79420825b6de67ef3162cf9228f8d7fa8df5a" Dec 10 16:44:43 crc kubenswrapper[4755]: E1210 16:44:43.774674 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:44:51 crc kubenswrapper[4755]: E1210 16:44:51.760644 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:44:54 crc kubenswrapper[4755]: E1210 16:44:54.762579 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:45:00 crc kubenswrapper[4755]: I1210 16:45:00.153595 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29423085-9h2pv"] Dec 10 16:45:00 crc kubenswrapper[4755]: E1210 16:45:00.154368 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a9bc23c-d723-428d-bfeb-79cdc46ed52a" containerName="extract-utilities" Dec 10 16:45:00 crc kubenswrapper[4755]: I1210 16:45:00.154380 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a9bc23c-d723-428d-bfeb-79cdc46ed52a" containerName="extract-utilities" Dec 10 16:45:00 crc kubenswrapper[4755]: E1210 16:45:00.154389 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="812a5533-62ee-43d6-88e9-8f004b78f7f1" containerName="registry-server" Dec 10 16:45:00 crc kubenswrapper[4755]: I1210 16:45:00.154396 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="812a5533-62ee-43d6-88e9-8f004b78f7f1" containerName="registry-server" Dec 10 16:45:00 crc kubenswrapper[4755]: E1210 16:45:00.154405 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="812a5533-62ee-43d6-88e9-8f004b78f7f1" containerName="extract-utilities" Dec 10 16:45:00 crc kubenswrapper[4755]: I1210 16:45:00.154413 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="812a5533-62ee-43d6-88e9-8f004b78f7f1" containerName="extract-utilities" Dec 10 16:45:00 crc kubenswrapper[4755]: E1210 16:45:00.154432 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a9bc23c-d723-428d-bfeb-79cdc46ed52a" 
containerName="registry-server" Dec 10 16:45:00 crc kubenswrapper[4755]: I1210 16:45:00.154438 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a9bc23c-d723-428d-bfeb-79cdc46ed52a" containerName="registry-server" Dec 10 16:45:00 crc kubenswrapper[4755]: E1210 16:45:00.154459 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a9bc23c-d723-428d-bfeb-79cdc46ed52a" containerName="extract-content" Dec 10 16:45:00 crc kubenswrapper[4755]: I1210 16:45:00.154482 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a9bc23c-d723-428d-bfeb-79cdc46ed52a" containerName="extract-content" Dec 10 16:45:00 crc kubenswrapper[4755]: E1210 16:45:00.154490 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="812a5533-62ee-43d6-88e9-8f004b78f7f1" containerName="extract-content" Dec 10 16:45:00 crc kubenswrapper[4755]: I1210 16:45:00.154496 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="812a5533-62ee-43d6-88e9-8f004b78f7f1" containerName="extract-content" Dec 10 16:45:00 crc kubenswrapper[4755]: I1210 16:45:00.154681 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="812a5533-62ee-43d6-88e9-8f004b78f7f1" containerName="registry-server" Dec 10 16:45:00 crc kubenswrapper[4755]: I1210 16:45:00.154709 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a9bc23c-d723-428d-bfeb-79cdc46ed52a" containerName="registry-server" Dec 10 16:45:00 crc kubenswrapper[4755]: I1210 16:45:00.155644 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29423085-9h2pv" Dec 10 16:45:00 crc kubenswrapper[4755]: I1210 16:45:00.158207 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 10 16:45:00 crc kubenswrapper[4755]: I1210 16:45:00.158661 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 10 16:45:00 crc kubenswrapper[4755]: I1210 16:45:00.171694 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29423085-9h2pv"] Dec 10 16:45:00 crc kubenswrapper[4755]: I1210 16:45:00.266555 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f178fcfe-0f83-42b1-9b53-5c08a92dc60f-config-volume\") pod \"collect-profiles-29423085-9h2pv\" (UID: \"f178fcfe-0f83-42b1-9b53-5c08a92dc60f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423085-9h2pv" Dec 10 16:45:00 crc kubenswrapper[4755]: I1210 16:45:00.266729 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rc7q\" (UniqueName: \"kubernetes.io/projected/f178fcfe-0f83-42b1-9b53-5c08a92dc60f-kube-api-access-5rc7q\") pod \"collect-profiles-29423085-9h2pv\" (UID: \"f178fcfe-0f83-42b1-9b53-5c08a92dc60f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423085-9h2pv" Dec 10 16:45:00 crc kubenswrapper[4755]: I1210 16:45:00.266781 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f178fcfe-0f83-42b1-9b53-5c08a92dc60f-secret-volume\") pod \"collect-profiles-29423085-9h2pv\" (UID: \"f178fcfe-0f83-42b1-9b53-5c08a92dc60f\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29423085-9h2pv" Dec 10 16:45:00 crc kubenswrapper[4755]: I1210 16:45:00.368335 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f178fcfe-0f83-42b1-9b53-5c08a92dc60f-config-volume\") pod \"collect-profiles-29423085-9h2pv\" (UID: \"f178fcfe-0f83-42b1-9b53-5c08a92dc60f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423085-9h2pv" Dec 10 16:45:00 crc kubenswrapper[4755]: I1210 16:45:00.368516 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rc7q\" (UniqueName: \"kubernetes.io/projected/f178fcfe-0f83-42b1-9b53-5c08a92dc60f-kube-api-access-5rc7q\") pod \"collect-profiles-29423085-9h2pv\" (UID: \"f178fcfe-0f83-42b1-9b53-5c08a92dc60f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423085-9h2pv" Dec 10 16:45:00 crc kubenswrapper[4755]: I1210 16:45:00.368591 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f178fcfe-0f83-42b1-9b53-5c08a92dc60f-secret-volume\") pod \"collect-profiles-29423085-9h2pv\" (UID: \"f178fcfe-0f83-42b1-9b53-5c08a92dc60f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423085-9h2pv" Dec 10 16:45:00 crc kubenswrapper[4755]: I1210 16:45:00.372157 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f178fcfe-0f83-42b1-9b53-5c08a92dc60f-config-volume\") pod \"collect-profiles-29423085-9h2pv\" (UID: \"f178fcfe-0f83-42b1-9b53-5c08a92dc60f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423085-9h2pv" Dec 10 16:45:00 crc kubenswrapper[4755]: I1210 16:45:00.376252 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f178fcfe-0f83-42b1-9b53-5c08a92dc60f-secret-volume\") pod \"collect-profiles-29423085-9h2pv\" (UID: \"f178fcfe-0f83-42b1-9b53-5c08a92dc60f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423085-9h2pv" Dec 10 16:45:00 crc kubenswrapper[4755]: I1210 16:45:00.386401 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rc7q\" (UniqueName: \"kubernetes.io/projected/f178fcfe-0f83-42b1-9b53-5c08a92dc60f-kube-api-access-5rc7q\") pod \"collect-profiles-29423085-9h2pv\" (UID: \"f178fcfe-0f83-42b1-9b53-5c08a92dc60f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423085-9h2pv" Dec 10 16:45:00 crc kubenswrapper[4755]: I1210 16:45:00.527784 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29423085-9h2pv" Dec 10 16:45:01 crc kubenswrapper[4755]: W1210 16:45:01.013063 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf178fcfe_0f83_42b1_9b53_5c08a92dc60f.slice/crio-821c4c9b4a2f381f3c644e926f89f08d97f5816dd9798aec7e6a9d063bf3e0bc WatchSource:0}: Error finding container 821c4c9b4a2f381f3c644e926f89f08d97f5816dd9798aec7e6a9d063bf3e0bc: Status 404 returned error can't find the container with id 821c4c9b4a2f381f3c644e926f89f08d97f5816dd9798aec7e6a9d063bf3e0bc Dec 10 16:45:01 crc kubenswrapper[4755]: I1210 16:45:01.015792 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29423085-9h2pv"] Dec 10 16:45:01 crc kubenswrapper[4755]: I1210 16:45:01.119917 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29423085-9h2pv" event={"ID":"f178fcfe-0f83-42b1-9b53-5c08a92dc60f","Type":"ContainerStarted","Data":"821c4c9b4a2f381f3c644e926f89f08d97f5816dd9798aec7e6a9d063bf3e0bc"} Dec 10 16:45:02 crc kubenswrapper[4755]: I1210 16:45:02.134397 4755 generic.go:334] "Generic (PLEG): container finished" podID="f178fcfe-0f83-42b1-9b53-5c08a92dc60f" containerID="64f3e289d62cafd24e32043de824cc71c925f1adbaf095539bcd0eec4de96fc1" exitCode=0 Dec 10 16:45:02 crc kubenswrapper[4755]: I1210 16:45:02.135537 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29423085-9h2pv" event={"ID":"f178fcfe-0f83-42b1-9b53-5c08a92dc60f","Type":"ContainerDied","Data":"64f3e289d62cafd24e32043de824cc71c925f1adbaf095539bcd0eec4de96fc1"} Dec 10 16:45:03 crc kubenswrapper[4755]: I1210 16:45:03.532988 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29423085-9h2pv" Dec 10 16:45:03 crc kubenswrapper[4755]: I1210 16:45:03.669595 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f178fcfe-0f83-42b1-9b53-5c08a92dc60f-secret-volume\") pod \"f178fcfe-0f83-42b1-9b53-5c08a92dc60f\" (UID: \"f178fcfe-0f83-42b1-9b53-5c08a92dc60f\") " Dec 10 16:45:03 crc kubenswrapper[4755]: I1210 16:45:03.669735 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rc7q\" (UniqueName: \"kubernetes.io/projected/f178fcfe-0f83-42b1-9b53-5c08a92dc60f-kube-api-access-5rc7q\") pod \"f178fcfe-0f83-42b1-9b53-5c08a92dc60f\" (UID: \"f178fcfe-0f83-42b1-9b53-5c08a92dc60f\") " Dec 10 16:45:03 crc kubenswrapper[4755]: I1210 16:45:03.669854 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f178fcfe-0f83-42b1-9b53-5c08a92dc60f-config-volume\") pod \"f178fcfe-0f83-42b1-9b53-5c08a92dc60f\" (UID: \"f178fcfe-0f83-42b1-9b53-5c08a92dc60f\") " Dec 10 16:45:03 crc kubenswrapper[4755]: I1210 16:45:03.670519 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f178fcfe-0f83-42b1-9b53-5c08a92dc60f-config-volume" (OuterVolumeSpecName: "config-volume") pod "f178fcfe-0f83-42b1-9b53-5c08a92dc60f" (UID: "f178fcfe-0f83-42b1-9b53-5c08a92dc60f"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 16:45:03 crc kubenswrapper[4755]: I1210 16:45:03.671045 4755 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f178fcfe-0f83-42b1-9b53-5c08a92dc60f-config-volume\") on node \"crc\" DevicePath \"\"" Dec 10 16:45:03 crc kubenswrapper[4755]: I1210 16:45:03.676996 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f178fcfe-0f83-42b1-9b53-5c08a92dc60f-kube-api-access-5rc7q" (OuterVolumeSpecName: "kube-api-access-5rc7q") pod "f178fcfe-0f83-42b1-9b53-5c08a92dc60f" (UID: "f178fcfe-0f83-42b1-9b53-5c08a92dc60f"). InnerVolumeSpecName "kube-api-access-5rc7q". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 16:45:03 crc kubenswrapper[4755]: I1210 16:45:03.677788 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f178fcfe-0f83-42b1-9b53-5c08a92dc60f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f178fcfe-0f83-42b1-9b53-5c08a92dc60f" (UID: "f178fcfe-0f83-42b1-9b53-5c08a92dc60f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 16:45:03 crc kubenswrapper[4755]: I1210 16:45:03.772418 4755 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f178fcfe-0f83-42b1-9b53-5c08a92dc60f-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 10 16:45:03 crc kubenswrapper[4755]: I1210 16:45:03.772676 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rc7q\" (UniqueName: \"kubernetes.io/projected/f178fcfe-0f83-42b1-9b53-5c08a92dc60f-kube-api-access-5rc7q\") on node \"crc\" DevicePath \"\"" Dec 10 16:45:04 crc kubenswrapper[4755]: I1210 16:45:04.160485 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29423085-9h2pv" event={"ID":"f178fcfe-0f83-42b1-9b53-5c08a92dc60f","Type":"ContainerDied","Data":"821c4c9b4a2f381f3c644e926f89f08d97f5816dd9798aec7e6a9d063bf3e0bc"} Dec 10 16:45:04 crc kubenswrapper[4755]: I1210 16:45:04.160552 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29423085-9h2pv" Dec 10 16:45:04 crc kubenswrapper[4755]: I1210 16:45:04.160532 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="821c4c9b4a2f381f3c644e926f89f08d97f5816dd9798aec7e6a9d063bf3e0bc" Dec 10 16:45:04 crc kubenswrapper[4755]: I1210 16:45:04.627065 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29423040-hf4vs"] Dec 10 16:45:04 crc kubenswrapper[4755]: I1210 16:45:04.638297 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29423040-hf4vs"] Dec 10 16:45:05 crc kubenswrapper[4755]: I1210 16:45:05.777682 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1680331d-d48e-4757-aee4-fab91fecff27" path="/var/lib/kubelet/pods/1680331d-d48e-4757-aee4-fab91fecff27/volumes" Dec 10 16:45:06 crc kubenswrapper[4755]: E1210 16:45:06.760801 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:45:08 crc kubenswrapper[4755]: I1210 16:45:08.468428 4755 scope.go:117] "RemoveContainer" containerID="ef23a65e007a36f295d36364d5cbaa01c03adff2ce69f056f5aa6c2e3316a04c" Dec 10 16:45:09 crc kubenswrapper[4755]: E1210 16:45:09.854454 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 10 16:45:09 crc kubenswrapper[4755]: E1210 16:45:09.854943 4755 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 10 16:45:09 crc kubenswrapper[4755]: E1210 16:45:09.855102 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mz4t5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-jfc28_openstack(998863b6-4f48-4c8b-8011-a40377686b99): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 10 16:45:09 crc kubenswrapper[4755]: E1210 16:45:09.856619 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:45:18 crc kubenswrapper[4755]: E1210 16:45:18.761060 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:45:23 crc kubenswrapper[4755]: E1210 16:45:23.789066 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:45:31 crc kubenswrapper[4755]: E1210 16:45:31.760412 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:45:37 crc kubenswrapper[4755]: E1210 16:45:37.762794 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:45:44 crc kubenswrapper[4755]: I1210 16:45:44.760034 4755 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 10 16:45:44 crc kubenswrapper[4755]: E1210 16:45:44.851361 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 10 16:45:44 crc kubenswrapper[4755]: E1210 16:45:44.851421 4755 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 10 16:45:44 crc kubenswrapper[4755]: E1210 16:45:44.851577 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5d4h5b7hfbh5ddh688h9ch55bh7chf6h5ddh68ch94h69h5c5h596h59bh569hfchc4h676hcbh64dhdbh57fh75h5c9h98h59ch679h566h77h9cq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hw9gj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(6d104bea-ecdc-4fe1-9861-fb1a19fce845): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 10 16:45:44 crc kubenswrapper[4755]: E1210 16:45:44.852800 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:45:50 crc kubenswrapper[4755]: E1210 16:45:50.760088 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:45:57 crc kubenswrapper[4755]: E1210 16:45:57.762325 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:46:05 crc kubenswrapper[4755]: E1210 16:46:05.762753 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:46:11 crc kubenswrapper[4755]: E1210 16:46:11.761305 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:46:17 crc kubenswrapper[4755]: E1210 16:46:17.759185 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:46:24 crc kubenswrapper[4755]: E1210 16:46:24.761728 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:46:31 crc kubenswrapper[4755]: E1210 16:46:31.760927 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:46:38 crc kubenswrapper[4755]: E1210 16:46:38.760855 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:46:40 crc kubenswrapper[4755]: I1210 16:46:40.360082 4755 patch_prober.go:28] interesting pod/machine-config-daemon-ggt8v container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 16:46:40 crc kubenswrapper[4755]: I1210 16:46:40.360443 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 16:46:46 crc kubenswrapper[4755]: E1210 16:46:46.759856 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:46:50 crc kubenswrapper[4755]: E1210 16:46:50.760701 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:47:00 crc kubenswrapper[4755]: E1210 16:47:00.760172 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:47:03 crc kubenswrapper[4755]: E1210 16:47:03.770933 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:47:08 crc kubenswrapper[4755]: I1210 16:47:08.572298 4755 scope.go:117] "RemoveContainer" containerID="3e922be820cb93212cc653897143324fd0ea733ce67cc9e55fc51a19d9a1aa35" Dec 10 16:47:10 crc kubenswrapper[4755]: I1210 16:47:10.359906 4755 patch_prober.go:28] interesting pod/machine-config-daemon-ggt8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 16:47:10 crc kubenswrapper[4755]: I1210 16:47:10.360256 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 16:47:13 crc kubenswrapper[4755]: E1210 16:47:13.775691 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" 
podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:47:18 crc kubenswrapper[4755]: E1210 16:47:18.760188 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:47:26 crc kubenswrapper[4755]: E1210 16:47:26.759368 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:47:32 crc kubenswrapper[4755]: E1210 16:47:32.761336 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:47:40 crc kubenswrapper[4755]: I1210 16:47:40.359413 4755 patch_prober.go:28] interesting pod/machine-config-daemon-ggt8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 16:47:40 crc kubenswrapper[4755]: I1210 16:47:40.360049 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 16:47:40 crc kubenswrapper[4755]: I1210 16:47:40.360103 4755 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" Dec 10 16:47:40 crc kubenswrapper[4755]: I1210 16:47:40.361189 4755 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c02a5ae0f2b694ce1165db44430b56590e65e71f05780d61256f730ed3b1326e"} pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 10 16:47:40 crc kubenswrapper[4755]: I1210 16:47:40.361300 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" containerName="machine-config-daemon" containerID="cri-o://c02a5ae0f2b694ce1165db44430b56590e65e71f05780d61256f730ed3b1326e" gracePeriod=600 Dec 10 16:47:40 crc kubenswrapper[4755]: E1210 16:47:40.759122 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:47:41 crc kubenswrapper[4755]: E1210 16:47:41.034690 4755 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:47:41 crc kubenswrapper[4755]: I1210 16:47:41.900144 4755 generic.go:334] "Generic (PLEG): container finished" podID="b132a8b9-1c99-414d-8773-229bf36b305d" containerID="c02a5ae0f2b694ce1165db44430b56590e65e71f05780d61256f730ed3b1326e" exitCode=0 Dec 10 16:47:41 crc kubenswrapper[4755]: I1210 16:47:41.900212 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" event={"ID":"b132a8b9-1c99-414d-8773-229bf36b305d","Type":"ContainerDied","Data":"c02a5ae0f2b694ce1165db44430b56590e65e71f05780d61256f730ed3b1326e"} Dec 10 16:47:41 crc kubenswrapper[4755]: I1210 16:47:41.900746 4755 scope.go:117] "RemoveContainer" containerID="079523fb0e18a86eddd800cf00287e3feddaac93af0f36614f19e6b4a4894ff7" Dec 10 16:47:41 crc kubenswrapper[4755]: I1210 16:47:41.901790 4755 scope.go:117] "RemoveContainer" containerID="c02a5ae0f2b694ce1165db44430b56590e65e71f05780d61256f730ed3b1326e" Dec 10 16:47:41 crc kubenswrapper[4755]: E1210 16:47:41.904277 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:47:44 crc kubenswrapper[4755]: E1210 16:47:44.760233 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:47:53 crc kubenswrapper[4755]: E1210 16:47:53.760715 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:47:53 crc kubenswrapper[4755]: I1210 16:47:53.765268 4755 scope.go:117] "RemoveContainer" containerID="c02a5ae0f2b694ce1165db44430b56590e65e71f05780d61256f730ed3b1326e" Dec 10 16:47:53 crc kubenswrapper[4755]: E1210 16:47:53.765606 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:47:57 crc kubenswrapper[4755]: E1210 16:47:57.760285 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:48:05 crc kubenswrapper[4755]: I1210 16:48:05.758088 4755 scope.go:117] "RemoveContainer" containerID="c02a5ae0f2b694ce1165db44430b56590e65e71f05780d61256f730ed3b1326e" Dec 10 16:48:05 crc kubenswrapper[4755]: E1210 16:48:05.759160 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:48:06 crc kubenswrapper[4755]: I1210 16:48:06.173985 4755 generic.go:334] "Generic (PLEG): container finished" podID="72c64052-d330-4d83-a2b5-37e7c7233934" containerID="5dc7fe05696e3fa363a1b1a40a69b258947ed8e306aee99ce19a00a9438ed032" exitCode=2 Dec 10 16:48:06 crc kubenswrapper[4755]: I1210 16:48:06.174063 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mhhqh" event={"ID":"72c64052-d330-4d83-a2b5-37e7c7233934","Type":"ContainerDied","Data":"5dc7fe05696e3fa363a1b1a40a69b258947ed8e306aee99ce19a00a9438ed032"} Dec 10 16:48:07 crc kubenswrapper[4755]: E1210 16:48:07.760159 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:48:07 crc kubenswrapper[4755]: I1210 16:48:07.831049 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mhhqh" Dec 10 16:48:07 crc kubenswrapper[4755]: I1210 16:48:07.909656 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/72c64052-d330-4d83-a2b5-37e7c7233934-ssh-key\") pod \"72c64052-d330-4d83-a2b5-37e7c7233934\" (UID: \"72c64052-d330-4d83-a2b5-37e7c7233934\") " Dec 10 16:48:07 crc kubenswrapper[4755]: I1210 16:48:07.909807 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7v4f5\" (UniqueName: \"kubernetes.io/projected/72c64052-d330-4d83-a2b5-37e7c7233934-kube-api-access-7v4f5\") pod \"72c64052-d330-4d83-a2b5-37e7c7233934\" (UID: \"72c64052-d330-4d83-a2b5-37e7c7233934\") " Dec 10 16:48:07 crc kubenswrapper[4755]: I1210 16:48:07.910072 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72c64052-d330-4d83-a2b5-37e7c7233934-inventory\") pod \"72c64052-d330-4d83-a2b5-37e7c7233934\" (UID: \"72c64052-d330-4d83-a2b5-37e7c7233934\") " Dec 10 16:48:07 crc kubenswrapper[4755]: I1210 16:48:07.916219 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72c64052-d330-4d83-a2b5-37e7c7233934-kube-api-access-7v4f5" (OuterVolumeSpecName: "kube-api-access-7v4f5") pod "72c64052-d330-4d83-a2b5-37e7c7233934" (UID: "72c64052-d330-4d83-a2b5-37e7c7233934"). 
InnerVolumeSpecName "kube-api-access-7v4f5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 16:48:07 crc kubenswrapper[4755]: I1210 16:48:07.943715 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72c64052-d330-4d83-a2b5-37e7c7233934-inventory" (OuterVolumeSpecName: "inventory") pod "72c64052-d330-4d83-a2b5-37e7c7233934" (UID: "72c64052-d330-4d83-a2b5-37e7c7233934"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 16:48:07 crc kubenswrapper[4755]: I1210 16:48:07.948153 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72c64052-d330-4d83-a2b5-37e7c7233934-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "72c64052-d330-4d83-a2b5-37e7c7233934" (UID: "72c64052-d330-4d83-a2b5-37e7c7233934"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 16:48:08 crc kubenswrapper[4755]: I1210 16:48:08.013395 4755 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/72c64052-d330-4d83-a2b5-37e7c7233934-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 10 16:48:08 crc kubenswrapper[4755]: I1210 16:48:08.013434 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7v4f5\" (UniqueName: \"kubernetes.io/projected/72c64052-d330-4d83-a2b5-37e7c7233934-kube-api-access-7v4f5\") on node \"crc\" DevicePath \"\"" Dec 10 16:48:08 crc kubenswrapper[4755]: I1210 16:48:08.013445 4755 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72c64052-d330-4d83-a2b5-37e7c7233934-inventory\") on node \"crc\" DevicePath \"\"" Dec 10 16:48:08 crc kubenswrapper[4755]: I1210 16:48:08.197838 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mhhqh" event={"ID":"72c64052-d330-4d83-a2b5-37e7c7233934","Type":"ContainerDied","Data":"2705c5195afaf9387114de92846d6b0b2a8456fa63c2a3df63c4b1bf6309a8f6"} Dec 10 16:48:08 crc kubenswrapper[4755]: I1210 16:48:08.198082 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2705c5195afaf9387114de92846d6b0b2a8456fa63c2a3df63c4b1bf6309a8f6" Dec 10 16:48:08 crc kubenswrapper[4755]: I1210 16:48:08.197920 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mhhqh" Dec 10 16:48:08 crc kubenswrapper[4755]: I1210 16:48:08.619207 4755 scope.go:117] "RemoveContainer" containerID="b953a07947f417034fdba9843f27a5aa183e3c9d36bcd8e7fe7fcd1d86b5e24a" Dec 10 16:48:08 crc kubenswrapper[4755]: I1210 16:48:08.660305 4755 scope.go:117] "RemoveContainer" containerID="391cc33493c333a660ac57fa60231394bfc2552e298cc070d8535022cfa5aa3b" Dec 10 16:48:11 crc kubenswrapper[4755]: E1210 16:48:11.760560 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:48:20 crc kubenswrapper[4755]: I1210 16:48:20.758206 4755 scope.go:117] "RemoveContainer" containerID="c02a5ae0f2b694ce1165db44430b56590e65e71f05780d61256f730ed3b1326e" Dec 10 16:48:20 crc kubenswrapper[4755]: E1210 16:48:20.759526 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:48:21 crc kubenswrapper[4755]: E1210 16:48:21.761363 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:48:23 crc kubenswrapper[4755]: E1210 16:48:23.759744 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:48:35 crc kubenswrapper[4755]: I1210 16:48:35.758319 4755 scope.go:117] "RemoveContainer" containerID="c02a5ae0f2b694ce1165db44430b56590e65e71f05780d61256f730ed3b1326e" Dec 10 16:48:35 crc kubenswrapper[4755]: E1210 16:48:35.759098 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:48:35 crc kubenswrapper[4755]: E1210 16:48:35.760054 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:48:37 crc kubenswrapper[4755]: E1210 16:48:37.761545 4755 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:48:48 crc kubenswrapper[4755]: E1210 16:48:48.759568 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:48:48 crc kubenswrapper[4755]: E1210 16:48:48.760091 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:48:50 crc kubenswrapper[4755]: I1210 16:48:50.757905 4755 scope.go:117] "RemoveContainer" containerID="c02a5ae0f2b694ce1165db44430b56590e65e71f05780d61256f730ed3b1326e" Dec 10 16:48:50 crc kubenswrapper[4755]: E1210 16:48:50.758461 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:48:57 crc kubenswrapper[4755]: I1210 16:48:57.239752 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-swgtc/must-gather-hksrt"] Dec 10 16:48:57 crc kubenswrapper[4755]: E1210 16:48:57.240988 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72c64052-d330-4d83-a2b5-37e7c7233934" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 10 16:48:57 crc kubenswrapper[4755]: I1210 16:48:57.241003 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="72c64052-d330-4d83-a2b5-37e7c7233934" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 10 16:48:57 crc kubenswrapper[4755]: E1210 16:48:57.241026 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f178fcfe-0f83-42b1-9b53-5c08a92dc60f" containerName="collect-profiles" Dec 10 16:48:57 crc kubenswrapper[4755]: I1210 16:48:57.241032 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="f178fcfe-0f83-42b1-9b53-5c08a92dc60f" containerName="collect-profiles" Dec 10 16:48:57 crc kubenswrapper[4755]: I1210 16:48:57.241237 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="f178fcfe-0f83-42b1-9b53-5c08a92dc60f" containerName="collect-profiles" Dec 10 16:48:57 crc kubenswrapper[4755]: I1210 16:48:57.241258 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="72c64052-d330-4d83-a2b5-37e7c7233934" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 10 16:48:57 crc kubenswrapper[4755]: I1210 16:48:57.243774 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-swgtc/must-gather-hksrt" Dec 10 16:48:57 crc kubenswrapper[4755]: I1210 16:48:57.249198 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-swgtc"/"openshift-service-ca.crt" Dec 10 16:48:57 crc kubenswrapper[4755]: I1210 16:48:57.249221 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-swgtc"/"kube-root-ca.crt" Dec 10 16:48:57 crc kubenswrapper[4755]: I1210 16:48:57.249347 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-swgtc"/"default-dockercfg-dfqkp" Dec 10 16:48:57 crc kubenswrapper[4755]: I1210 16:48:57.270255 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-swgtc/must-gather-hksrt"] Dec 10 16:48:57 crc kubenswrapper[4755]: I1210 16:48:57.362418 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a429ed81-4dbb-4907-91dd-9987257da152-must-gather-output\") pod \"must-gather-hksrt\" (UID: \"a429ed81-4dbb-4907-91dd-9987257da152\") " pod="openshift-must-gather-swgtc/must-gather-hksrt" Dec 10 16:48:57 crc kubenswrapper[4755]: I1210 16:48:57.362551 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7cn9\" (UniqueName: \"kubernetes.io/projected/a429ed81-4dbb-4907-91dd-9987257da152-kube-api-access-j7cn9\") pod \"must-gather-hksrt\" (UID: \"a429ed81-4dbb-4907-91dd-9987257da152\") " pod="openshift-must-gather-swgtc/must-gather-hksrt" Dec 10 16:48:57 crc kubenswrapper[4755]: I1210 16:48:57.464894 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a429ed81-4dbb-4907-91dd-9987257da152-must-gather-output\") pod \"must-gather-hksrt\" (UID: \"a429ed81-4dbb-4907-91dd-9987257da152\") " pod="openshift-must-gather-swgtc/must-gather-hksrt" Dec 10 16:48:57 crc kubenswrapper[4755]: I1210 16:48:57.464956 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7cn9\" (UniqueName: \"kubernetes.io/projected/a429ed81-4dbb-4907-91dd-9987257da152-kube-api-access-j7cn9\") pod \"must-gather-hksrt\" (UID: \"a429ed81-4dbb-4907-91dd-9987257da152\") " pod="openshift-must-gather-swgtc/must-gather-hksrt" Dec 10 16:48:57 crc kubenswrapper[4755]: I1210 16:48:57.465339 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a429ed81-4dbb-4907-91dd-9987257da152-must-gather-output\") pod \"must-gather-hksrt\" (UID: \"a429ed81-4dbb-4907-91dd-9987257da152\") " pod="openshift-must-gather-swgtc/must-gather-hksrt" Dec 10 16:48:57 crc kubenswrapper[4755]: I1210 16:48:57.496187 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7cn9\" (UniqueName: \"kubernetes.io/projected/a429ed81-4dbb-4907-91dd-9987257da152-kube-api-access-j7cn9\") pod \"must-gather-hksrt\" (UID: \"a429ed81-4dbb-4907-91dd-9987257da152\") " pod="openshift-must-gather-swgtc/must-gather-hksrt" Dec 10 16:48:57 crc kubenswrapper[4755]: I1210 16:48:57.570017 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-swgtc/must-gather-hksrt" Dec 10 16:48:57 crc kubenswrapper[4755]: I1210 16:48:57.915496 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-swgtc/must-gather-hksrt"] Dec 10 16:48:58 crc kubenswrapper[4755]: I1210 16:48:58.763045 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-swgtc/must-gather-hksrt" event={"ID":"a429ed81-4dbb-4907-91dd-9987257da152","Type":"ContainerStarted","Data":"07e3f22da249cf04694638395a32ae5bc20f2f39bca5a64c27def8a67d31f8ed"} Dec 10 16:49:00 crc kubenswrapper[4755]: E1210 16:49:00.759629 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:49:01 crc kubenswrapper[4755]: E1210 16:49:01.765192 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:49:03 crc kubenswrapper[4755]: I1210 16:49:03.783188 4755 scope.go:117] "RemoveContainer" containerID="c02a5ae0f2b694ce1165db44430b56590e65e71f05780d61256f730ed3b1326e" Dec 10 16:49:03 crc kubenswrapper[4755]: E1210 16:49:03.783794 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:49:06 crc kubenswrapper[4755]: I1210 16:49:06.843176 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-swgtc/must-gather-hksrt" event={"ID":"a429ed81-4dbb-4907-91dd-9987257da152","Type":"ContainerStarted","Data":"5e2e28a34e8f781fd28b00b5f78403e5ab1933b63d2afa7f1b75a75cabf716c6"} Dec 10 16:49:06 crc kubenswrapper[4755]: I1210 16:49:06.843516 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-swgtc/must-gather-hksrt" event={"ID":"a429ed81-4dbb-4907-91dd-9987257da152","Type":"ContainerStarted","Data":"aec6da169d3c8d1b56af57192a4764725497636eed9f3402b2489c8596e88395"} Dec 10 16:49:06 crc kubenswrapper[4755]: I1210 16:49:06.859933 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-swgtc/must-gather-hksrt" podStartSLOduration=1.8037654669999998 podStartE2EDuration="9.859910298s" podCreationTimestamp="2025-12-10 16:48:57 +0000 UTC" firstStartedPulling="2025-12-10 16:48:57.939240026 +0000 UTC m=+5134.540123658" lastFinishedPulling="2025-12-10 16:49:05.995384857 +0000 UTC m=+5142.596268489" observedRunningTime="2025-12-10 16:49:06.857386079 +0000 UTC m=+5143.458269731" watchObservedRunningTime="2025-12-10 16:49:06.859910298 +0000 UTC m=+5143.460793930" Dec 10 16:49:08 crc kubenswrapper[4755]: I1210 16:49:08.749263 4755 scope.go:117] "RemoveContainer" containerID="7c89e0c694ae15cc34f94868ea929e6f97c730586cc1e21ff979eef6d596704e" Dec 10 16:49:08 crc 
kubenswrapper[4755]: I1210 16:49:08.775551 4755 scope.go:117] "RemoveContainer" containerID="f79364777c9be3479a9b0a849763ab341a29dcc43c620b31e5c1f12ecf3bd214" Dec 10 16:49:08 crc kubenswrapper[4755]: I1210 16:49:08.834144 4755 scope.go:117] "RemoveContainer" containerID="7f6a5f1235ee15537cf8c29c65d6a6521be4b53a0794b2a696c6afcbe4c871d1" Dec 10 16:49:10 crc kubenswrapper[4755]: I1210 16:49:10.816774 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-swgtc/crc-debug-gkrw6"] Dec 10 16:49:10 crc kubenswrapper[4755]: I1210 16:49:10.818603 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-swgtc/crc-debug-gkrw6" Dec 10 16:49:10 crc kubenswrapper[4755]: I1210 16:49:10.856636 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/972eaff3-dc19-49c8-82c4-a1ea4bf8c834-host\") pod \"crc-debug-gkrw6\" (UID: \"972eaff3-dc19-49c8-82c4-a1ea4bf8c834\") " pod="openshift-must-gather-swgtc/crc-debug-gkrw6" Dec 10 16:49:10 crc kubenswrapper[4755]: I1210 16:49:10.856959 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68snp\" (UniqueName: \"kubernetes.io/projected/972eaff3-dc19-49c8-82c4-a1ea4bf8c834-kube-api-access-68snp\") pod \"crc-debug-gkrw6\" (UID: \"972eaff3-dc19-49c8-82c4-a1ea4bf8c834\") " pod="openshift-must-gather-swgtc/crc-debug-gkrw6" Dec 10 16:49:10 crc kubenswrapper[4755]: I1210 16:49:10.959295 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68snp\" (UniqueName: \"kubernetes.io/projected/972eaff3-dc19-49c8-82c4-a1ea4bf8c834-kube-api-access-68snp\") pod \"crc-debug-gkrw6\" (UID: \"972eaff3-dc19-49c8-82c4-a1ea4bf8c834\") " pod="openshift-must-gather-swgtc/crc-debug-gkrw6" Dec 10 16:49:10 crc kubenswrapper[4755]: I1210 16:49:10.959442 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/972eaff3-dc19-49c8-82c4-a1ea4bf8c834-host\") pod \"crc-debug-gkrw6\" (UID: \"972eaff3-dc19-49c8-82c4-a1ea4bf8c834\") " pod="openshift-must-gather-swgtc/crc-debug-gkrw6" Dec 10 16:49:10 crc kubenswrapper[4755]: I1210 16:49:10.959801 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/972eaff3-dc19-49c8-82c4-a1ea4bf8c834-host\") pod \"crc-debug-gkrw6\" (UID: \"972eaff3-dc19-49c8-82c4-a1ea4bf8c834\") " pod="openshift-must-gather-swgtc/crc-debug-gkrw6" Dec 10 16:49:10 crc kubenswrapper[4755]: I1210 16:49:10.993262 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68snp\" (UniqueName: \"kubernetes.io/projected/972eaff3-dc19-49c8-82c4-a1ea4bf8c834-kube-api-access-68snp\") pod \"crc-debug-gkrw6\" (UID: \"972eaff3-dc19-49c8-82c4-a1ea4bf8c834\") " pod="openshift-must-gather-swgtc/crc-debug-gkrw6" Dec 10 16:49:11 crc kubenswrapper[4755]: I1210 16:49:11.135753 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-swgtc/crc-debug-gkrw6" Dec 10 16:49:11 crc kubenswrapper[4755]: I1210 16:49:11.903686 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-swgtc/crc-debug-gkrw6" event={"ID":"972eaff3-dc19-49c8-82c4-a1ea4bf8c834","Type":"ContainerStarted","Data":"0d7c02f3d35cb5cdb491f1e98d9afae1b5f451cffaedf9650837bfab9ec43ab7"} Dec 10 16:49:13 crc kubenswrapper[4755]: E1210 16:49:13.771353 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:49:14 crc kubenswrapper[4755]: I1210 16:49:14.757923 4755 scope.go:117] "RemoveContainer" containerID="c02a5ae0f2b694ce1165db44430b56590e65e71f05780d61256f730ed3b1326e" Dec 10 16:49:14 crc kubenswrapper[4755]: E1210 16:49:14.758375 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:49:14 crc kubenswrapper[4755]: E1210 16:49:14.760661 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:49:25 crc kubenswrapper[4755]: I1210 16:49:25.757858 4755 scope.go:117] "RemoveContainer" containerID="c02a5ae0f2b694ce1165db44430b56590e65e71f05780d61256f730ed3b1326e" Dec 10 16:49:25 crc kubenswrapper[4755]: E1210 16:49:25.758803 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:49:26 crc kubenswrapper[4755]: E1210 16:49:26.976160 4755 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296" Dec 10 16:49:26 crc kubenswrapper[4755]: E1210 16:49:26.976801 4755 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:container-00,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296,Command:[chroot /host bash -c echo 'TOOLBOX_NAME=toolbox-osp' > /root/.toolboxrc ; rm -rf \"/var/tmp/sos-osp\" && mkdir -p \"/var/tmp/sos-osp\" && sudo podman rm --force toolbox-osp; sudo --preserve-env podman pull --authfile /var/lib/kubelet/config.json registry.redhat.io/rhel9/support-tools && toolbox sos report --batch --all-logs --only-plugins 
block,cifs,crio,devicemapper,devices,firewall_tables,firewalld,iscsi,lvm2,memory,multipath,nfs,nis,nvme,podman,process,processor,selinux,scsi,udev,logs,crypto --tmp-dir=\"/var/tmp/sos-osp\" && if [[ \"$(ls /var/log/pods/*/{*.log.*,*/*.log.*} 2>/dev/null)\" != '' ]]; then tar --ignore-failed-read --warning=no-file-changed -cJf \"/var/tmp/sos-osp/podlogs.tar.xz\" --transform 's,^,podlogs/,' /var/log/pods/*/{*.log.*,*/*.log.*} || true; fi],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:TMOUT,Value:900,ValueFrom:nil,},EnvVar{Name:HOST,Value:/host,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host,ReadOnly:false,MountPath:/host,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-68snp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod crc-debug-gkrw6_openshift-must-gather-swgtc(972eaff3-dc19-49c8-82c4-a1ea4bf8c834): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 16:49:26 crc kubenswrapper[4755]: E1210 16:49:26.978153 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container-00\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-must-gather-swgtc/crc-debug-gkrw6" podUID="972eaff3-dc19-49c8-82c4-a1ea4bf8c834" Dec 10 16:49:27 crc kubenswrapper[4755]: E1210 16:49:27.061875 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container-00\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296\\\"\"" pod="openshift-must-gather-swgtc/crc-debug-gkrw6" podUID="972eaff3-dc19-49c8-82c4-a1ea4bf8c834" Dec 10 16:49:27 crc kubenswrapper[4755]: E1210 16:49:27.759449 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:49:29 crc kubenswrapper[4755]: E1210 16:49:29.759276 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:49:38 crc kubenswrapper[4755]: I1210 16:49:38.757773 4755 scope.go:117] "RemoveContainer" 
containerID="c02a5ae0f2b694ce1165db44430b56590e65e71f05780d61256f730ed3b1326e" Dec 10 16:49:38 crc kubenswrapper[4755]: E1210 16:49:38.758635 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:49:41 crc kubenswrapper[4755]: E1210 16:49:41.759189 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:49:41 crc kubenswrapper[4755]: E1210 16:49:41.759300 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:49:42 crc kubenswrapper[4755]: I1210 16:49:42.203152 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-swgtc/crc-debug-gkrw6" event={"ID":"972eaff3-dc19-49c8-82c4-a1ea4bf8c834","Type":"ContainerStarted","Data":"e4bca1ee9fdbbac4496e3ec0bc15f70f97b6314b16f210cbf99306aaa33a02f8"} Dec 10 16:49:42 crc kubenswrapper[4755]: I1210 16:49:42.223455 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-swgtc/crc-debug-gkrw6" podStartSLOduration=2.017240995 podStartE2EDuration="32.223435727s" podCreationTimestamp="2025-12-10 16:49:10 +0000 UTC" firstStartedPulling="2025-12-10 16:49:11.18864027 +0000 UTC m=+5147.789523902" lastFinishedPulling="2025-12-10 16:49:41.394835012 +0000 UTC m=+5177.995718634" observedRunningTime="2025-12-10 16:49:42.217253389 +0000 UTC m=+5178.818137021" watchObservedRunningTime="2025-12-10 16:49:42.223435727 +0000 UTC m=+5178.824319359" Dec 10 16:49:51 crc kubenswrapper[4755]: I1210 16:49:51.759832 4755 scope.go:117] "RemoveContainer" containerID="c02a5ae0f2b694ce1165db44430b56590e65e71f05780d61256f730ed3b1326e" Dec 10 16:49:51 crc kubenswrapper[4755]: E1210 16:49:51.760910 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:49:52 crc kubenswrapper[4755]: E1210 16:49:52.760023 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:49:53 crc kubenswrapper[4755]: E1210 16:49:53.770731 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:50:03 crc kubenswrapper[4755]: I1210 16:50:03.770104 4755 scope.go:117] "RemoveContainer" containerID="c02a5ae0f2b694ce1165db44430b56590e65e71f05780d61256f730ed3b1326e" Dec 10 16:50:03 crc kubenswrapper[4755]: E1210 16:50:03.770985 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:50:03 crc kubenswrapper[4755]: E1210 16:50:03.775385 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:50:04 crc kubenswrapper[4755]: I1210 16:50:04.430279 4755 generic.go:334] "Generic (PLEG): container finished" podID="972eaff3-dc19-49c8-82c4-a1ea4bf8c834" containerID="e4bca1ee9fdbbac4496e3ec0bc15f70f97b6314b16f210cbf99306aaa33a02f8" exitCode=0 Dec 10 16:50:04 crc kubenswrapper[4755]: I1210 16:50:04.430331 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-swgtc/crc-debug-gkrw6" event={"ID":"972eaff3-dc19-49c8-82c4-a1ea4bf8c834","Type":"ContainerDied","Data":"e4bca1ee9fdbbac4496e3ec0bc15f70f97b6314b16f210cbf99306aaa33a02f8"} Dec 10 16:50:05 crc kubenswrapper[4755]: I1210 16:50:05.577989 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-swgtc/crc-debug-gkrw6" Dec 10 16:50:05 crc kubenswrapper[4755]: I1210 16:50:05.613503 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-swgtc/crc-debug-gkrw6"] Dec 10 16:50:05 crc kubenswrapper[4755]: I1210 16:50:05.627264 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-swgtc/crc-debug-gkrw6"] Dec 10 16:50:05 crc kubenswrapper[4755]: I1210 16:50:05.641624 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/972eaff3-dc19-49c8-82c4-a1ea4bf8c834-host\") pod \"972eaff3-dc19-49c8-82c4-a1ea4bf8c834\" (UID: \"972eaff3-dc19-49c8-82c4-a1ea4bf8c834\") " Dec 10 16:50:05 crc kubenswrapper[4755]: I1210 16:50:05.641853 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/972eaff3-dc19-49c8-82c4-a1ea4bf8c834-host" (OuterVolumeSpecName: "host") pod "972eaff3-dc19-49c8-82c4-a1ea4bf8c834" (UID: "972eaff3-dc19-49c8-82c4-a1ea4bf8c834"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 16:50:05 crc kubenswrapper[4755]: I1210 16:50:05.642382 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68snp\" (UniqueName: \"kubernetes.io/projected/972eaff3-dc19-49c8-82c4-a1ea4bf8c834-kube-api-access-68snp\") pod \"972eaff3-dc19-49c8-82c4-a1ea4bf8c834\" (UID: \"972eaff3-dc19-49c8-82c4-a1ea4bf8c834\") " Dec 10 16:50:05 crc kubenswrapper[4755]: I1210 16:50:05.643925 4755 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/972eaff3-dc19-49c8-82c4-a1ea4bf8c834-host\") on node \"crc\" DevicePath \"\"" Dec 10 16:50:05 crc kubenswrapper[4755]: I1210 16:50:05.648094 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/972eaff3-dc19-49c8-82c4-a1ea4bf8c834-kube-api-access-68snp" (OuterVolumeSpecName: "kube-api-access-68snp") pod "972eaff3-dc19-49c8-82c4-a1ea4bf8c834" (UID: "972eaff3-dc19-49c8-82c4-a1ea4bf8c834"). InnerVolumeSpecName "kube-api-access-68snp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 16:50:05 crc kubenswrapper[4755]: I1210 16:50:05.745829 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68snp\" (UniqueName: \"kubernetes.io/projected/972eaff3-dc19-49c8-82c4-a1ea4bf8c834-kube-api-access-68snp\") on node \"crc\" DevicePath \"\"" Dec 10 16:50:05 crc kubenswrapper[4755]: I1210 16:50:05.774054 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="972eaff3-dc19-49c8-82c4-a1ea4bf8c834" path="/var/lib/kubelet/pods/972eaff3-dc19-49c8-82c4-a1ea4bf8c834/volumes" Dec 10 16:50:06 crc kubenswrapper[4755]: I1210 16:50:06.456364 4755 scope.go:117] "RemoveContainer" containerID="e4bca1ee9fdbbac4496e3ec0bc15f70f97b6314b16f210cbf99306aaa33a02f8" Dec 10 16:50:06 crc kubenswrapper[4755]: I1210 16:50:06.456407 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-swgtc/crc-debug-gkrw6" Dec 10 16:50:06 crc kubenswrapper[4755]: E1210 16:50:06.760319 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-jfc28" podUID="998863b6-4f48-4c8b-8011-a40377686b99" Dec 10 16:50:06 crc kubenswrapper[4755]: I1210 16:50:06.830265 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-swgtc/crc-debug-kgw2l"] Dec 10 16:50:06 crc kubenswrapper[4755]: E1210 16:50:06.830822 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="972eaff3-dc19-49c8-82c4-a1ea4bf8c834" containerName="container-00" Dec 10 16:50:06 crc kubenswrapper[4755]: I1210 16:50:06.830845 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="972eaff3-dc19-49c8-82c4-a1ea4bf8c834" containerName="container-00" Dec 10 16:50:06 crc kubenswrapper[4755]: I1210 16:50:06.831114 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="972eaff3-dc19-49c8-82c4-a1ea4bf8c834" containerName="container-00" Dec 10 16:50:06 crc kubenswrapper[4755]: I1210 16:50:06.832124 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-swgtc/crc-debug-kgw2l" Dec 10 16:50:06 crc kubenswrapper[4755]: I1210 16:50:06.973196 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/513a68fb-fdcf-4f9a-a66e-1b16f3a082a8-host\") pod \"crc-debug-kgw2l\" (UID: \"513a68fb-fdcf-4f9a-a66e-1b16f3a082a8\") " pod="openshift-must-gather-swgtc/crc-debug-kgw2l" Dec 10 16:50:06 crc kubenswrapper[4755]: I1210 16:50:06.973644 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-242kr\" (UniqueName: \"kubernetes.io/projected/513a68fb-fdcf-4f9a-a66e-1b16f3a082a8-kube-api-access-242kr\") pod \"crc-debug-kgw2l\" (UID: \"513a68fb-fdcf-4f9a-a66e-1b16f3a082a8\") " pod="openshift-must-gather-swgtc/crc-debug-kgw2l" Dec 10 16:50:07 crc kubenswrapper[4755]: I1210 16:50:07.075644 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/513a68fb-fdcf-4f9a-a66e-1b16f3a082a8-host\") pod \"crc-debug-kgw2l\" (UID: \"513a68fb-fdcf-4f9a-a66e-1b16f3a082a8\") " pod="openshift-must-gather-swgtc/crc-debug-kgw2l" Dec 10 16:50:07 crc kubenswrapper[4755]: I1210 16:50:07.075785 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-242kr\" (UniqueName: \"kubernetes.io/projected/513a68fb-fdcf-4f9a-a66e-1b16f3a082a8-kube-api-access-242kr\") pod \"crc-debug-kgw2l\" (UID: \"513a68fb-fdcf-4f9a-a66e-1b16f3a082a8\") " pod="openshift-must-gather-swgtc/crc-debug-kgw2l" Dec 10 16:50:07 crc kubenswrapper[4755]: I1210 16:50:07.075807 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/513a68fb-fdcf-4f9a-a66e-1b16f3a082a8-host\") pod \"crc-debug-kgw2l\" (UID: \"513a68fb-fdcf-4f9a-a66e-1b16f3a082a8\") " pod="openshift-must-gather-swgtc/crc-debug-kgw2l" Dec 10 16:50:07 crc kubenswrapper[4755]: I1210 16:50:07.093194 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-242kr\" (UniqueName: \"kubernetes.io/projected/513a68fb-fdcf-4f9a-a66e-1b16f3a082a8-kube-api-access-242kr\") pod \"crc-debug-kgw2l\" (UID: \"513a68fb-fdcf-4f9a-a66e-1b16f3a082a8\") " pod="openshift-must-gather-swgtc/crc-debug-kgw2l" Dec 10 16:50:07 crc kubenswrapper[4755]: I1210 16:50:07.163646 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-swgtc/crc-debug-kgw2l" Dec 10 16:50:07 crc kubenswrapper[4755]: I1210 16:50:07.469272 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-swgtc/crc-debug-kgw2l" event={"ID":"513a68fb-fdcf-4f9a-a66e-1b16f3a082a8","Type":"ContainerStarted","Data":"a63a8f73c5f24a1e1a08c856510808da72ea871b9c81e2e8139563cf3b90ec07"} Dec 10 16:50:08 crc kubenswrapper[4755]: I1210 16:50:08.481707 4755 generic.go:334] "Generic (PLEG): container finished" podID="513a68fb-fdcf-4f9a-a66e-1b16f3a082a8" containerID="e1dc0b9d2cbe4a9fe99e2c96883496e9d3be20bdf1ee4171513a9861d3ffcf88" exitCode=1 Dec 10 16:50:08 crc kubenswrapper[4755]: I1210 16:50:08.481793 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-swgtc/crc-debug-kgw2l" event={"ID":"513a68fb-fdcf-4f9a-a66e-1b16f3a082a8","Type":"ContainerDied","Data":"e1dc0b9d2cbe4a9fe99e2c96883496e9d3be20bdf1ee4171513a9861d3ffcf88"} Dec 10 16:50:08 crc kubenswrapper[4755]: I1210 16:50:08.529209 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-swgtc/crc-debug-kgw2l"] Dec 10 16:50:08 crc kubenswrapper[4755]: I1210 16:50:08.547868 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-swgtc/crc-debug-kgw2l"] Dec 10 16:50:09 crc kubenswrapper[4755]: I1210 16:50:09.608860 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-swgtc/crc-debug-kgw2l" Dec 10 16:50:09 crc kubenswrapper[4755]: I1210 16:50:09.729816 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/513a68fb-fdcf-4f9a-a66e-1b16f3a082a8-host\") pod \"513a68fb-fdcf-4f9a-a66e-1b16f3a082a8\" (UID: \"513a68fb-fdcf-4f9a-a66e-1b16f3a082a8\") " Dec 10 16:50:09 crc kubenswrapper[4755]: I1210 16:50:09.729945 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/513a68fb-fdcf-4f9a-a66e-1b16f3a082a8-host" (OuterVolumeSpecName: "host") pod "513a68fb-fdcf-4f9a-a66e-1b16f3a082a8" (UID: "513a68fb-fdcf-4f9a-a66e-1b16f3a082a8"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 16:50:09 crc kubenswrapper[4755]: I1210 16:50:09.729950 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-242kr\" (UniqueName: \"kubernetes.io/projected/513a68fb-fdcf-4f9a-a66e-1b16f3a082a8-kube-api-access-242kr\") pod \"513a68fb-fdcf-4f9a-a66e-1b16f3a082a8\" (UID: \"513a68fb-fdcf-4f9a-a66e-1b16f3a082a8\") " Dec 10 16:50:09 crc kubenswrapper[4755]: I1210 16:50:09.730977 4755 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/513a68fb-fdcf-4f9a-a66e-1b16f3a082a8-host\") on node \"crc\" DevicePath \"\"" Dec 10 16:50:09 crc kubenswrapper[4755]: I1210 16:50:09.739935 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/513a68fb-fdcf-4f9a-a66e-1b16f3a082a8-kube-api-access-242kr" (OuterVolumeSpecName: "kube-api-access-242kr") pod "513a68fb-fdcf-4f9a-a66e-1b16f3a082a8" (UID: "513a68fb-fdcf-4f9a-a66e-1b16f3a082a8"). InnerVolumeSpecName "kube-api-access-242kr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 16:50:09 crc kubenswrapper[4755]: I1210 16:50:09.772928 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="513a68fb-fdcf-4f9a-a66e-1b16f3a082a8" path="/var/lib/kubelet/pods/513a68fb-fdcf-4f9a-a66e-1b16f3a082a8/volumes" Dec 10 16:50:09 crc kubenswrapper[4755]: I1210 16:50:09.832885 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-242kr\" (UniqueName: \"kubernetes.io/projected/513a68fb-fdcf-4f9a-a66e-1b16f3a082a8-kube-api-access-242kr\") on node \"crc\" DevicePath \"\"" Dec 10 16:50:10 crc kubenswrapper[4755]: I1210 16:50:10.500945 4755 scope.go:117] "RemoveContainer" containerID="e1dc0b9d2cbe4a9fe99e2c96883496e9d3be20bdf1ee4171513a9861d3ffcf88" Dec 10 16:50:10 crc kubenswrapper[4755]: I1210 16:50:10.501071 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-swgtc/crc-debug-kgw2l" Dec 10 16:50:16 crc kubenswrapper[4755]: I1210 16:50:16.758133 4755 scope.go:117] "RemoveContainer" containerID="c02a5ae0f2b694ce1165db44430b56590e65e71f05780d61256f730ed3b1326e" Dec 10 16:50:16 crc kubenswrapper[4755]: E1210 16:50:16.759403 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:50:18 crc kubenswrapper[4755]: E1210 16:50:18.759695 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:50:19 crc kubenswrapper[4755]: I1210 16:50:19.926136 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 10 16:50:20 crc kubenswrapper[4755]: I1210 16:50:20.618324 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-jfc28" event={"ID":"998863b6-4f48-4c8b-8011-a40377686b99","Type":"ContainerStarted","Data":"c7dec93800800800f7fa3a4e355d511c0cdad1461157c619fedd00687197b5a2"} Dec 10 16:50:20 crc kubenswrapper[4755]: I1210 16:50:20.638423 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-db-sync-jfc28" podStartSLOduration=1.714478106 podStartE2EDuration="1h2m13.638406427s" podCreationTimestamp="2025-12-10 15:48:07 +0000 UTC" firstStartedPulling="2025-12-10 15:48:08.000282968 +0000 UTC m=+1484.601166610" lastFinishedPulling="2025-12-10 16:50:19.924211299 +0000 UTC m=+5216.525094931" observedRunningTime="2025-12-10 16:50:20.634389038 +0000 UTC m=+5217.235272690" watchObservedRunningTime="2025-12-10 16:50:20.638406427 +0000 UTC m=+5217.239290059" Dec 10 16:50:23 crc kubenswrapper[4755]: I1210 16:50:23.651366 4755 generic.go:334] "Generic (PLEG): container finished" podID="998863b6-4f48-4c8b-8011-a40377686b99" containerID="c7dec93800800800f7fa3a4e355d511c0cdad1461157c619fedd00687197b5a2" exitCode=0 Dec 10 16:50:23 crc kubenswrapper[4755]: I1210 16:50:23.651521 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-jfc28" 
event={"ID":"998863b6-4f48-4c8b-8011-a40377686b99","Type":"ContainerDied","Data":"c7dec93800800800f7fa3a4e355d511c0cdad1461157c619fedd00687197b5a2"} Dec 10 16:50:25 crc kubenswrapper[4755]: I1210 16:50:25.498898 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-jfc28" Dec 10 16:50:25 crc kubenswrapper[4755]: I1210 16:50:25.581642 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mz4t5\" (UniqueName: \"kubernetes.io/projected/998863b6-4f48-4c8b-8011-a40377686b99-kube-api-access-mz4t5\") pod \"998863b6-4f48-4c8b-8011-a40377686b99\" (UID: \"998863b6-4f48-4c8b-8011-a40377686b99\") " Dec 10 16:50:25 crc kubenswrapper[4755]: I1210 16:50:25.581733 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/998863b6-4f48-4c8b-8011-a40377686b99-combined-ca-bundle\") pod \"998863b6-4f48-4c8b-8011-a40377686b99\" (UID: \"998863b6-4f48-4c8b-8011-a40377686b99\") " Dec 10 16:50:25 crc kubenswrapper[4755]: I1210 16:50:25.581822 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/998863b6-4f48-4c8b-8011-a40377686b99-certs\") pod \"998863b6-4f48-4c8b-8011-a40377686b99\" (UID: \"998863b6-4f48-4c8b-8011-a40377686b99\") " Dec 10 16:50:25 crc kubenswrapper[4755]: I1210 16:50:25.581896 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/998863b6-4f48-4c8b-8011-a40377686b99-scripts\") pod \"998863b6-4f48-4c8b-8011-a40377686b99\" (UID: \"998863b6-4f48-4c8b-8011-a40377686b99\") " Dec 10 16:50:25 crc kubenswrapper[4755]: I1210 16:50:25.581934 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/998863b6-4f48-4c8b-8011-a40377686b99-config-data\") pod \"998863b6-4f48-4c8b-8011-a40377686b99\" (UID: \"998863b6-4f48-4c8b-8011-a40377686b99\") " Dec 10 16:50:25 crc kubenswrapper[4755]: I1210 16:50:25.587494 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/998863b6-4f48-4c8b-8011-a40377686b99-scripts" (OuterVolumeSpecName: "scripts") pod "998863b6-4f48-4c8b-8011-a40377686b99" (UID: "998863b6-4f48-4c8b-8011-a40377686b99"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 16:50:25 crc kubenswrapper[4755]: I1210 16:50:25.588648 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/998863b6-4f48-4c8b-8011-a40377686b99-certs" (OuterVolumeSpecName: "certs") pod "998863b6-4f48-4c8b-8011-a40377686b99" (UID: "998863b6-4f48-4c8b-8011-a40377686b99"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 16:50:25 crc kubenswrapper[4755]: I1210 16:50:25.589527 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/998863b6-4f48-4c8b-8011-a40377686b99-kube-api-access-mz4t5" (OuterVolumeSpecName: "kube-api-access-mz4t5") pod "998863b6-4f48-4c8b-8011-a40377686b99" (UID: "998863b6-4f48-4c8b-8011-a40377686b99"). InnerVolumeSpecName "kube-api-access-mz4t5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 16:50:25 crc kubenswrapper[4755]: I1210 16:50:25.617973 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/998863b6-4f48-4c8b-8011-a40377686b99-config-data" (OuterVolumeSpecName: "config-data") pod "998863b6-4f48-4c8b-8011-a40377686b99" (UID: "998863b6-4f48-4c8b-8011-a40377686b99"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 16:50:25 crc kubenswrapper[4755]: I1210 16:50:25.635571 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/998863b6-4f48-4c8b-8011-a40377686b99-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "998863b6-4f48-4c8b-8011-a40377686b99" (UID: "998863b6-4f48-4c8b-8011-a40377686b99"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 16:50:25 crc kubenswrapper[4755]: I1210 16:50:25.685648 4755 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/998863b6-4f48-4c8b-8011-a40377686b99-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 16:50:25 crc kubenswrapper[4755]: I1210 16:50:25.685694 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mz4t5\" (UniqueName: \"kubernetes.io/projected/998863b6-4f48-4c8b-8011-a40377686b99-kube-api-access-mz4t5\") on node \"crc\" DevicePath \"\"" Dec 10 16:50:25 crc kubenswrapper[4755]: I1210 16:50:25.685705 4755 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/998863b6-4f48-4c8b-8011-a40377686b99-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 16:50:25 crc kubenswrapper[4755]: I1210 16:50:25.685715 4755 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/998863b6-4f48-4c8b-8011-a40377686b99-certs\") on node \"crc\" DevicePath \"\"" Dec 10 16:50:25 crc kubenswrapper[4755]: I1210 16:50:25.685724 4755 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/998863b6-4f48-4c8b-8011-a40377686b99-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 16:50:25 crc kubenswrapper[4755]: I1210 16:50:25.701051 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-jfc28" event={"ID":"998863b6-4f48-4c8b-8011-a40377686b99","Type":"ContainerDied","Data":"268a773575e8e7bc4c955a37a02920f4ec547cd4b0d892b69836cd046d01ff24"} Dec 10 16:50:25 crc kubenswrapper[4755]: I1210 16:50:25.701088 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="268a773575e8e7bc4c955a37a02920f4ec547cd4b0d892b69836cd046d01ff24" Dec 10 16:50:25 crc kubenswrapper[4755]: I1210 16:50:25.701170 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-db-sync-jfc28" Dec 10 16:50:25 crc kubenswrapper[4755]: I1210 16:50:25.767724 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-storageinit-rvkrd"] Dec 10 16:50:25 crc kubenswrapper[4755]: E1210 16:50:25.768035 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="513a68fb-fdcf-4f9a-a66e-1b16f3a082a8" containerName="container-00" Dec 10 16:50:25 crc kubenswrapper[4755]: I1210 16:50:25.768051 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="513a68fb-fdcf-4f9a-a66e-1b16f3a082a8" containerName="container-00" Dec 10 16:50:25 crc kubenswrapper[4755]: E1210 16:50:25.768074 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="998863b6-4f48-4c8b-8011-a40377686b99" containerName="cloudkitty-db-sync" Dec 10 16:50:25 crc kubenswrapper[4755]: I1210 16:50:25.768081 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="998863b6-4f48-4c8b-8011-a40377686b99" containerName="cloudkitty-db-sync" Dec 10 16:50:25 crc kubenswrapper[4755]: I1210 16:50:25.768270 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="998863b6-4f48-4c8b-8011-a40377686b99" containerName="cloudkitty-db-sync" Dec 10 16:50:25 crc kubenswrapper[4755]: I1210 16:50:25.768284 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="513a68fb-fdcf-4f9a-a66e-1b16f3a082a8" containerName="container-00" Dec 10 16:50:25 crc kubenswrapper[4755]: I1210 16:50:25.768988 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-storageinit-rvkrd" Dec 10 16:50:25 crc kubenswrapper[4755]: I1210 16:50:25.769022 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-storageinit-rvkrd"] Dec 10 16:50:25 crc kubenswrapper[4755]: I1210 16:50:25.770787 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 10 16:50:25 crc kubenswrapper[4755]: I1210 16:50:25.890996 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/085de3b2-f23a-4359-a057-459a8a81d898-certs\") pod \"cloudkitty-storageinit-rvkrd\" (UID: \"085de3b2-f23a-4359-a057-459a8a81d898\") " pod="openstack/cloudkitty-storageinit-rvkrd" Dec 10 16:50:25 crc kubenswrapper[4755]: I1210 16:50:25.891678 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/085de3b2-f23a-4359-a057-459a8a81d898-config-data\") pod \"cloudkitty-storageinit-rvkrd\" (UID: \"085de3b2-f23a-4359-a057-459a8a81d898\") " pod="openstack/cloudkitty-storageinit-rvkrd" Dec 10 16:50:25 crc kubenswrapper[4755]: I1210 16:50:25.891738 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/085de3b2-f23a-4359-a057-459a8a81d898-combined-ca-bundle\") pod \"cloudkitty-storageinit-rvkrd\" (UID: \"085de3b2-f23a-4359-a057-459a8a81d898\") " pod="openstack/cloudkitty-storageinit-rvkrd" Dec 10 16:50:25 crc kubenswrapper[4755]: I1210 16:50:25.891760 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmj8z\" (UniqueName: \"kubernetes.io/projected/085de3b2-f23a-4359-a057-459a8a81d898-kube-api-access-hmj8z\") pod \"cloudkitty-storageinit-rvkrd\" (UID: \"085de3b2-f23a-4359-a057-459a8a81d898\") " pod="openstack/cloudkitty-storageinit-rvkrd" 
Dec 10 16:50:25 crc kubenswrapper[4755]: I1210 16:50:25.892104 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/085de3b2-f23a-4359-a057-459a8a81d898-scripts\") pod \"cloudkitty-storageinit-rvkrd\" (UID: \"085de3b2-f23a-4359-a057-459a8a81d898\") " pod="openstack/cloudkitty-storageinit-rvkrd" Dec 10 16:50:25 crc kubenswrapper[4755]: I1210 16:50:25.993995 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/085de3b2-f23a-4359-a057-459a8a81d898-scripts\") pod \"cloudkitty-storageinit-rvkrd\" (UID: \"085de3b2-f23a-4359-a057-459a8a81d898\") " pod="openstack/cloudkitty-storageinit-rvkrd" Dec 10 16:50:25 crc kubenswrapper[4755]: I1210 16:50:25.994129 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/085de3b2-f23a-4359-a057-459a8a81d898-certs\") pod \"cloudkitty-storageinit-rvkrd\" (UID: \"085de3b2-f23a-4359-a057-459a8a81d898\") " pod="openstack/cloudkitty-storageinit-rvkrd" Dec 10 16:50:25 crc kubenswrapper[4755]: I1210 16:50:25.994200 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/085de3b2-f23a-4359-a057-459a8a81d898-config-data\") pod \"cloudkitty-storageinit-rvkrd\" (UID: \"085de3b2-f23a-4359-a057-459a8a81d898\") " pod="openstack/cloudkitty-storageinit-rvkrd" Dec 10 16:50:25 crc kubenswrapper[4755]: I1210 16:50:25.994223 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/085de3b2-f23a-4359-a057-459a8a81d898-combined-ca-bundle\") pod \"cloudkitty-storageinit-rvkrd\" (UID: \"085de3b2-f23a-4359-a057-459a8a81d898\") " pod="openstack/cloudkitty-storageinit-rvkrd" Dec 10 16:50:25 crc kubenswrapper[4755]: I1210 16:50:25.994241 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmj8z\" (UniqueName: \"kubernetes.io/projected/085de3b2-f23a-4359-a057-459a8a81d898-kube-api-access-hmj8z\") pod \"cloudkitty-storageinit-rvkrd\" (UID: \"085de3b2-f23a-4359-a057-459a8a81d898\") " pod="openstack/cloudkitty-storageinit-rvkrd" Dec 10 16:50:25 crc kubenswrapper[4755]: I1210 16:50:25.998341 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/085de3b2-f23a-4359-a057-459a8a81d898-certs\") pod \"cloudkitty-storageinit-rvkrd\" (UID: \"085de3b2-f23a-4359-a057-459a8a81d898\") " pod="openstack/cloudkitty-storageinit-rvkrd" Dec 10 16:50:26 crc kubenswrapper[4755]: I1210 16:50:26.002219 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/085de3b2-f23a-4359-a057-459a8a81d898-scripts\") pod \"cloudkitty-storageinit-rvkrd\" (UID: \"085de3b2-f23a-4359-a057-459a8a81d898\") " pod="openstack/cloudkitty-storageinit-rvkrd" Dec 10 16:50:26 crc kubenswrapper[4755]: I1210 16:50:26.004096 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/085de3b2-f23a-4359-a057-459a8a81d898-combined-ca-bundle\") pod \"cloudkitty-storageinit-rvkrd\" (UID: \"085de3b2-f23a-4359-a057-459a8a81d898\") " pod="openstack/cloudkitty-storageinit-rvkrd" Dec 10 16:50:26 crc kubenswrapper[4755]: I1210 16:50:26.011326 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/085de3b2-f23a-4359-a057-459a8a81d898-config-data\") pod \"cloudkitty-storageinit-rvkrd\" (UID: \"085de3b2-f23a-4359-a057-459a8a81d898\") " pod="openstack/cloudkitty-storageinit-rvkrd" Dec 10 16:50:26 crc kubenswrapper[4755]: I1210 16:50:26.013211 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmj8z\" (UniqueName: \"kubernetes.io/projected/085de3b2-f23a-4359-a057-459a8a81d898-kube-api-access-hmj8z\") pod \"cloudkitty-storageinit-rvkrd\" (UID: \"085de3b2-f23a-4359-a057-459a8a81d898\") " pod="openstack/cloudkitty-storageinit-rvkrd" Dec 10 16:50:26 crc kubenswrapper[4755]: I1210 16:50:26.092642 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-storageinit-rvkrd" Dec 10 16:50:26 crc kubenswrapper[4755]: I1210 16:50:26.584568 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-storageinit-rvkrd"] Dec 10 16:50:27 crc kubenswrapper[4755]: I1210 16:50:27.721417 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-rvkrd" event={"ID":"085de3b2-f23a-4359-a057-459a8a81d898","Type":"ContainerStarted","Data":"a91e0c4549c9d7bf12d7afdf22f40afb9764a065cab446fd84e478458b293986"} Dec 10 16:50:27 crc kubenswrapper[4755]: I1210 16:50:27.721959 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-rvkrd" event={"ID":"085de3b2-f23a-4359-a057-459a8a81d898","Type":"ContainerStarted","Data":"9bcaac72848607581bd408edf5516316bd58b39fdb265bf48cb876aeecacc7b1"} Dec 10 16:50:29 crc kubenswrapper[4755]: I1210 16:50:29.747720 4755 generic.go:334] "Generic (PLEG): container finished" podID="085de3b2-f23a-4359-a057-459a8a81d898" containerID="a91e0c4549c9d7bf12d7afdf22f40afb9764a065cab446fd84e478458b293986" exitCode=0 Dec 10 16:50:29 crc kubenswrapper[4755]: I1210 16:50:29.747816 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-rvkrd" event={"ID":"085de3b2-f23a-4359-a057-459a8a81d898","Type":"ContainerDied","Data":"a91e0c4549c9d7bf12d7afdf22f40afb9764a065cab446fd84e478458b293986"} Dec 10 16:50:31 crc kubenswrapper[4755]: I1210 16:50:31.268601 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-storageinit-rvkrd" Dec 10 16:50:31 crc kubenswrapper[4755]: I1210 16:50:31.330637 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/085de3b2-f23a-4359-a057-459a8a81d898-combined-ca-bundle\") pod \"085de3b2-f23a-4359-a057-459a8a81d898\" (UID: \"085de3b2-f23a-4359-a057-459a8a81d898\") " Dec 10 16:50:31 crc kubenswrapper[4755]: I1210 16:50:31.331023 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/085de3b2-f23a-4359-a057-459a8a81d898-certs\") pod \"085de3b2-f23a-4359-a057-459a8a81d898\" (UID: \"085de3b2-f23a-4359-a057-459a8a81d898\") " Dec 10 16:50:31 crc kubenswrapper[4755]: I1210 16:50:31.331125 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/085de3b2-f23a-4359-a057-459a8a81d898-config-data\") pod \"085de3b2-f23a-4359-a057-459a8a81d898\" (UID: \"085de3b2-f23a-4359-a057-459a8a81d898\") " Dec 10 16:50:31 crc kubenswrapper[4755]: I1210 16:50:31.331212 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmj8z\" (UniqueName: \"kubernetes.io/projected/085de3b2-f23a-4359-a057-459a8a81d898-kube-api-access-hmj8z\") pod \"085de3b2-f23a-4359-a057-459a8a81d898\" (UID: \"085de3b2-f23a-4359-a057-459a8a81d898\") " Dec 10 16:50:31 crc kubenswrapper[4755]: I1210 16:50:31.331386 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/085de3b2-f23a-4359-a057-459a8a81d898-scripts\") pod \"085de3b2-f23a-4359-a057-459a8a81d898\" (UID: \"085de3b2-f23a-4359-a057-459a8a81d898\") " Dec 10 16:50:31 crc kubenswrapper[4755]: I1210 16:50:31.337430 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/085de3b2-f23a-4359-a057-459a8a81d898-certs" (OuterVolumeSpecName: "certs") pod "085de3b2-f23a-4359-a057-459a8a81d898" (UID: "085de3b2-f23a-4359-a057-459a8a81d898"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 16:50:31 crc kubenswrapper[4755]: I1210 16:50:31.341000 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/085de3b2-f23a-4359-a057-459a8a81d898-kube-api-access-hmj8z" (OuterVolumeSpecName: "kube-api-access-hmj8z") pod "085de3b2-f23a-4359-a057-459a8a81d898" (UID: "085de3b2-f23a-4359-a057-459a8a81d898"). InnerVolumeSpecName "kube-api-access-hmj8z". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 16:50:31 crc kubenswrapper[4755]: I1210 16:50:31.341325 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/085de3b2-f23a-4359-a057-459a8a81d898-scripts" (OuterVolumeSpecName: "scripts") pod "085de3b2-f23a-4359-a057-459a8a81d898" (UID: "085de3b2-f23a-4359-a057-459a8a81d898"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 16:50:31 crc kubenswrapper[4755]: I1210 16:50:31.389889 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/085de3b2-f23a-4359-a057-459a8a81d898-config-data" (OuterVolumeSpecName: "config-data") pod "085de3b2-f23a-4359-a057-459a8a81d898" (UID: "085de3b2-f23a-4359-a057-459a8a81d898"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 16:50:31 crc kubenswrapper[4755]: I1210 16:50:31.393629 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/085de3b2-f23a-4359-a057-459a8a81d898-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "085de3b2-f23a-4359-a057-459a8a81d898" (UID: "085de3b2-f23a-4359-a057-459a8a81d898"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 16:50:31 crc kubenswrapper[4755]: I1210 16:50:31.434021 4755 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/085de3b2-f23a-4359-a057-459a8a81d898-certs\") on node \"crc\" DevicePath \"\"" Dec 10 16:50:31 crc kubenswrapper[4755]: I1210 16:50:31.434066 4755 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/085de3b2-f23a-4359-a057-459a8a81d898-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 16:50:31 crc kubenswrapper[4755]: I1210 16:50:31.434082 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmj8z\" (UniqueName: \"kubernetes.io/projected/085de3b2-f23a-4359-a057-459a8a81d898-kube-api-access-hmj8z\") on node \"crc\" DevicePath \"\"" Dec 10 16:50:31 crc kubenswrapper[4755]: I1210 16:50:31.434097 4755 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/085de3b2-f23a-4359-a057-459a8a81d898-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 16:50:31 crc kubenswrapper[4755]: I1210 16:50:31.434109 4755 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/085de3b2-f23a-4359-a057-459a8a81d898-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 16:50:31 crc kubenswrapper[4755]: I1210 16:50:31.758310 4755 scope.go:117] "RemoveContainer" containerID="c02a5ae0f2b694ce1165db44430b56590e65e71f05780d61256f730ed3b1326e" Dec 10 16:50:31 crc kubenswrapper[4755]: E1210 16:50:31.759132 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:50:31 crc kubenswrapper[4755]: I1210 16:50:31.774286 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-storageinit-rvkrd" Dec 10 16:50:31 crc kubenswrapper[4755]: I1210 16:50:31.795004 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-rvkrd" event={"ID":"085de3b2-f23a-4359-a057-459a8a81d898","Type":"ContainerDied","Data":"9bcaac72848607581bd408edf5516316bd58b39fdb265bf48cb876aeecacc7b1"} Dec 10 16:50:31 crc kubenswrapper[4755]: I1210 16:50:31.795050 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9bcaac72848607581bd408edf5516316bd58b39fdb265bf48cb876aeecacc7b1" Dec 10 16:50:31 crc kubenswrapper[4755]: I1210 16:50:31.919342 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-proc-0"] Dec 10 16:50:31 crc kubenswrapper[4755]: I1210 16:50:31.919692 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-proc-0" podUID="83ca4bf3-4811-4418-af2b-0fdc5e299a00" containerName="cloudkitty-proc" containerID="cri-o://24ccadc9c45111f5cbe20607275ed9395aedb86da1f2b16db2d5ef6d0092b297" gracePeriod=30 Dec 10 16:50:31 crc kubenswrapper[4755]: I1210 16:50:31.930410 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-api-0"] Dec 10 16:50:31 crc kubenswrapper[4755]: I1210 16:50:31.930653 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-api-0" podUID="0051924b-bff8-4934-92b8-f787e29c758e" containerName="cloudkitty-api-log" containerID="cri-o://4103e4b2f069b4f2bc9a6575d879f8de044a07ef8738ca637803f0513c1899e7" gracePeriod=30 Dec 10 16:50:31 crc kubenswrapper[4755]: I1210 16:50:31.930757 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-api-0" podUID="0051924b-bff8-4934-92b8-f787e29c758e" containerName="cloudkitty-api" containerID="cri-o://59619a06643efabc2935abba8da3f7016a1f4dd3c72371fc5ac552269ec194d5" gracePeriod=30 Dec 10 16:50:32 crc kubenswrapper[4755]: I1210 16:50:32.785919 4755 generic.go:334] "Generic (PLEG): container finished" podID="0051924b-bff8-4934-92b8-f787e29c758e" containerID="4103e4b2f069b4f2bc9a6575d879f8de044a07ef8738ca637803f0513c1899e7" exitCode=143 Dec 10 16:50:32 crc kubenswrapper[4755]: I1210 16:50:32.786431 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"0051924b-bff8-4934-92b8-f787e29c758e","Type":"ContainerDied","Data":"4103e4b2f069b4f2bc9a6575d879f8de044a07ef8738ca637803f0513c1899e7"} Dec 10 16:50:32 crc kubenswrapper[4755]: I1210 16:50:32.791336 4755 generic.go:334] "Generic (PLEG): container finished" podID="83ca4bf3-4811-4418-af2b-0fdc5e299a00" containerID="24ccadc9c45111f5cbe20607275ed9395aedb86da1f2b16db2d5ef6d0092b297" exitCode=0 Dec 10 16:50:32 crc kubenswrapper[4755]: I1210 16:50:32.791370 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"83ca4bf3-4811-4418-af2b-0fdc5e299a00","Type":"ContainerDied","Data":"24ccadc9c45111f5cbe20607275ed9395aedb86da1f2b16db2d5ef6d0092b297"} Dec 10 16:50:32 crc kubenswrapper[4755]: I1210 16:50:32.805109 4755 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-api-0" podUID="0051924b-bff8-4934-92b8-f787e29c758e" containerName="cloudkitty-api" probeResult="failure" output="Get \"https://10.217.0.190:8889/healthcheck\": read tcp 10.217.0.2:38134->10.217.0.190:8889: read: connection reset by peer" Dec 10 16:50:32 crc kubenswrapper[4755]: I1210 16:50:32.992269 4755 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0" Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.074714 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kn6bl\" (UniqueName: \"kubernetes.io/projected/83ca4bf3-4811-4418-af2b-0fdc5e299a00-kube-api-access-kn6bl\") pod \"83ca4bf3-4811-4418-af2b-0fdc5e299a00\" (UID: \"83ca4bf3-4811-4418-af2b-0fdc5e299a00\") " Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.074820 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/83ca4bf3-4811-4418-af2b-0fdc5e299a00-certs\") pod \"83ca4bf3-4811-4418-af2b-0fdc5e299a00\" (UID: \"83ca4bf3-4811-4418-af2b-0fdc5e299a00\") " Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.074897 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83ca4bf3-4811-4418-af2b-0fdc5e299a00-config-data\") pod \"83ca4bf3-4811-4418-af2b-0fdc5e299a00\" (UID: \"83ca4bf3-4811-4418-af2b-0fdc5e299a00\") " Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.074926 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/83ca4bf3-4811-4418-af2b-0fdc5e299a00-config-data-custom\") pod \"83ca4bf3-4811-4418-af2b-0fdc5e299a00\" (UID: \"83ca4bf3-4811-4418-af2b-0fdc5e299a00\") " Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.074964 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83ca4bf3-4811-4418-af2b-0fdc5e299a00-scripts\") pod \"83ca4bf3-4811-4418-af2b-0fdc5e299a00\" (UID: \"83ca4bf3-4811-4418-af2b-0fdc5e299a00\") " Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.075023 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83ca4bf3-4811-4418-af2b-0fdc5e299a00-combined-ca-bundle\") pod \"83ca4bf3-4811-4418-af2b-0fdc5e299a00\" (UID: \"83ca4bf3-4811-4418-af2b-0fdc5e299a00\") " Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.100568 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83ca4bf3-4811-4418-af2b-0fdc5e299a00-certs" (OuterVolumeSpecName: "certs") pod "83ca4bf3-4811-4418-af2b-0fdc5e299a00" (UID: "83ca4bf3-4811-4418-af2b-0fdc5e299a00"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.100783 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83ca4bf3-4811-4418-af2b-0fdc5e299a00-scripts" (OuterVolumeSpecName: "scripts") pod "83ca4bf3-4811-4418-af2b-0fdc5e299a00" (UID: "83ca4bf3-4811-4418-af2b-0fdc5e299a00"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.103642 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83ca4bf3-4811-4418-af2b-0fdc5e299a00-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "83ca4bf3-4811-4418-af2b-0fdc5e299a00" (UID: "83ca4bf3-4811-4418-af2b-0fdc5e299a00"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.112745 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83ca4bf3-4811-4418-af2b-0fdc5e299a00-kube-api-access-kn6bl" (OuterVolumeSpecName: "kube-api-access-kn6bl") pod "83ca4bf3-4811-4418-af2b-0fdc5e299a00" (UID: "83ca4bf3-4811-4418-af2b-0fdc5e299a00"). InnerVolumeSpecName "kube-api-access-kn6bl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.153603 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83ca4bf3-4811-4418-af2b-0fdc5e299a00-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "83ca4bf3-4811-4418-af2b-0fdc5e299a00" (UID: "83ca4bf3-4811-4418-af2b-0fdc5e299a00"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.160039 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83ca4bf3-4811-4418-af2b-0fdc5e299a00-config-data" (OuterVolumeSpecName: "config-data") pod "83ca4bf3-4811-4418-af2b-0fdc5e299a00" (UID: "83ca4bf3-4811-4418-af2b-0fdc5e299a00"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.184569 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kn6bl\" (UniqueName: \"kubernetes.io/projected/83ca4bf3-4811-4418-af2b-0fdc5e299a00-kube-api-access-kn6bl\") on node \"crc\" DevicePath \"\"" Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.184599 4755 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/83ca4bf3-4811-4418-af2b-0fdc5e299a00-certs\") on node \"crc\" DevicePath \"\"" Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.184610 4755 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83ca4bf3-4811-4418-af2b-0fdc5e299a00-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.184620 4755 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/83ca4bf3-4811-4418-af2b-0fdc5e299a00-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.184628 4755 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83ca4bf3-4811-4418-af2b-0fdc5e299a00-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.184636 4755 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83ca4bf3-4811-4418-af2b-0fdc5e299a00-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.424876 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-api-0" Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.490847 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rhf9p\" (UniqueName: \"kubernetes.io/projected/0051924b-bff8-4934-92b8-f787e29c758e-kube-api-access-rhf9p\") pod \"0051924b-bff8-4934-92b8-f787e29c758e\" (UID: \"0051924b-bff8-4934-92b8-f787e29c758e\") " Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.490905 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/0051924b-bff8-4934-92b8-f787e29c758e-certs\") pod \"0051924b-bff8-4934-92b8-f787e29c758e\" (UID: \"0051924b-bff8-4934-92b8-f787e29c758e\") " Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.490931 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0051924b-bff8-4934-92b8-f787e29c758e-scripts\") pod \"0051924b-bff8-4934-92b8-f787e29c758e\" (UID: \"0051924b-bff8-4934-92b8-f787e29c758e\") " Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.490984 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0051924b-bff8-4934-92b8-f787e29c758e-config-data\") pod \"0051924b-bff8-4934-92b8-f787e29c758e\" (UID: \"0051924b-bff8-4934-92b8-f787e29c758e\") " Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.491051 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0051924b-bff8-4934-92b8-f787e29c758e-internal-tls-certs\") pod \"0051924b-bff8-4934-92b8-f787e29c758e\" (UID: \"0051924b-bff8-4934-92b8-f787e29c758e\") " Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.491189 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0051924b-bff8-4934-92b8-f787e29c758e-combined-ca-bundle\") pod \"0051924b-bff8-4934-92b8-f787e29c758e\" (UID: \"0051924b-bff8-4934-92b8-f787e29c758e\") " Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.491230 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0051924b-bff8-4934-92b8-f787e29c758e-public-tls-certs\") pod \"0051924b-bff8-4934-92b8-f787e29c758e\" (UID: \"0051924b-bff8-4934-92b8-f787e29c758e\") " Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.491271 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0051924b-bff8-4934-92b8-f787e29c758e-logs\") pod \"0051924b-bff8-4934-92b8-f787e29c758e\" (UID: \"0051924b-bff8-4934-92b8-f787e29c758e\") " Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.491316 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0051924b-bff8-4934-92b8-f787e29c758e-config-data-custom\") pod \"0051924b-bff8-4934-92b8-f787e29c758e\" (UID: \"0051924b-bff8-4934-92b8-f787e29c758e\") " Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.494417 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0051924b-bff8-4934-92b8-f787e29c758e-logs" (OuterVolumeSpecName: "logs") pod "0051924b-bff8-4934-92b8-f787e29c758e" (UID: "0051924b-bff8-4934-92b8-f787e29c758e"). 
InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.499685 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0051924b-bff8-4934-92b8-f787e29c758e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0051924b-bff8-4934-92b8-f787e29c758e" (UID: "0051924b-bff8-4934-92b8-f787e29c758e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.509912 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0051924b-bff8-4934-92b8-f787e29c758e-certs" (OuterVolumeSpecName: "certs") pod "0051924b-bff8-4934-92b8-f787e29c758e" (UID: "0051924b-bff8-4934-92b8-f787e29c758e"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.511755 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0051924b-bff8-4934-92b8-f787e29c758e-kube-api-access-rhf9p" (OuterVolumeSpecName: "kube-api-access-rhf9p") pod "0051924b-bff8-4934-92b8-f787e29c758e" (UID: "0051924b-bff8-4934-92b8-f787e29c758e"). InnerVolumeSpecName "kube-api-access-rhf9p". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.513229 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0051924b-bff8-4934-92b8-f787e29c758e-scripts" (OuterVolumeSpecName: "scripts") pod "0051924b-bff8-4934-92b8-f787e29c758e" (UID: "0051924b-bff8-4934-92b8-f787e29c758e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.557042 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0051924b-bff8-4934-92b8-f787e29c758e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0051924b-bff8-4934-92b8-f787e29c758e" (UID: "0051924b-bff8-4934-92b8-f787e29c758e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.594813 4755 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0051924b-bff8-4934-92b8-f787e29c758e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.594842 4755 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0051924b-bff8-4934-92b8-f787e29c758e-logs\") on node \"crc\" DevicePath \"\"" Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.594851 4755 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0051924b-bff8-4934-92b8-f787e29c758e-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.594860 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rhf9p\" (UniqueName: \"kubernetes.io/projected/0051924b-bff8-4934-92b8-f787e29c758e-kube-api-access-rhf9p\") on node \"crc\" DevicePath \"\"" Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.594870 4755 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/0051924b-bff8-4934-92b8-f787e29c758e-certs\") on node \"crc\" DevicePath \"\"" Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.594879 4755 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0051924b-bff8-4934-92b8-f787e29c758e-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.600201 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0051924b-bff8-4934-92b8-f787e29c758e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "0051924b-bff8-4934-92b8-f787e29c758e" (UID: "0051924b-bff8-4934-92b8-f787e29c758e"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.614036 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0051924b-bff8-4934-92b8-f787e29c758e-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "0051924b-bff8-4934-92b8-f787e29c758e" (UID: "0051924b-bff8-4934-92b8-f787e29c758e"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.629105 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0051924b-bff8-4934-92b8-f787e29c758e-config-data" (OuterVolumeSpecName: "config-data") pod "0051924b-bff8-4934-92b8-f787e29c758e" (UID: "0051924b-bff8-4934-92b8-f787e29c758e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.696703 4755 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0051924b-bff8-4934-92b8-f787e29c758e-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.696745 4755 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0051924b-bff8-4934-92b8-f787e29c758e-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.696758 4755 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0051924b-bff8-4934-92b8-f787e29c758e-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 16:50:33 crc kubenswrapper[4755]: E1210 16:50:33.765900 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6d104bea-ecdc-4fe1-9861-fb1a19fce845" Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.805715 4755 generic.go:334] "Generic (PLEG): container finished" podID="0051924b-bff8-4934-92b8-f787e29c758e" containerID="59619a06643efabc2935abba8da3f7016a1f4dd3c72371fc5ac552269ec194d5" exitCode=0 Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.805786 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"0051924b-bff8-4934-92b8-f787e29c758e","Type":"ContainerDied","Data":"59619a06643efabc2935abba8da3f7016a1f4dd3c72371fc5ac552269ec194d5"} Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.805819 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"0051924b-bff8-4934-92b8-f787e29c758e","Type":"ContainerDied","Data":"8dfeb9217f0a851602cc274352c84bf665c5881bd77398b207056d60566b0ac8"} Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.805840 4755 scope.go:117] "RemoveContainer" containerID="59619a06643efabc2935abba8da3f7016a1f4dd3c72371fc5ac552269ec194d5" Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.805989 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0" Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.811995 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"83ca4bf3-4811-4418-af2b-0fdc5e299a00","Type":"ContainerDied","Data":"f27de51107600b19f9413beaee2a246fbc5c83c828b1a0466eabe4becaa3c894"} Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.812135 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-proc-0" Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.836632 4755 scope.go:117] "RemoveContainer" containerID="4103e4b2f069b4f2bc9a6575d879f8de044a07ef8738ca637803f0513c1899e7" Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.849953 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-proc-0"] Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.859947 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-proc-0"] Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.870560 4755 scope.go:117] "RemoveContainer" containerID="59619a06643efabc2935abba8da3f7016a1f4dd3c72371fc5ac552269ec194d5" Dec 10 16:50:33 crc kubenswrapper[4755]: E1210 16:50:33.871026 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59619a06643efabc2935abba8da3f7016a1f4dd3c72371fc5ac552269ec194d5\": container with ID starting with 59619a06643efabc2935abba8da3f7016a1f4dd3c72371fc5ac552269ec194d5 not found: ID does not exist" containerID="59619a06643efabc2935abba8da3f7016a1f4dd3c72371fc5ac552269ec194d5" Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.871061 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59619a06643efabc2935abba8da3f7016a1f4dd3c72371fc5ac552269ec194d5"} err="failed to get container status \"59619a06643efabc2935abba8da3f7016a1f4dd3c72371fc5ac552269ec194d5\": rpc error: code = NotFound desc = could not find container \"59619a06643efabc2935abba8da3f7016a1f4dd3c72371fc5ac552269ec194d5\": container with ID starting with 59619a06643efabc2935abba8da3f7016a1f4dd3c72371fc5ac552269ec194d5 not found: ID does not exist" Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.871080 4755 scope.go:117] "RemoveContainer" containerID="4103e4b2f069b4f2bc9a6575d879f8de044a07ef8738ca637803f0513c1899e7" Dec 10 16:50:33 crc kubenswrapper[4755]: E1210 16:50:33.871408 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4103e4b2f069b4f2bc9a6575d879f8de044a07ef8738ca637803f0513c1899e7\": container with ID starting with 4103e4b2f069b4f2bc9a6575d879f8de044a07ef8738ca637803f0513c1899e7 not found: ID does not exist" containerID="4103e4b2f069b4f2bc9a6575d879f8de044a07ef8738ca637803f0513c1899e7" Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.871442 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4103e4b2f069b4f2bc9a6575d879f8de044a07ef8738ca637803f0513c1899e7"} err="failed to get container status \"4103e4b2f069b4f2bc9a6575d879f8de044a07ef8738ca637803f0513c1899e7\": rpc error: code = NotFound desc = could not find container \"4103e4b2f069b4f2bc9a6575d879f8de044a07ef8738ca637803f0513c1899e7\": container with ID starting with 4103e4b2f069b4f2bc9a6575d879f8de044a07ef8738ca637803f0513c1899e7 not found: ID does not exist" Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.871455 4755 scope.go:117] "RemoveContainer" containerID="24ccadc9c45111f5cbe20607275ed9395aedb86da1f2b16db2d5ef6d0092b297" Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.883785 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-api-0"] Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.893444 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-api-0"] Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 
16:50:33.901638 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-proc-0"] Dec 10 16:50:33 crc kubenswrapper[4755]: E1210 16:50:33.902191 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="085de3b2-f23a-4359-a057-459a8a81d898" containerName="cloudkitty-storageinit" Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.902207 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="085de3b2-f23a-4359-a057-459a8a81d898" containerName="cloudkitty-storageinit" Dec 10 16:50:33 crc kubenswrapper[4755]: E1210 16:50:33.902229 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0051924b-bff8-4934-92b8-f787e29c758e" containerName="cloudkitty-api" Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.902237 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="0051924b-bff8-4934-92b8-f787e29c758e" containerName="cloudkitty-api" Dec 10 16:50:33 crc kubenswrapper[4755]: E1210 16:50:33.902288 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0051924b-bff8-4934-92b8-f787e29c758e" containerName="cloudkitty-api-log" Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.902297 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="0051924b-bff8-4934-92b8-f787e29c758e" containerName="cloudkitty-api-log" Dec 10 16:50:33 crc kubenswrapper[4755]: E1210 16:50:33.902307 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83ca4bf3-4811-4418-af2b-0fdc5e299a00" containerName="cloudkitty-proc" Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.902314 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="83ca4bf3-4811-4418-af2b-0fdc5e299a00" containerName="cloudkitty-proc" Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.902576 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="83ca4bf3-4811-4418-af2b-0fdc5e299a00" containerName="cloudkitty-proc" Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.902594 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="085de3b2-f23a-4359-a057-459a8a81d898" containerName="cloudkitty-storageinit" Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.902609 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="0051924b-bff8-4934-92b8-f787e29c758e" containerName="cloudkitty-api" Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.902634 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="0051924b-bff8-4934-92b8-f787e29c758e" containerName="cloudkitty-api-log" Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.903742 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-proc-0" Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.906361 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-scripts" Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.906937 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-config-data" Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.907101 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-client-internal" Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.907141 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-cloudkitty-dockercfg-6f74p" Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.907107 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-proc-config-data" Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.925512 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-api-0"] Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.927353 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0" Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.928936 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-api-config-data" Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.929027 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-public-svc" Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.934004 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-internal-svc" Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.949615 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"] Dec 10 16:50:33 crc kubenswrapper[4755]: I1210 16:50:33.958399 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"] Dec 10 16:50:34 crc kubenswrapper[4755]: I1210 16:50:34.004711 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xx7w2\" (UniqueName: \"kubernetes.io/projected/a16014c1-2894-4c15-8c1e-24325dd91aa8-kube-api-access-xx7w2\") pod \"cloudkitty-proc-0\" (UID: \"a16014c1-2894-4c15-8c1e-24325dd91aa8\") " pod="openstack/cloudkitty-proc-0" Dec 10 16:50:34 crc kubenswrapper[4755]: I1210 16:50:34.004763 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/239c0154-72d2-4617-8be2-ecc33a8a7a2e-logs\") pod \"cloudkitty-api-0\" (UID: \"239c0154-72d2-4617-8be2-ecc33a8a7a2e\") " pod="openstack/cloudkitty-api-0" Dec 10 16:50:34 crc kubenswrapper[4755]: I1210 16:50:34.004793 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/239c0154-72d2-4617-8be2-ecc33a8a7a2e-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"239c0154-72d2-4617-8be2-ecc33a8a7a2e\") " pod="openstack/cloudkitty-api-0" Dec 10 16:50:34 crc kubenswrapper[4755]: I1210 16:50:34.004943 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/239c0154-72d2-4617-8be2-ecc33a8a7a2e-certs\") pod \"cloudkitty-api-0\" (UID: \"239c0154-72d2-4617-8be2-ecc33a8a7a2e\") " 
pod="openstack/cloudkitty-api-0" Dec 10 16:50:34 crc kubenswrapper[4755]: I1210 16:50:34.005063 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/239c0154-72d2-4617-8be2-ecc33a8a7a2e-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"239c0154-72d2-4617-8be2-ecc33a8a7a2e\") " pod="openstack/cloudkitty-api-0" Dec 10 16:50:34 crc kubenswrapper[4755]: I1210 16:50:34.005186 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/239c0154-72d2-4617-8be2-ecc33a8a7a2e-scripts\") pod \"cloudkitty-api-0\" (UID: \"239c0154-72d2-4617-8be2-ecc33a8a7a2e\") " pod="openstack/cloudkitty-api-0" Dec 10 16:50:34 crc kubenswrapper[4755]: I1210 16:50:34.005207 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/239c0154-72d2-4617-8be2-ecc33a8a7a2e-config-data\") pod \"cloudkitty-api-0\" (UID: \"239c0154-72d2-4617-8be2-ecc33a8a7a2e\") " pod="openstack/cloudkitty-api-0" Dec 10 16:50:34 crc kubenswrapper[4755]: I1210 16:50:34.005268 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a16014c1-2894-4c15-8c1e-24325dd91aa8-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"a16014c1-2894-4c15-8c1e-24325dd91aa8\") " pod="openstack/cloudkitty-proc-0" Dec 10 16:50:34 crc kubenswrapper[4755]: I1210 16:50:34.005301 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a16014c1-2894-4c15-8c1e-24325dd91aa8-config-data\") pod \"cloudkitty-proc-0\" (UID: \"a16014c1-2894-4c15-8c1e-24325dd91aa8\") " pod="openstack/cloudkitty-proc-0" Dec 10 16:50:34 crc kubenswrapper[4755]: I1210 16:50:34.005352 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/239c0154-72d2-4617-8be2-ecc33a8a7a2e-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"239c0154-72d2-4617-8be2-ecc33a8a7a2e\") " pod="openstack/cloudkitty-api-0" Dec 10 16:50:34 crc kubenswrapper[4755]: I1210 16:50:34.005377 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4gms\" (UniqueName: \"kubernetes.io/projected/239c0154-72d2-4617-8be2-ecc33a8a7a2e-kube-api-access-b4gms\") pod \"cloudkitty-api-0\" (UID: \"239c0154-72d2-4617-8be2-ecc33a8a7a2e\") " pod="openstack/cloudkitty-api-0" Dec 10 16:50:34 crc kubenswrapper[4755]: I1210 16:50:34.005448 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a16014c1-2894-4c15-8c1e-24325dd91aa8-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"a16014c1-2894-4c15-8c1e-24325dd91aa8\") " pod="openstack/cloudkitty-proc-0" Dec 10 16:50:34 crc kubenswrapper[4755]: I1210 16:50:34.005512 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a16014c1-2894-4c15-8c1e-24325dd91aa8-scripts\") pod \"cloudkitty-proc-0\" (UID: \"a16014c1-2894-4c15-8c1e-24325dd91aa8\") " pod="openstack/cloudkitty-proc-0" Dec 10 16:50:34 crc kubenswrapper[4755]: I1210 16:50:34.005537 4755 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/239c0154-72d2-4617-8be2-ecc33a8a7a2e-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"239c0154-72d2-4617-8be2-ecc33a8a7a2e\") " pod="openstack/cloudkitty-api-0" Dec 10 16:50:34 crc kubenswrapper[4755]: I1210 16:50:34.005557 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/a16014c1-2894-4c15-8c1e-24325dd91aa8-certs\") pod \"cloudkitty-proc-0\" (UID: \"a16014c1-2894-4c15-8c1e-24325dd91aa8\") " pod="openstack/cloudkitty-proc-0" Dec 10 16:50:34 crc kubenswrapper[4755]: I1210 16:50:34.107661 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a16014c1-2894-4c15-8c1e-24325dd91aa8-scripts\") pod \"cloudkitty-proc-0\" (UID: \"a16014c1-2894-4c15-8c1e-24325dd91aa8\") " pod="openstack/cloudkitty-proc-0" Dec 10 16:50:34 crc kubenswrapper[4755]: I1210 16:50:34.107717 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/239c0154-72d2-4617-8be2-ecc33a8a7a2e-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"239c0154-72d2-4617-8be2-ecc33a8a7a2e\") " pod="openstack/cloudkitty-api-0" Dec 10 16:50:34 crc kubenswrapper[4755]: I1210 16:50:34.107741 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/a16014c1-2894-4c15-8c1e-24325dd91aa8-certs\") pod \"cloudkitty-proc-0\" (UID: \"a16014c1-2894-4c15-8c1e-24325dd91aa8\") " pod="openstack/cloudkitty-proc-0" Dec 10 16:50:34 crc kubenswrapper[4755]: I1210 16:50:34.107784 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xx7w2\" (UniqueName: \"kubernetes.io/projected/a16014c1-2894-4c15-8c1e-24325dd91aa8-kube-api-access-xx7w2\") pod \"cloudkitty-proc-0\" (UID: \"a16014c1-2894-4c15-8c1e-24325dd91aa8\") " pod="openstack/cloudkitty-proc-0" Dec 10 16:50:34 crc kubenswrapper[4755]: I1210 16:50:34.107807 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/239c0154-72d2-4617-8be2-ecc33a8a7a2e-logs\") pod \"cloudkitty-api-0\" (UID: \"239c0154-72d2-4617-8be2-ecc33a8a7a2e\") " pod="openstack/cloudkitty-api-0" Dec 10 16:50:34 crc kubenswrapper[4755]: I1210 16:50:34.107830 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/239c0154-72d2-4617-8be2-ecc33a8a7a2e-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"239c0154-72d2-4617-8be2-ecc33a8a7a2e\") " pod="openstack/cloudkitty-api-0" Dec 10 16:50:34 crc kubenswrapper[4755]: I1210 16:50:34.107857 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/239c0154-72d2-4617-8be2-ecc33a8a7a2e-certs\") pod \"cloudkitty-api-0\" (UID: \"239c0154-72d2-4617-8be2-ecc33a8a7a2e\") " pod="openstack/cloudkitty-api-0" Dec 10 16:50:34 crc kubenswrapper[4755]: I1210 16:50:34.107879 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/239c0154-72d2-4617-8be2-ecc33a8a7a2e-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"239c0154-72d2-4617-8be2-ecc33a8a7a2e\") " 
pod="openstack/cloudkitty-api-0" Dec 10 16:50:34 crc kubenswrapper[4755]: I1210 16:50:34.108674 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/239c0154-72d2-4617-8be2-ecc33a8a7a2e-scripts\") pod \"cloudkitty-api-0\" (UID: \"239c0154-72d2-4617-8be2-ecc33a8a7a2e\") " pod="openstack/cloudkitty-api-0" Dec 10 16:50:34 crc kubenswrapper[4755]: I1210 16:50:34.108710 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/239c0154-72d2-4617-8be2-ecc33a8a7a2e-config-data\") pod \"cloudkitty-api-0\" (UID: \"239c0154-72d2-4617-8be2-ecc33a8a7a2e\") " pod="openstack/cloudkitty-api-0" Dec 10 16:50:34 crc kubenswrapper[4755]: I1210 16:50:34.108748 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a16014c1-2894-4c15-8c1e-24325dd91aa8-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"a16014c1-2894-4c15-8c1e-24325dd91aa8\") " pod="openstack/cloudkitty-proc-0" Dec 10 16:50:34 crc kubenswrapper[4755]: I1210 16:50:34.108770 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a16014c1-2894-4c15-8c1e-24325dd91aa8-config-data\") pod \"cloudkitty-proc-0\" (UID: \"a16014c1-2894-4c15-8c1e-24325dd91aa8\") " pod="openstack/cloudkitty-proc-0" Dec 10 16:50:34 crc kubenswrapper[4755]: I1210 16:50:34.108809 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/239c0154-72d2-4617-8be2-ecc33a8a7a2e-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"239c0154-72d2-4617-8be2-ecc33a8a7a2e\") " pod="openstack/cloudkitty-api-0" Dec 10 16:50:34 crc kubenswrapper[4755]: I1210 16:50:34.108836 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4gms\" (UniqueName: \"kubernetes.io/projected/239c0154-72d2-4617-8be2-ecc33a8a7a2e-kube-api-access-b4gms\") pod \"cloudkitty-api-0\" (UID: \"239c0154-72d2-4617-8be2-ecc33a8a7a2e\") " pod="openstack/cloudkitty-api-0" Dec 10 16:50:34 crc kubenswrapper[4755]: I1210 16:50:34.108896 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a16014c1-2894-4c15-8c1e-24325dd91aa8-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"a16014c1-2894-4c15-8c1e-24325dd91aa8\") " pod="openstack/cloudkitty-proc-0" Dec 10 16:50:34 crc kubenswrapper[4755]: I1210 16:50:34.109009 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/239c0154-72d2-4617-8be2-ecc33a8a7a2e-logs\") pod \"cloudkitty-api-0\" (UID: \"239c0154-72d2-4617-8be2-ecc33a8a7a2e\") " pod="openstack/cloudkitty-api-0" Dec 10 16:50:34 crc kubenswrapper[4755]: I1210 16:50:34.113298 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/239c0154-72d2-4617-8be2-ecc33a8a7a2e-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"239c0154-72d2-4617-8be2-ecc33a8a7a2e\") " pod="openstack/cloudkitty-api-0" Dec 10 16:50:34 crc kubenswrapper[4755]: I1210 16:50:34.113991 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/239c0154-72d2-4617-8be2-ecc33a8a7a2e-config-data\") pod \"cloudkitty-api-0\" (UID: 
\"239c0154-72d2-4617-8be2-ecc33a8a7a2e\") " pod="openstack/cloudkitty-api-0" Dec 10 16:50:34 crc kubenswrapper[4755]: I1210 16:50:34.114126 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/a16014c1-2894-4c15-8c1e-24325dd91aa8-certs\") pod \"cloudkitty-proc-0\" (UID: \"a16014c1-2894-4c15-8c1e-24325dd91aa8\") " pod="openstack/cloudkitty-proc-0" Dec 10 16:50:34 crc kubenswrapper[4755]: I1210 16:50:34.114797 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/239c0154-72d2-4617-8be2-ecc33a8a7a2e-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"239c0154-72d2-4617-8be2-ecc33a8a7a2e\") " pod="openstack/cloudkitty-api-0" Dec 10 16:50:34 crc kubenswrapper[4755]: I1210 16:50:34.114868 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/239c0154-72d2-4617-8be2-ecc33a8a7a2e-certs\") pod \"cloudkitty-api-0\" (UID: \"239c0154-72d2-4617-8be2-ecc33a8a7a2e\") " pod="openstack/cloudkitty-api-0" Dec 10 16:50:34 crc kubenswrapper[4755]: I1210 16:50:34.115847 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a16014c1-2894-4c15-8c1e-24325dd91aa8-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"a16014c1-2894-4c15-8c1e-24325dd91aa8\") " pod="openstack/cloudkitty-proc-0" Dec 10 16:50:34 crc kubenswrapper[4755]: I1210 16:50:34.118523 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a16014c1-2894-4c15-8c1e-24325dd91aa8-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"a16014c1-2894-4c15-8c1e-24325dd91aa8\") " pod="openstack/cloudkitty-proc-0" Dec 10 16:50:34 crc kubenswrapper[4755]: I1210 16:50:34.119640 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/239c0154-72d2-4617-8be2-ecc33a8a7a2e-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"239c0154-72d2-4617-8be2-ecc33a8a7a2e\") " pod="openstack/cloudkitty-api-0" Dec 10 16:50:34 crc kubenswrapper[4755]: I1210 16:50:34.120144 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a16014c1-2894-4c15-8c1e-24325dd91aa8-scripts\") pod \"cloudkitty-proc-0\" (UID: \"a16014c1-2894-4c15-8c1e-24325dd91aa8\") " pod="openstack/cloudkitty-proc-0" Dec 10 16:50:34 crc kubenswrapper[4755]: I1210 16:50:34.123568 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/239c0154-72d2-4617-8be2-ecc33a8a7a2e-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"239c0154-72d2-4617-8be2-ecc33a8a7a2e\") " pod="openstack/cloudkitty-api-0" Dec 10 16:50:34 crc kubenswrapper[4755]: I1210 16:50:34.124340 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/239c0154-72d2-4617-8be2-ecc33a8a7a2e-scripts\") pod \"cloudkitty-api-0\" (UID: \"239c0154-72d2-4617-8be2-ecc33a8a7a2e\") " pod="openstack/cloudkitty-api-0" Dec 10 16:50:34 crc kubenswrapper[4755]: I1210 16:50:34.125234 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4gms\" (UniqueName: \"kubernetes.io/projected/239c0154-72d2-4617-8be2-ecc33a8a7a2e-kube-api-access-b4gms\") pod \"cloudkitty-api-0\" (UID: 
\"239c0154-72d2-4617-8be2-ecc33a8a7a2e\") " pod="openstack/cloudkitty-api-0" Dec 10 16:50:34 crc kubenswrapper[4755]: I1210 16:50:34.129483 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a16014c1-2894-4c15-8c1e-24325dd91aa8-config-data\") pod \"cloudkitty-proc-0\" (UID: \"a16014c1-2894-4c15-8c1e-24325dd91aa8\") " pod="openstack/cloudkitty-proc-0" Dec 10 16:50:34 crc kubenswrapper[4755]: I1210 16:50:34.136882 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xx7w2\" (UniqueName: \"kubernetes.io/projected/a16014c1-2894-4c15-8c1e-24325dd91aa8-kube-api-access-xx7w2\") pod \"cloudkitty-proc-0\" (UID: \"a16014c1-2894-4c15-8c1e-24325dd91aa8\") " pod="openstack/cloudkitty-proc-0" Dec 10 16:50:34 crc kubenswrapper[4755]: I1210 16:50:34.223381 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0" Dec 10 16:50:34 crc kubenswrapper[4755]: I1210 16:50:34.247716 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0" Dec 10 16:50:34 crc kubenswrapper[4755]: W1210 16:50:34.702648 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda16014c1_2894_4c15_8c1e_24325dd91aa8.slice/crio-807f23b4bcc6002d2d97871ee91faf4e7d6bfdbd332596ccb2cd0f6169f9ccef WatchSource:0}: Error finding container 807f23b4bcc6002d2d97871ee91faf4e7d6bfdbd332596ccb2cd0f6169f9ccef: Status 404 returned error can't find the container with id 807f23b4bcc6002d2d97871ee91faf4e7d6bfdbd332596ccb2cd0f6169f9ccef Dec 10 16:50:34 crc kubenswrapper[4755]: I1210 16:50:34.708825 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"] Dec 10 16:50:34 crc kubenswrapper[4755]: I1210 16:50:34.818084 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"] Dec 10 16:50:34 crc kubenswrapper[4755]: I1210 16:50:34.834013 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"a16014c1-2894-4c15-8c1e-24325dd91aa8","Type":"ContainerStarted","Data":"807f23b4bcc6002d2d97871ee91faf4e7d6bfdbd332596ccb2cd0f6169f9ccef"} Dec 10 16:50:35 crc kubenswrapper[4755]: I1210 16:50:35.802700 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0051924b-bff8-4934-92b8-f787e29c758e" path="/var/lib/kubelet/pods/0051924b-bff8-4934-92b8-f787e29c758e/volumes" Dec 10 16:50:35 crc kubenswrapper[4755]: I1210 16:50:35.805153 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83ca4bf3-4811-4418-af2b-0fdc5e299a00" path="/var/lib/kubelet/pods/83ca4bf3-4811-4418-af2b-0fdc5e299a00/volumes" Dec 10 16:50:35 crc kubenswrapper[4755]: I1210 16:50:35.845169 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"239c0154-72d2-4617-8be2-ecc33a8a7a2e","Type":"ContainerStarted","Data":"af9f783d48bb778d108013127bd3ac5341a7d06998e8bdf1d907f215f220ad3a"} Dec 10 16:50:35 crc kubenswrapper[4755]: I1210 16:50:35.845249 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"239c0154-72d2-4617-8be2-ecc33a8a7a2e","Type":"ContainerStarted","Data":"9c6643ddee4cecd1f1995e41879eae94a27259fabcadf9bdbbff716598dd2ec5"} Dec 10 16:50:35 crc kubenswrapper[4755]: I1210 16:50:35.845261 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" 
event={"ID":"239c0154-72d2-4617-8be2-ecc33a8a7a2e","Type":"ContainerStarted","Data":"9daaa958bbd4a052650ec11c241e07355d1fdfb96a8c4153ac7ec973687b9df7"} Dec 10 16:50:35 crc kubenswrapper[4755]: I1210 16:50:35.845614 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-api-0" Dec 10 16:50:35 crc kubenswrapper[4755]: I1210 16:50:35.846601 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"a16014c1-2894-4c15-8c1e-24325dd91aa8","Type":"ContainerStarted","Data":"c98eb1b37c242b85df0b591edafa3295fc5e5a33f26acced7ab77b69b300ad90"} Dec 10 16:50:35 crc kubenswrapper[4755]: I1210 16:50:35.882728 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-api-0" podStartSLOduration=2.882712654 podStartE2EDuration="2.882712654s" podCreationTimestamp="2025-12-10 16:50:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 16:50:35.876536986 +0000 UTC m=+5232.477420708" watchObservedRunningTime="2025-12-10 16:50:35.882712654 +0000 UTC m=+5232.483596286" Dec 10 16:50:35 crc kubenswrapper[4755]: I1210 16:50:35.901235 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-proc-0" podStartSLOduration=2.720710303 podStartE2EDuration="2.901220296s" podCreationTimestamp="2025-12-10 16:50:33 +0000 UTC" firstStartedPulling="2025-12-10 16:50:34.704737139 +0000 UTC m=+5231.305620771" lastFinishedPulling="2025-12-10 16:50:34.885247132 +0000 UTC m=+5231.486130764" observedRunningTime="2025-12-10 16:50:35.897652189 +0000 UTC m=+5232.498535821" watchObservedRunningTime="2025-12-10 16:50:35.901220296 +0000 UTC m=+5232.502103928" Dec 10 16:50:38 crc kubenswrapper[4755]: I1210 16:50:38.051415 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_376461c9-8e89-4c8c-bcef-6a873320a293/init-config-reloader/0.log" Dec 10 16:50:38 crc kubenswrapper[4755]: I1210 16:50:38.226705 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_376461c9-8e89-4c8c-bcef-6a873320a293/init-config-reloader/0.log" Dec 10 16:50:38 crc kubenswrapper[4755]: I1210 16:50:38.315067 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_376461c9-8e89-4c8c-bcef-6a873320a293/config-reloader/0.log" Dec 10 16:50:38 crc kubenswrapper[4755]: I1210 16:50:38.352375 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_376461c9-8e89-4c8c-bcef-6a873320a293/alertmanager/0.log" Dec 10 16:50:38 crc kubenswrapper[4755]: I1210 16:50:38.425595 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-55f9947ffb-mpljd_add12461-ad93-468e-9e20-de46b26414a0/barbican-api/0.log" Dec 10 16:50:38 crc kubenswrapper[4755]: I1210 16:50:38.544104 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-55f9947ffb-mpljd_add12461-ad93-468e-9e20-de46b26414a0/barbican-api-log/0.log" Dec 10 16:50:38 crc kubenswrapper[4755]: I1210 16:50:38.560123 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-979964fb-8vrlp_a5eb4f86-4f65-41b4-8694-279c44c08491/barbican-keystone-listener/0.log" Dec 10 16:50:38 crc kubenswrapper[4755]: I1210 16:50:38.660362 4755 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-keystone-listener-979964fb-8vrlp_a5eb4f86-4f65-41b4-8694-279c44c08491/barbican-keystone-listener-log/0.log" Dec 10 16:50:38 crc kubenswrapper[4755]: I1210 16:50:38.795537 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5d855b58d9-fzhd2_83de1ea1-f46a-43ab-9a89-f5980d7bed78/barbican-worker/0.log" Dec 10 16:50:38 crc kubenswrapper[4755]: I1210 16:50:38.867792 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5d855b58d9-fzhd2_83de1ea1-f46a-43ab-9a89-f5980d7bed78/barbican-worker-log/0.log" Dec 10 16:50:39 crc kubenswrapper[4755]: I1210 16:50:39.071409 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-jwpvp_c36d5e87-d120-4bf2-8680-cd2c7634f1cf/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Dec 10 16:50:39 crc kubenswrapper[4755]: I1210 16:50:39.154696 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_6d104bea-ecdc-4fe1-9861-fb1a19fce845/ceilometer-notification-agent/0.log" Dec 10 16:50:39 crc kubenswrapper[4755]: I1210 16:50:39.289789 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_6d104bea-ecdc-4fe1-9861-fb1a19fce845/sg-core/0.log" Dec 10 16:50:39 crc kubenswrapper[4755]: I1210 16:50:39.307327 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_6d104bea-ecdc-4fe1-9861-fb1a19fce845/proxy-httpd/0.log" Dec 10 16:50:39 crc kubenswrapper[4755]: I1210 16:50:39.455056 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_900a05ac-78b6-44d0-9499-2dbfb52fcdfc/cinder-api/0.log" Dec 10 16:50:39 crc kubenswrapper[4755]: I1210 16:50:39.534154 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_900a05ac-78b6-44d0-9499-2dbfb52fcdfc/cinder-api-log/0.log" Dec 10 16:50:39 crc kubenswrapper[4755]: I1210 16:50:39.671035 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_fd0b3d98-9eda-4a87-8540-4a15ec2c174d/cinder-scheduler/0.log" Dec 10 16:50:39 crc kubenswrapper[4755]: I1210 16:50:39.715854 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_fd0b3d98-9eda-4a87-8540-4a15ec2c174d/probe/0.log" Dec 10 16:50:39 crc kubenswrapper[4755]: I1210 16:50:39.851113 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-api-0_239c0154-72d2-4617-8be2-ecc33a8a7a2e/cloudkitty-api/0.log" Dec 10 16:50:39 crc kubenswrapper[4755]: I1210 16:50:39.933155 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-api-0_239c0154-72d2-4617-8be2-ecc33a8a7a2e/cloudkitty-api-log/0.log" Dec 10 16:50:40 crc kubenswrapper[4755]: I1210 16:50:40.081391 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-db-sync-jfc28_998863b6-4f48-4c8b-8011-a40377686b99/cloudkitty-db-sync/0.log" Dec 10 16:50:40 crc kubenswrapper[4755]: I1210 16:50:40.226952 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-compactor-0_d69d2cc3-cf06-420b-a629-1a1a924eee12/loki-compactor/0.log" Dec 10 16:50:40 crc kubenswrapper[4755]: I1210 16:50:40.476352 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-distributor-66dfd9bb-5zwd4_56a3d20e-f422-40f4-bbe3-fc61da743389/loki-distributor/0.log" Dec 10 16:50:40 crc kubenswrapper[4755]: I1210 16:50:40.540542 
4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-gateway-7db4f4db8c-c65qj_996e9361-db27-4210-8a4d-92a76a7874aa/gateway/0.log" Dec 10 16:50:40 crc kubenswrapper[4755]: I1210 16:50:40.753517 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-gateway-7db4f4db8c-cnjnh_6414fa30-9e0e-4dc8-99aa-d35799f2cb46/gateway/0.log" Dec 10 16:50:40 crc kubenswrapper[4755]: I1210 16:50:40.800995 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-index-gateway-0_e72469cb-a78b-45a9-8fea-afb38a2c78dc/loki-index-gateway/0.log" Dec 10 16:50:40 crc kubenswrapper[4755]: I1210 16:50:40.963085 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-ingester-0_ceb83259-f1d9-4219-a0c3-b42d35e2dc02/loki-ingester/0.log" Dec 10 16:50:41 crc kubenswrapper[4755]: I1210 16:50:41.085324 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-querier-795fd8f8cc-qjtx7_051535b2-1182-4452-a267-16d22047e3d3/loki-querier/0.log" Dec 10 16:50:41 crc kubenswrapper[4755]: I1210 16:50:41.187683 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-query-frontend-5cd44666df-5kvgz_779f3508-3735-4419-8503-834dc6f5b298/loki-query-frontend/0.log" Dec 10 16:50:41 crc kubenswrapper[4755]: I1210 16:50:41.246185 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-proc-0_a16014c1-2894-4c15-8c1e-24325dd91aa8/cloudkitty-proc/0.log" Dec 10 16:50:41 crc kubenswrapper[4755]: I1210 16:50:41.389228 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-storageinit-rvkrd_085de3b2-f23a-4359-a057-459a8a81d898/cloudkitty-storageinit/0.log" Dec 10 16:50:41 crc kubenswrapper[4755]: I1210 16:50:41.433924 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-c4b758ff5-f4t6l_3215d8ec-c0b3-4fda-a96e-4ed078293493/init/0.log" Dec 10 16:50:41 crc kubenswrapper[4755]: I1210 16:50:41.589571 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-c4b758ff5-f4t6l_3215d8ec-c0b3-4fda-a96e-4ed078293493/init/0.log" Dec 10 16:50:41 crc kubenswrapper[4755]: I1210 16:50:41.603639 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-c4b758ff5-f4t6l_3215d8ec-c0b3-4fda-a96e-4ed078293493/dnsmasq-dns/0.log" Dec 10 16:50:41 crc kubenswrapper[4755]: I1210 16:50:41.647760 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-246lg_e893969f-84c7-4d33-a977-13cdc1a9ef2e/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 10 16:50:41 crc kubenswrapper[4755]: I1210 16:50:41.833511 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-7fj8w_cce50278-7a20-499b-bbe8-7304224cc6e4/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 10 16:50:41 crc kubenswrapper[4755]: I1210 16:50:41.844359 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-8fhgf_696db2de-32c6-4679-965f-ec8d2a52ae64/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 10 16:50:41 crc kubenswrapper[4755]: I1210 16:50:41.948727 4755 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-bdgt7_40fb2154-25cc-4263-beb4-f375fce600d1/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 10 16:50:42 crc kubenswrapper[4755]: I1210 16:50:42.029265 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-mhhqh_72c64052-d330-4d83-a2b5-37e7c7233934/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 10 16:50:42 crc kubenswrapper[4755]: I1210 16:50:42.179451 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-mwqmw_b4ab39f5-c779-4d0c-9497-5c7c567dc0bc/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 10 16:50:42 crc kubenswrapper[4755]: I1210 16:50:42.263971 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-pq5sx_48fe9944-e282-45c9-b9b2-6716af358188/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 10 16:50:42 crc kubenswrapper[4755]: I1210 16:50:42.349420 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_dd0a8c00-fa43-4605-9c34-4f3e86a6e92a/glance-httpd/0.log" Dec 10 16:50:42 crc kubenswrapper[4755]: I1210 16:50:42.378314 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_dd0a8c00-fa43-4605-9c34-4f3e86a6e92a/glance-log/0.log" Dec 10 16:50:42 crc kubenswrapper[4755]: I1210 16:50:42.453190 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_650da1cb-bb89-41e8-bd6c-3cad85726723/glance-log/0.log" Dec 10 16:50:42 crc kubenswrapper[4755]: I1210 16:50:42.480233 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_650da1cb-bb89-41e8-bd6c-3cad85726723/glance-httpd/0.log" Dec 10 16:50:42 crc kubenswrapper[4755]: I1210 16:50:42.671203 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-764c845c6f-lbzq2_73f09975-30b1-46a8-a34e-ccb4683adf6c/keystone-api/0.log" Dec 10 16:50:42 crc kubenswrapper[4755]: I1210 16:50:42.700376 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29423041-9vf8s_53fbaf86-07f0-41db-b467-1b101d16fc8d/keystone-cron/0.log" Dec 10 16:50:42 crc kubenswrapper[4755]: I1210 16:50:42.792161 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_6a99a8eb-9c08-42c4-ba66-5c5b641b39b1/kube-state-metrics/0.log" Dec 10 16:50:42 crc kubenswrapper[4755]: I1210 16:50:42.968907 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5f46cf586c-brqwd_efbad6ea-87b6-40ec-b2a6-542e31d18e69/neutron-api/0.log" Dec 10 16:50:43 crc kubenswrapper[4755]: I1210 16:50:43.051590 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5f46cf586c-brqwd_efbad6ea-87b6-40ec-b2a6-542e31d18e69/neutron-httpd/0.log" Dec 10 16:50:43 crc kubenswrapper[4755]: I1210 16:50:43.351739 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_203a68f8-a70b-45c0-8fd5-37c56b10fe90/nova-api-log/0.log" Dec 10 16:50:43 crc kubenswrapper[4755]: I1210 16:50:43.522418 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_1be7309b-6fe2-437d-adc5-c5e7f1f351e9/nova-cell0-conductor-conductor/0.log" Dec 10 16:50:43 crc kubenswrapper[4755]: I1210 16:50:43.639194 4755 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_203a68f8-a70b-45c0-8fd5-37c56b10fe90/nova-api-api/0.log" Dec 10 16:50:43 crc kubenswrapper[4755]: I1210 16:50:43.705321 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_8c0bd361-24a4-4d1f-ba5c-6614b244f726/nova-cell1-conductor-conductor/0.log" Dec 10 16:50:43 crc kubenswrapper[4755]: I1210 16:50:43.921235 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_bfeba145-c5bb-4035-93b6-ce1f9ce9c68e/nova-cell1-novncproxy-novncproxy/0.log" Dec 10 16:50:44 crc kubenswrapper[4755]: I1210 16:50:44.009483 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_6457b1e1-42e4-46b2-bc4b-6bbd9451131e/nova-metadata-log/0.log" Dec 10 16:50:44 crc kubenswrapper[4755]: I1210 16:50:44.264993 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_c0cfe6de-3c35-486c-a767-35484b3a0f3d/mysql-bootstrap/0.log" Dec 10 16:50:44 crc kubenswrapper[4755]: I1210 16:50:44.311007 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_ae382d2b-a255-4ec6-8bf4-d70a8d3a7a4e/nova-scheduler-scheduler/0.log" Dec 10 16:50:44 crc kubenswrapper[4755]: I1210 16:50:44.478131 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_c0cfe6de-3c35-486c-a767-35484b3a0f3d/mysql-bootstrap/0.log" Dec 10 16:50:44 crc kubenswrapper[4755]: I1210 16:50:44.528927 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_c0cfe6de-3c35-486c-a767-35484b3a0f3d/galera/0.log" Dec 10 16:50:44 crc kubenswrapper[4755]: I1210 16:50:44.667426 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_48b9cc99-2595-445c-aca6-b13972e95324/mysql-bootstrap/0.log" Dec 10 16:50:44 crc kubenswrapper[4755]: I1210 16:50:44.927861 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_48b9cc99-2595-445c-aca6-b13972e95324/galera/0.log" Dec 10 16:50:44 crc kubenswrapper[4755]: I1210 16:50:44.931743 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_48b9cc99-2595-445c-aca6-b13972e95324/mysql-bootstrap/0.log" Dec 10 16:50:45 crc kubenswrapper[4755]: I1210 16:50:45.103049 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_a9d4b4ae-84b2-4971-aafc-6f5bdad0b69d/openstackclient/0.log" Dec 10 16:50:45 crc kubenswrapper[4755]: I1210 16:50:45.239094 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-lxfpz_a2d233ea-7ff9-4ce1-ada7-40d66f801cea/openstack-network-exporter/0.log" Dec 10 16:50:45 crc kubenswrapper[4755]: I1210 16:50:45.859549 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_6457b1e1-42e4-46b2-bc4b-6bbd9451131e/nova-metadata-metadata/0.log" Dec 10 16:50:45 crc kubenswrapper[4755]: I1210 16:50:45.900066 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-x972h_7b79f2f6-2414-4403-8c2e-b58f114d941a/ovsdb-server-init/0.log" Dec 10 16:50:46 crc kubenswrapper[4755]: I1210 16:50:46.124251 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-x972h_7b79f2f6-2414-4403-8c2e-b58f114d941a/ovs-vswitchd/0.log" Dec 10 16:50:46 crc kubenswrapper[4755]: I1210 16:50:46.150741 4755 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_ovn-controller-ovs-x972h_7b79f2f6-2414-4403-8c2e-b58f114d941a/ovsdb-server-init/0.log" Dec 10 16:50:46 crc kubenswrapper[4755]: I1210 16:50:46.176012 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-x972h_7b79f2f6-2414-4403-8c2e-b58f114d941a/ovsdb-server/0.log" Dec 10 16:50:46 crc kubenswrapper[4755]: I1210 16:50:46.363053 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-q6n4p_46b6df85-96b1-4583-a80f-97a5d980cc72/ovn-controller/0.log" Dec 10 16:50:46 crc kubenswrapper[4755]: I1210 16:50:46.400419 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_3423b67e-8bda-4237-a94e-82cf18faf1c2/openstack-network-exporter/0.log" Dec 10 16:50:46 crc kubenswrapper[4755]: I1210 16:50:46.526289 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_3423b67e-8bda-4237-a94e-82cf18faf1c2/ovn-northd/0.log" Dec 10 16:50:46 crc kubenswrapper[4755]: I1210 16:50:46.620572 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_e6228d01-72c0-4088-a51d-e90dc686a41a/openstack-network-exporter/0.log" Dec 10 16:50:46 crc kubenswrapper[4755]: I1210 16:50:46.674374 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_e6228d01-72c0-4088-a51d-e90dc686a41a/ovsdbserver-nb/0.log" Dec 10 16:50:46 crc kubenswrapper[4755]: I1210 16:50:46.757232 4755 scope.go:117] "RemoveContainer" containerID="c02a5ae0f2b694ce1165db44430b56590e65e71f05780d61256f730ed3b1326e" Dec 10 16:50:46 crc kubenswrapper[4755]: E1210 16:50:46.757557 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:50:46 crc kubenswrapper[4755]: I1210 16:50:46.760155 4755 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 10 16:50:46 crc kubenswrapper[4755]: I1210 16:50:46.851342 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_d586c26d-c444-4202-b286-522cfc372f16/openstack-network-exporter/0.log" Dec 10 16:50:47 crc kubenswrapper[4755]: I1210 16:50:47.126378 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_d586c26d-c444-4202-b286-522cfc372f16/ovsdbserver-sb/0.log" Dec 10 16:50:47 crc kubenswrapper[4755]: I1210 16:50:47.256342 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-75b8ff9576-fcxhh_c3c11b73-3e0e-4c7e-ac2f-943e44b99d92/placement-api/0.log" Dec 10 16:50:47 crc kubenswrapper[4755]: I1210 16:50:47.337941 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-75b8ff9576-fcxhh_c3c11b73-3e0e-4c7e-ac2f-943e44b99d92/placement-log/0.log" Dec 10 16:50:47 crc kubenswrapper[4755]: I1210 16:50:47.383453 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_28273c51-8829-45f1-9edb-4f30a83b66e3/init-config-reloader/0.log" Dec 10 16:50:47 crc kubenswrapper[4755]: I1210 16:50:47.585654 4755 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_prometheus-metric-storage-0_28273c51-8829-45f1-9edb-4f30a83b66e3/thanos-sidecar/0.log" Dec 10 16:50:47 crc kubenswrapper[4755]: I1210 16:50:47.610030 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_28273c51-8829-45f1-9edb-4f30a83b66e3/init-config-reloader/0.log" Dec 10 16:50:47 crc kubenswrapper[4755]: I1210 16:50:47.630810 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_28273c51-8829-45f1-9edb-4f30a83b66e3/config-reloader/0.log" Dec 10 16:50:47 crc kubenswrapper[4755]: I1210 16:50:47.645762 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_28273c51-8829-45f1-9edb-4f30a83b66e3/prometheus/0.log" Dec 10 16:50:47 crc kubenswrapper[4755]: I1210 16:50:47.842134 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_c5084508-a21d-4f43-bc50-2f0c7f13edbe/setup-container/0.log" Dec 10 16:50:48 crc kubenswrapper[4755]: I1210 16:50:48.576702 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_c5084508-a21d-4f43-bc50-2f0c7f13edbe/rabbitmq/0.log" Dec 10 16:50:48 crc kubenswrapper[4755]: I1210 16:50:48.592526 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_c5084508-a21d-4f43-bc50-2f0c7f13edbe/setup-container/0.log" Dec 10 16:50:48 crc kubenswrapper[4755]: I1210 16:50:48.617842 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b7d822e6-7034-4d4d-b3b4-07ecee1fb7cd/setup-container/0.log" Dec 10 16:50:48 crc kubenswrapper[4755]: I1210 16:50:48.897905 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b7d822e6-7034-4d4d-b3b4-07ecee1fb7cd/setup-container/0.log" Dec 10 16:50:48 crc kubenswrapper[4755]: I1210 16:50:48.993129 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-7jwjf_7c07d7db-e17e-446c-9576-8baae941768e/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Dec 10 16:50:48 crc kubenswrapper[4755]: I1210 16:50:48.995736 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b7d822e6-7034-4d4d-b3b4-07ecee1fb7cd/rabbitmq/0.log" Dec 10 16:50:49 crc kubenswrapper[4755]: I1210 16:50:49.233486 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-sntj7_0700ff42-76b3-4d25-aa15-323a506bb50b/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Dec 10 16:50:49 crc kubenswrapper[4755]: I1210 16:50:49.358760 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5b8c78b5dc-vl479_77a6e4fa-6291-41fc-a165-9fe6d6039810/proxy-server/0.log" Dec 10 16:50:49 crc kubenswrapper[4755]: I1210 16:50:49.489486 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5b8c78b5dc-vl479_77a6e4fa-6291-41fc-a165-9fe6d6039810/proxy-httpd/0.log" Dec 10 16:50:49 crc kubenswrapper[4755]: I1210 16:50:49.505673 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-qg2cr_aff2e950-1295-4b9e-996a-f9a6c4a1dedd/swift-ring-rebalance/0.log" Dec 10 16:50:49 crc kubenswrapper[4755]: I1210 16:50:49.728285 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_72a1cce7-93cb-4fe1-9d12-3d4e19692457/account-auditor/0.log" Dec 10 16:50:49 crc 
kubenswrapper[4755]: I1210 16:50:49.746447 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_72a1cce7-93cb-4fe1-9d12-3d4e19692457/account-reaper/0.log" Dec 10 16:50:49 crc kubenswrapper[4755]: I1210 16:50:49.785014 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_72a1cce7-93cb-4fe1-9d12-3d4e19692457/account-replicator/0.log" Dec 10 16:50:49 crc kubenswrapper[4755]: I1210 16:50:49.830778 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_72a1cce7-93cb-4fe1-9d12-3d4e19692457/account-server/0.log" Dec 10 16:50:49 crc kubenswrapper[4755]: I1210 16:50:49.907933 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_72a1cce7-93cb-4fe1-9d12-3d4e19692457/container-auditor/0.log" Dec 10 16:50:49 crc kubenswrapper[4755]: I1210 16:50:49.993258 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_72a1cce7-93cb-4fe1-9d12-3d4e19692457/container-server/0.log" Dec 10 16:50:50 crc kubenswrapper[4755]: I1210 16:50:50.023653 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_72a1cce7-93cb-4fe1-9d12-3d4e19692457/container-replicator/0.log" Dec 10 16:50:50 crc kubenswrapper[4755]: I1210 16:50:50.080343 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_72a1cce7-93cb-4fe1-9d12-3d4e19692457/container-updater/0.log" Dec 10 16:50:50 crc kubenswrapper[4755]: I1210 16:50:50.111303 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_72a1cce7-93cb-4fe1-9d12-3d4e19692457/object-auditor/0.log" Dec 10 16:50:50 crc kubenswrapper[4755]: I1210 16:50:50.218710 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_72a1cce7-93cb-4fe1-9d12-3d4e19692457/object-replicator/0.log" Dec 10 16:50:50 crc kubenswrapper[4755]: I1210 16:50:50.226383 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_72a1cce7-93cb-4fe1-9d12-3d4e19692457/object-expirer/0.log" Dec 10 16:50:50 crc kubenswrapper[4755]: I1210 16:50:50.303424 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_72a1cce7-93cb-4fe1-9d12-3d4e19692457/object-server/0.log" Dec 10 16:50:50 crc kubenswrapper[4755]: I1210 16:50:50.346064 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_72a1cce7-93cb-4fe1-9d12-3d4e19692457/object-updater/0.log" Dec 10 16:50:50 crc kubenswrapper[4755]: I1210 16:50:50.440957 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_72a1cce7-93cb-4fe1-9d12-3d4e19692457/swift-recon-cron/0.log" Dec 10 16:50:50 crc kubenswrapper[4755]: I1210 16:50:50.467197 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_72a1cce7-93cb-4fe1-9d12-3d4e19692457/rsync/0.log" Dec 10 16:50:53 crc kubenswrapper[4755]: I1210 16:50:53.050324 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6d104bea-ecdc-4fe1-9861-fb1a19fce845","Type":"ContainerStarted","Data":"937670769d2d7529d2232618b4b777e234c0acdee1e40e998d7ff3dee0b8c850"} Dec 10 16:50:53 crc kubenswrapper[4755]: I1210 16:50:53.079782 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.8012774499999997 podStartE2EDuration="1h2m38.079760748s" podCreationTimestamp="2025-12-10 15:48:15 +0000 
UTC" firstStartedPulling="2025-12-10 15:48:16.467448042 +0000 UTC m=+1493.068331684" lastFinishedPulling="2025-12-10 16:50:51.74593135 +0000 UTC m=+5248.346814982" observedRunningTime="2025-12-10 16:50:53.075785719 +0000 UTC m=+5249.676669351" watchObservedRunningTime="2025-12-10 16:50:53.079760748 +0000 UTC m=+5249.680644380" Dec 10 16:50:57 crc kubenswrapper[4755]: I1210 16:50:57.256077 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_b6fc3b5b-a2c7-404f-8435-6c2a72d2c4a8/memcached/0.log" Dec 10 16:50:59 crc kubenswrapper[4755]: I1210 16:50:59.758463 4755 scope.go:117] "RemoveContainer" containerID="c02a5ae0f2b694ce1165db44430b56590e65e71f05780d61256f730ed3b1326e" Dec 10 16:50:59 crc kubenswrapper[4755]: E1210 16:50:59.759336 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:51:11 crc kubenswrapper[4755]: I1210 16:51:11.100612 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-api-0" Dec 10 16:51:12 crc kubenswrapper[4755]: I1210 16:51:12.757330 4755 scope.go:117] "RemoveContainer" containerID="c02a5ae0f2b694ce1165db44430b56590e65e71f05780d61256f730ed3b1326e" Dec 10 16:51:12 crc kubenswrapper[4755]: E1210 16:51:12.757931 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:51:19 crc kubenswrapper[4755]: I1210 16:51:19.615315 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_147d6c35b26de94843aae2cc16def28bc6b9292bfcf7a2079ec0c049657wkgl_42436916-b7c2-4531-ada8-a5590d158fe9/util/0.log" Dec 10 16:51:19 crc kubenswrapper[4755]: I1210 16:51:19.821421 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_147d6c35b26de94843aae2cc16def28bc6b9292bfcf7a2079ec0c049657wkgl_42436916-b7c2-4531-ada8-a5590d158fe9/pull/0.log" Dec 10 16:51:19 crc kubenswrapper[4755]: I1210 16:51:19.835818 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_147d6c35b26de94843aae2cc16def28bc6b9292bfcf7a2079ec0c049657wkgl_42436916-b7c2-4531-ada8-a5590d158fe9/util/0.log" Dec 10 16:51:19 crc kubenswrapper[4755]: I1210 16:51:19.902906 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_147d6c35b26de94843aae2cc16def28bc6b9292bfcf7a2079ec0c049657wkgl_42436916-b7c2-4531-ada8-a5590d158fe9/pull/0.log" Dec 10 16:51:20 crc kubenswrapper[4755]: I1210 16:51:20.054999 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_147d6c35b26de94843aae2cc16def28bc6b9292bfcf7a2079ec0c049657wkgl_42436916-b7c2-4531-ada8-a5590d158fe9/util/0.log" Dec 10 16:51:20 crc kubenswrapper[4755]: I1210 16:51:20.062718 4755 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_147d6c35b26de94843aae2cc16def28bc6b9292bfcf7a2079ec0c049657wkgl_42436916-b7c2-4531-ada8-a5590d158fe9/pull/0.log" Dec 10 16:51:20 crc kubenswrapper[4755]: I1210 16:51:20.080343 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_147d6c35b26de94843aae2cc16def28bc6b9292bfcf7a2079ec0c049657wkgl_42436916-b7c2-4531-ada8-a5590d158fe9/extract/0.log" Dec 10 16:51:20 crc kubenswrapper[4755]: I1210 16:51:20.231639 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-bk4xd_83bd67ec-3fa0-4f1e-9f87-7005f731f7e4/kube-rbac-proxy/0.log" Dec 10 16:51:20 crc kubenswrapper[4755]: I1210 16:51:20.327583 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-bk4xd_83bd67ec-3fa0-4f1e-9f87-7005f731f7e4/manager/0.log" Dec 10 16:51:20 crc kubenswrapper[4755]: I1210 16:51:20.358148 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6c677c69b-ljn8k_313bb539-c9d7-4bb0-a5e3-3a36c45c0f79/kube-rbac-proxy/0.log" Dec 10 16:51:20 crc kubenswrapper[4755]: I1210 16:51:20.475112 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6c677c69b-ljn8k_313bb539-c9d7-4bb0-a5e3-3a36c45c0f79/manager/0.log" Dec 10 16:51:20 crc kubenswrapper[4755]: I1210 16:51:20.547148 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-697fb699cf-z5frc_05b2a283-f9ce-4cbb-a92f-a22a227de36d/kube-rbac-proxy/0.log" Dec 10 16:51:20 crc kubenswrapper[4755]: I1210 16:51:20.560967 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-697fb699cf-z5frc_05b2a283-f9ce-4cbb-a92f-a22a227de36d/manager/0.log" Dec 10 16:51:20 crc kubenswrapper[4755]: I1210 16:51:20.741362 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-5697bb5779-rjdmk_8bcd3e35-31c8-4dbc-96e1-e6f4b486f082/kube-rbac-proxy/0.log" Dec 10 16:51:20 crc kubenswrapper[4755]: I1210 16:51:20.803267 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-5697bb5779-rjdmk_8bcd3e35-31c8-4dbc-96e1-e6f4b486f082/manager/0.log" Dec 10 16:51:20 crc kubenswrapper[4755]: I1210 16:51:20.883338 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-vqgpv_ab09fdaf-b326-4221-a24c-9415dabdbcdd/kube-rbac-proxy/0.log" Dec 10 16:51:20 crc kubenswrapper[4755]: I1210 16:51:20.962966 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-vqgpv_ab09fdaf-b326-4221-a24c-9415dabdbcdd/manager/0.log" Dec 10 16:51:20 crc kubenswrapper[4755]: I1210 16:51:20.997615 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-bs4zx_a07fdc07-16fa-4834-b370-378b543dde9f/kube-rbac-proxy/0.log" Dec 10 16:51:21 crc kubenswrapper[4755]: I1210 16:51:21.064455 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-bs4zx_a07fdc07-16fa-4834-b370-378b543dde9f/manager/0.log" Dec 10 16:51:21 crc kubenswrapper[4755]: I1210 16:51:21.250313 4755 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-78d48bff9d-wsxsj_423be682-6135-4dd2-8366-b7106adbc632/kube-rbac-proxy/0.log" Dec 10 16:51:21 crc kubenswrapper[4755]: I1210 16:51:21.418797 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-967d97867-h8w5g_359a4730-4858-4677-9977-a9d6cea57122/kube-rbac-proxy/0.log" Dec 10 16:51:21 crc kubenswrapper[4755]: I1210 16:51:21.535454 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-967d97867-h8w5g_359a4730-4858-4677-9977-a9d6cea57122/manager/0.log" Dec 10 16:51:21 crc kubenswrapper[4755]: I1210 16:51:21.545477 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-78d48bff9d-wsxsj_423be682-6135-4dd2-8366-b7106adbc632/manager/0.log" Dec 10 16:51:21 crc kubenswrapper[4755]: I1210 16:51:21.639555 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-t7zjt_8bc636b5-ac4d-4b4e-8b50-102a72e6ee2a/kube-rbac-proxy/0.log" Dec 10 16:51:21 crc kubenswrapper[4755]: I1210 16:51:21.827332 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-t7zjt_8bc636b5-ac4d-4b4e-8b50-102a72e6ee2a/manager/0.log" Dec 10 16:51:21 crc kubenswrapper[4755]: I1210 16:51:21.841599 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5b5fd79c9c-5kfxq_2a918143-c2cf-4c73-b547-c8d0d9c6e2a6/kube-rbac-proxy/0.log" Dec 10 16:51:21 crc kubenswrapper[4755]: I1210 16:51:21.854339 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5b5fd79c9c-5kfxq_2a918143-c2cf-4c73-b547-c8d0d9c6e2a6/manager/0.log" Dec 10 16:51:22 crc kubenswrapper[4755]: I1210 16:51:22.011675 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-79c8c4686c-6vjxq_e4692dc7-ecb8-45b5-be03-9990c0a32b2a/kube-rbac-proxy/0.log" Dec 10 16:51:22 crc kubenswrapper[4755]: I1210 16:51:22.060446 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-79c8c4686c-6vjxq_e4692dc7-ecb8-45b5-be03-9990c0a32b2a/manager/0.log" Dec 10 16:51:22 crc kubenswrapper[4755]: I1210 16:51:22.207023 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-bgxgp_46715591-f787-42bc-9871-a51b08963893/kube-rbac-proxy/0.log" Dec 10 16:51:22 crc kubenswrapper[4755]: I1210 16:51:22.289950 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-bgxgp_46715591-f787-42bc-9871-a51b08963893/manager/0.log" Dec 10 16:51:22 crc kubenswrapper[4755]: I1210 16:51:22.325019 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-xtr7m_d3b1545f-1f46-4869-bc92-cdc7e5b1fc4c/kube-rbac-proxy/0.log" Dec 10 16:51:22 crc kubenswrapper[4755]: I1210 16:51:22.508485 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-xtr7m_d3b1545f-1f46-4869-bc92-cdc7e5b1fc4c/manager/0.log" Dec 10 16:51:22 crc kubenswrapper[4755]: I1210 
16:51:22.541712 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-pxstj_10728d77-c715-4cb1-ab30-5747594a6320/manager/0.log" Dec 10 16:51:22 crc kubenswrapper[4755]: I1210 16:51:22.543629 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-pxstj_10728d77-c715-4cb1-ab30-5747594a6320/kube-rbac-proxy/0.log" Dec 10 16:51:22 crc kubenswrapper[4755]: I1210 16:51:22.727196 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-84b575879fh778d_0bcf5a92-0324-4799-be55-0e49bd060ee7/manager/0.log" Dec 10 16:51:22 crc kubenswrapper[4755]: I1210 16:51:22.802801 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-84b575879fh778d_0bcf5a92-0324-4799-be55-0e49bd060ee7/kube-rbac-proxy/0.log" Dec 10 16:51:23 crc kubenswrapper[4755]: I1210 16:51:23.172553 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-6899b76-8jwvz_4b263601-3c4c-48b8-a169-7e3caaa77be2/operator/0.log" Dec 10 16:51:23 crc kubenswrapper[4755]: I1210 16:51:23.214335 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-5gf5f_be7f5dba-16d0-45c6-a8df-f978e3042232/registry-server/0.log" Dec 10 16:51:23 crc kubenswrapper[4755]: I1210 16:51:23.429736 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-xpv7s_67e9d86f-4e93-4e78-a9d5-d8023721414d/kube-rbac-proxy/0.log" Dec 10 16:51:23 crc kubenswrapper[4755]: I1210 16:51:23.478047 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-xpv7s_67e9d86f-4e93-4e78-a9d5-d8023721414d/manager/0.log" Dec 10 16:51:23 crc kubenswrapper[4755]: I1210 16:51:23.590983 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-lbj4z_ebb31199-21f8-4493-8725-1c5e1aa70d66/kube-rbac-proxy/0.log" Dec 10 16:51:23 crc kubenswrapper[4755]: I1210 16:51:23.727434 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-lbj4z_ebb31199-21f8-4493-8725-1c5e1aa70d66/manager/0.log" Dec 10 16:51:23 crc kubenswrapper[4755]: I1210 16:51:23.727890 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-zz7fk_fbaec88b-8593-468f-aefc-777f8140504d/operator/0.log" Dec 10 16:51:23 crc kubenswrapper[4755]: I1210 16:51:23.971605 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-9d58d64bc-jzkms_f0af4059-171e-409f-8043-8f112664e01c/kube-rbac-proxy/0.log" Dec 10 16:51:24 crc kubenswrapper[4755]: I1210 16:51:24.015457 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-656546cb8f-64wpq_3fe0f8bf-8203-4cbb-b474-d00be4716ff5/manager/0.log" Dec 10 16:51:24 crc kubenswrapper[4755]: I1210 16:51:24.022549 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-9d58d64bc-jzkms_f0af4059-171e-409f-8043-8f112664e01c/manager/0.log" Dec 10 16:51:24 crc 
kubenswrapper[4755]: I1210 16:51:24.128124 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5f8d644d5d-87jfq_e36da2bb-2cc5-4a66-97f3-ace6966152fb/kube-rbac-proxy/0.log" Dec 10 16:51:24 crc kubenswrapper[4755]: I1210 16:51:24.271131 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-wbch9_91969126-0986-41a4-8d56-19b071710ca8/kube-rbac-proxy/0.log" Dec 10 16:51:24 crc kubenswrapper[4755]: I1210 16:51:24.291877 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-wbch9_91969126-0986-41a4-8d56-19b071710ca8/manager/0.log" Dec 10 16:51:24 crc kubenswrapper[4755]: I1210 16:51:24.458026 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-75944c9b7-qrhhh_15009193-27b2-4bf2-a795-f6106327e331/kube-rbac-proxy/0.log" Dec 10 16:51:24 crc kubenswrapper[4755]: I1210 16:51:24.514083 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-75944c9b7-qrhhh_15009193-27b2-4bf2-a795-f6106327e331/manager/0.log" Dec 10 16:51:24 crc kubenswrapper[4755]: I1210 16:51:24.758510 4755 scope.go:117] "RemoveContainer" containerID="c02a5ae0f2b694ce1165db44430b56590e65e71f05780d61256f730ed3b1326e" Dec 10 16:51:24 crc kubenswrapper[4755]: E1210 16:51:24.758740 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:51:24 crc kubenswrapper[4755]: I1210 16:51:24.808793 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5f8d644d5d-87jfq_e36da2bb-2cc5-4a66-97f3-ace6966152fb/manager/0.log" Dec 10 16:51:35 crc kubenswrapper[4755]: I1210 16:51:35.757506 4755 scope.go:117] "RemoveContainer" containerID="c02a5ae0f2b694ce1165db44430b56590e65e71f05780d61256f730ed3b1326e" Dec 10 16:51:35 crc kubenswrapper[4755]: E1210 16:51:35.758143 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:51:47 crc kubenswrapper[4755]: I1210 16:51:47.676990 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-sdrvv_d91d7b48-9096-4f3e-8260-2d762173eb80/control-plane-machine-set-operator/0.log" Dec 10 16:51:47 crc kubenswrapper[4755]: I1210 16:51:47.815344 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-n66x6_228e9f52-aead-4cf5-af32-8b0b3aec8cf4/kube-rbac-proxy/0.log" Dec 10 16:51:47 crc kubenswrapper[4755]: I1210 16:51:47.875830 4755 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-n66x6_228e9f52-aead-4cf5-af32-8b0b3aec8cf4/machine-api-operator/0.log" Dec 10 16:51:50 crc kubenswrapper[4755]: I1210 16:51:50.308513 4755 scope.go:117] "RemoveContainer" containerID="c02a5ae0f2b694ce1165db44430b56590e65e71f05780d61256f730ed3b1326e" Dec 10 16:51:50 crc kubenswrapper[4755]: E1210 16:51:50.309138 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:52:00 crc kubenswrapper[4755]: I1210 16:52:00.758209 4755 scope.go:117] "RemoveContainer" containerID="c02a5ae0f2b694ce1165db44430b56590e65e71f05780d61256f730ed3b1326e" Dec 10 16:52:00 crc kubenswrapper[4755]: E1210 16:52:00.758941 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:52:04 crc kubenswrapper[4755]: I1210 16:52:04.084118 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-mpd5f_a03db35b-ea91-49fd-8658-7af8b10d927e/cert-manager-controller/0.log" Dec 10 16:52:04 crc kubenswrapper[4755]: I1210 16:52:04.233156 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-ktrqg_43eded5c-00d1-4ae6-b30e-2c0c8d521325/cert-manager-cainjector/0.log" Dec 10 16:52:04 crc kubenswrapper[4755]: I1210 16:52:04.299770 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-58jbs_d45a9cb9-b26c-4eb4-a597-24366eab31b6/cert-manager-webhook/0.log" Dec 10 16:52:13 crc kubenswrapper[4755]: I1210 16:52:13.765323 4755 scope.go:117] "RemoveContainer" containerID="c02a5ae0f2b694ce1165db44430b56590e65e71f05780d61256f730ed3b1326e" Dec 10 16:52:13 crc kubenswrapper[4755]: E1210 16:52:13.766139 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:52:17 crc kubenswrapper[4755]: I1210 16:52:17.853574 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7fbb5f6569-kb7wz_75ec7fc8-4770-41e7-9d6a-d9a43d832125/nmstate-console-plugin/0.log" Dec 10 16:52:18 crc kubenswrapper[4755]: I1210 16:52:18.050995 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-jjt7x_31a7ad79-a502-42d0-ab81-6e22092f7c9e/nmstate-handler/0.log" Dec 10 16:52:18 crc kubenswrapper[4755]: I1210 16:52:18.142983 4755 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-2lfwl_03a03d51-0d67-4cf5-b102-74f7d787298e/kube-rbac-proxy/0.log" Dec 10 16:52:18 crc kubenswrapper[4755]: I1210 16:52:18.177037 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-2lfwl_03a03d51-0d67-4cf5-b102-74f7d787298e/nmstate-metrics/0.log" Dec 10 16:52:18 crc kubenswrapper[4755]: I1210 16:52:18.312570 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-5b5b58f5c8-v9zk6_a0cfbb00-d9e1-46c2-a3b7-f6f5fc8c95c2/nmstate-operator/0.log" Dec 10 16:52:18 crc kubenswrapper[4755]: I1210 16:52:18.387645 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f6d4c5ccb-8mfv6_a1d36f33-a6fd-4c6c-9739-fb7bfc94ca98/nmstate-webhook/0.log" Dec 10 16:52:22 crc kubenswrapper[4755]: I1210 16:52:22.768707 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4nkbw"] Dec 10 16:52:22 crc kubenswrapper[4755]: I1210 16:52:22.790154 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4nkbw" Dec 10 16:52:22 crc kubenswrapper[4755]: I1210 16:52:22.792601 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4nkbw"] Dec 10 16:52:22 crc kubenswrapper[4755]: I1210 16:52:22.890115 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f4366b7-c8c9-46af-a5fa-0448ac3aadf4-utilities\") pod \"community-operators-4nkbw\" (UID: \"5f4366b7-c8c9-46af-a5fa-0448ac3aadf4\") " pod="openshift-marketplace/community-operators-4nkbw" Dec 10 16:52:22 crc kubenswrapper[4755]: I1210 16:52:22.890705 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5l7tm\" (UniqueName: \"kubernetes.io/projected/5f4366b7-c8c9-46af-a5fa-0448ac3aadf4-kube-api-access-5l7tm\") pod \"community-operators-4nkbw\" (UID: \"5f4366b7-c8c9-46af-a5fa-0448ac3aadf4\") " pod="openshift-marketplace/community-operators-4nkbw" Dec 10 16:52:22 crc kubenswrapper[4755]: I1210 16:52:22.891089 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f4366b7-c8c9-46af-a5fa-0448ac3aadf4-catalog-content\") pod \"community-operators-4nkbw\" (UID: \"5f4366b7-c8c9-46af-a5fa-0448ac3aadf4\") " pod="openshift-marketplace/community-operators-4nkbw" Dec 10 16:52:22 crc kubenswrapper[4755]: I1210 16:52:22.993699 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f4366b7-c8c9-46af-a5fa-0448ac3aadf4-catalog-content\") pod \"community-operators-4nkbw\" (UID: \"5f4366b7-c8c9-46af-a5fa-0448ac3aadf4\") " pod="openshift-marketplace/community-operators-4nkbw" Dec 10 16:52:22 crc kubenswrapper[4755]: I1210 16:52:22.993911 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f4366b7-c8c9-46af-a5fa-0448ac3aadf4-utilities\") pod \"community-operators-4nkbw\" (UID: \"5f4366b7-c8c9-46af-a5fa-0448ac3aadf4\") " pod="openshift-marketplace/community-operators-4nkbw" Dec 10 16:52:22 crc kubenswrapper[4755]: I1210 16:52:22.993974 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-5l7tm\" (UniqueName: \"kubernetes.io/projected/5f4366b7-c8c9-46af-a5fa-0448ac3aadf4-kube-api-access-5l7tm\") pod \"community-operators-4nkbw\" (UID: \"5f4366b7-c8c9-46af-a5fa-0448ac3aadf4\") " pod="openshift-marketplace/community-operators-4nkbw" Dec 10 16:52:22 crc kubenswrapper[4755]: I1210 16:52:22.994252 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f4366b7-c8c9-46af-a5fa-0448ac3aadf4-catalog-content\") pod \"community-operators-4nkbw\" (UID: \"5f4366b7-c8c9-46af-a5fa-0448ac3aadf4\") " pod="openshift-marketplace/community-operators-4nkbw" Dec 10 16:52:22 crc kubenswrapper[4755]: I1210 16:52:22.994339 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f4366b7-c8c9-46af-a5fa-0448ac3aadf4-utilities\") pod \"community-operators-4nkbw\" (UID: \"5f4366b7-c8c9-46af-a5fa-0448ac3aadf4\") " pod="openshift-marketplace/community-operators-4nkbw" Dec 10 16:52:23 crc kubenswrapper[4755]: I1210 16:52:23.094965 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5l7tm\" (UniqueName: \"kubernetes.io/projected/5f4366b7-c8c9-46af-a5fa-0448ac3aadf4-kube-api-access-5l7tm\") pod \"community-operators-4nkbw\" (UID: \"5f4366b7-c8c9-46af-a5fa-0448ac3aadf4\") " pod="openshift-marketplace/community-operators-4nkbw" Dec 10 16:52:23 crc kubenswrapper[4755]: I1210 16:52:23.124315 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4nkbw" Dec 10 16:52:23 crc kubenswrapper[4755]: I1210 16:52:23.744246 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4nkbw"] Dec 10 16:52:24 crc kubenswrapper[4755]: I1210 16:52:24.733024 4755 generic.go:334] "Generic (PLEG): container finished" podID="5f4366b7-c8c9-46af-a5fa-0448ac3aadf4" containerID="70b783833605e770f28f5b7978f9c033a7830d1f592fc19ca8cf79a27a597e8f" exitCode=0 Dec 10 16:52:24 crc kubenswrapper[4755]: I1210 16:52:24.733135 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4nkbw" event={"ID":"5f4366b7-c8c9-46af-a5fa-0448ac3aadf4","Type":"ContainerDied","Data":"70b783833605e770f28f5b7978f9c033a7830d1f592fc19ca8cf79a27a597e8f"} Dec 10 16:52:24 crc kubenswrapper[4755]: I1210 16:52:24.733597 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4nkbw" event={"ID":"5f4366b7-c8c9-46af-a5fa-0448ac3aadf4","Type":"ContainerStarted","Data":"17c2423ac4c7ce73295015e169e05f81fdc23fb65a405ea6eda9b3e476f737bc"} Dec 10 16:52:25 crc kubenswrapper[4755]: I1210 16:52:25.745185 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4nkbw" event={"ID":"5f4366b7-c8c9-46af-a5fa-0448ac3aadf4","Type":"ContainerStarted","Data":"aae9831892a6095efe81c610f88ce117b7fd84726cd182f179b22718fd5dcd81"} Dec 10 16:52:26 crc kubenswrapper[4755]: I1210 16:52:26.764935 4755 generic.go:334] "Generic (PLEG): container finished" podID="5f4366b7-c8c9-46af-a5fa-0448ac3aadf4" containerID="aae9831892a6095efe81c610f88ce117b7fd84726cd182f179b22718fd5dcd81" exitCode=0 Dec 10 16:52:26 crc kubenswrapper[4755]: I1210 16:52:26.764995 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4nkbw" 
event={"ID":"5f4366b7-c8c9-46af-a5fa-0448ac3aadf4","Type":"ContainerDied","Data":"aae9831892a6095efe81c610f88ce117b7fd84726cd182f179b22718fd5dcd81"} Dec 10 16:52:27 crc kubenswrapper[4755]: I1210 16:52:27.758655 4755 scope.go:117] "RemoveContainer" containerID="c02a5ae0f2b694ce1165db44430b56590e65e71f05780d61256f730ed3b1326e" Dec 10 16:52:27 crc kubenswrapper[4755]: E1210 16:52:27.759727 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:52:27 crc kubenswrapper[4755]: I1210 16:52:27.778322 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4nkbw" event={"ID":"5f4366b7-c8c9-46af-a5fa-0448ac3aadf4","Type":"ContainerStarted","Data":"c251a6c3eed3e5c9e81d709c320fb94511f09ae3446bf49163e2f30eea6f741a"} Dec 10 16:52:27 crc kubenswrapper[4755]: I1210 16:52:27.821457 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4nkbw" podStartSLOduration=3.295764804 podStartE2EDuration="5.821425608s" podCreationTimestamp="2025-12-10 16:52:22 +0000 UTC" firstStartedPulling="2025-12-10 16:52:24.738366035 +0000 UTC m=+5341.339249707" lastFinishedPulling="2025-12-10 16:52:27.264026879 +0000 UTC m=+5343.864910511" observedRunningTime="2025-12-10 16:52:27.803385837 +0000 UTC m=+5344.404269499" watchObservedRunningTime="2025-12-10 16:52:27.821425608 +0000 UTC m=+5344.422309250" Dec 10 16:52:33 crc kubenswrapper[4755]: I1210 16:52:33.124444 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4nkbw" Dec 10 16:52:33 crc kubenswrapper[4755]: I1210 16:52:33.125077 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4nkbw" Dec 10 16:52:33 crc kubenswrapper[4755]: I1210 16:52:33.180651 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4nkbw" Dec 10 16:52:33 crc kubenswrapper[4755]: I1210 16:52:33.897796 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4nkbw" Dec 10 16:52:33 crc kubenswrapper[4755]: I1210 16:52:33.946997 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4nkbw"] Dec 10 16:52:34 crc kubenswrapper[4755]: I1210 16:52:34.173960 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-558c4df967-tdf8t_57ef8333-3c3c-4e02-ad27-24ccac555a55/kube-rbac-proxy/0.log" Dec 10 16:52:34 crc kubenswrapper[4755]: I1210 16:52:34.184914 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-558c4df967-tdf8t_57ef8333-3c3c-4e02-ad27-24ccac555a55/manager/0.log" Dec 10 16:52:35 crc kubenswrapper[4755]: I1210 16:52:35.857336 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4nkbw" podUID="5f4366b7-c8c9-46af-a5fa-0448ac3aadf4" containerName="registry-server" 
containerID="cri-o://c251a6c3eed3e5c9e81d709c320fb94511f09ae3446bf49163e2f30eea6f741a" gracePeriod=2 Dec 10 16:52:36 crc kubenswrapper[4755]: I1210 16:52:36.868440 4755 generic.go:334] "Generic (PLEG): container finished" podID="5f4366b7-c8c9-46af-a5fa-0448ac3aadf4" containerID="c251a6c3eed3e5c9e81d709c320fb94511f09ae3446bf49163e2f30eea6f741a" exitCode=0 Dec 10 16:52:36 crc kubenswrapper[4755]: I1210 16:52:36.868510 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4nkbw" event={"ID":"5f4366b7-c8c9-46af-a5fa-0448ac3aadf4","Type":"ContainerDied","Data":"c251a6c3eed3e5c9e81d709c320fb94511f09ae3446bf49163e2f30eea6f741a"} Dec 10 16:52:36 crc kubenswrapper[4755]: I1210 16:52:36.868879 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4nkbw" event={"ID":"5f4366b7-c8c9-46af-a5fa-0448ac3aadf4","Type":"ContainerDied","Data":"17c2423ac4c7ce73295015e169e05f81fdc23fb65a405ea6eda9b3e476f737bc"} Dec 10 16:52:36 crc kubenswrapper[4755]: I1210 16:52:36.868909 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17c2423ac4c7ce73295015e169e05f81fdc23fb65a405ea6eda9b3e476f737bc" Dec 10 16:52:36 crc kubenswrapper[4755]: I1210 16:52:36.943793 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4nkbw" Dec 10 16:52:36 crc kubenswrapper[4755]: I1210 16:52:36.994845 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f4366b7-c8c9-46af-a5fa-0448ac3aadf4-utilities\") pod \"5f4366b7-c8c9-46af-a5fa-0448ac3aadf4\" (UID: \"5f4366b7-c8c9-46af-a5fa-0448ac3aadf4\") " Dec 10 16:52:36 crc kubenswrapper[4755]: I1210 16:52:36.995099 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f4366b7-c8c9-46af-a5fa-0448ac3aadf4-catalog-content\") pod \"5f4366b7-c8c9-46af-a5fa-0448ac3aadf4\" (UID: \"5f4366b7-c8c9-46af-a5fa-0448ac3aadf4\") " Dec 10 16:52:36 crc kubenswrapper[4755]: I1210 16:52:36.995208 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5l7tm\" (UniqueName: \"kubernetes.io/projected/5f4366b7-c8c9-46af-a5fa-0448ac3aadf4-kube-api-access-5l7tm\") pod \"5f4366b7-c8c9-46af-a5fa-0448ac3aadf4\" (UID: \"5f4366b7-c8c9-46af-a5fa-0448ac3aadf4\") " Dec 10 16:52:36 crc kubenswrapper[4755]: I1210 16:52:36.996325 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f4366b7-c8c9-46af-a5fa-0448ac3aadf4-utilities" (OuterVolumeSpecName: "utilities") pod "5f4366b7-c8c9-46af-a5fa-0448ac3aadf4" (UID: "5f4366b7-c8c9-46af-a5fa-0448ac3aadf4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 16:52:37 crc kubenswrapper[4755]: I1210 16:52:37.041825 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f4366b7-c8c9-46af-a5fa-0448ac3aadf4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5f4366b7-c8c9-46af-a5fa-0448ac3aadf4" (UID: "5f4366b7-c8c9-46af-a5fa-0448ac3aadf4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 16:52:37 crc kubenswrapper[4755]: I1210 16:52:37.093789 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f4366b7-c8c9-46af-a5fa-0448ac3aadf4-kube-api-access-5l7tm" (OuterVolumeSpecName: "kube-api-access-5l7tm") pod "5f4366b7-c8c9-46af-a5fa-0448ac3aadf4" (UID: "5f4366b7-c8c9-46af-a5fa-0448ac3aadf4"). InnerVolumeSpecName "kube-api-access-5l7tm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 16:52:37 crc kubenswrapper[4755]: I1210 16:52:37.098054 4755 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f4366b7-c8c9-46af-a5fa-0448ac3aadf4-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 16:52:37 crc kubenswrapper[4755]: I1210 16:52:37.098083 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5l7tm\" (UniqueName: \"kubernetes.io/projected/5f4366b7-c8c9-46af-a5fa-0448ac3aadf4-kube-api-access-5l7tm\") on node \"crc\" DevicePath \"\"" Dec 10 16:52:37 crc kubenswrapper[4755]: I1210 16:52:37.098095 4755 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f4366b7-c8c9-46af-a5fa-0448ac3aadf4-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 16:52:37 crc kubenswrapper[4755]: I1210 16:52:37.881081 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4nkbw" Dec 10 16:52:37 crc kubenswrapper[4755]: I1210 16:52:37.920118 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4nkbw"] Dec 10 16:52:37 crc kubenswrapper[4755]: I1210 16:52:37.934632 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4nkbw"] Dec 10 16:52:39 crc kubenswrapper[4755]: I1210 16:52:39.758295 4755 scope.go:117] "RemoveContainer" containerID="c02a5ae0f2b694ce1165db44430b56590e65e71f05780d61256f730ed3b1326e" Dec 10 16:52:39 crc kubenswrapper[4755]: E1210 16:52:39.759189 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:52:39 crc kubenswrapper[4755]: I1210 16:52:39.786365 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f4366b7-c8c9-46af-a5fa-0448ac3aadf4" path="/var/lib/kubelet/pods/5f4366b7-c8c9-46af-a5fa-0448ac3aadf4/volumes" Dec 10 16:52:50 crc kubenswrapper[4755]: I1210 16:52:50.400314 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-qsjcn_0463accf-2a6b-41bb-a91c-7609e8ff9a00/kube-rbac-proxy/0.log" Dec 10 16:52:50 crc kubenswrapper[4755]: I1210 16:52:50.652003 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-qsjcn_0463accf-2a6b-41bb-a91c-7609e8ff9a00/controller/0.log" Dec 10 16:52:50 crc kubenswrapper[4755]: I1210 16:52:50.723762 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bddzj_68449e5b-980d-40dc-b54f-d1263755f703/cp-frr-files/0.log" Dec 10 16:52:50 crc kubenswrapper[4755]: I1210 16:52:50.893700 4755 log.go:25] "Finished 
parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bddzj_68449e5b-980d-40dc-b54f-d1263755f703/cp-reloader/0.log" Dec 10 16:52:50 crc kubenswrapper[4755]: I1210 16:52:50.899669 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bddzj_68449e5b-980d-40dc-b54f-d1263755f703/cp-frr-files/0.log" Dec 10 16:52:50 crc kubenswrapper[4755]: I1210 16:52:50.923692 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bddzj_68449e5b-980d-40dc-b54f-d1263755f703/cp-metrics/0.log" Dec 10 16:52:50 crc kubenswrapper[4755]: I1210 16:52:50.927346 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bddzj_68449e5b-980d-40dc-b54f-d1263755f703/cp-reloader/0.log" Dec 10 16:52:51 crc kubenswrapper[4755]: I1210 16:52:51.138126 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bddzj_68449e5b-980d-40dc-b54f-d1263755f703/cp-metrics/0.log" Dec 10 16:52:51 crc kubenswrapper[4755]: I1210 16:52:51.139430 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bddzj_68449e5b-980d-40dc-b54f-d1263755f703/cp-frr-files/0.log" Dec 10 16:52:51 crc kubenswrapper[4755]: I1210 16:52:51.153751 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bddzj_68449e5b-980d-40dc-b54f-d1263755f703/cp-metrics/0.log" Dec 10 16:52:51 crc kubenswrapper[4755]: I1210 16:52:51.154419 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bddzj_68449e5b-980d-40dc-b54f-d1263755f703/cp-reloader/0.log" Dec 10 16:52:51 crc kubenswrapper[4755]: I1210 16:52:51.325676 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bddzj_68449e5b-980d-40dc-b54f-d1263755f703/cp-frr-files/0.log" Dec 10 16:52:51 crc kubenswrapper[4755]: I1210 16:52:51.351648 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bddzj_68449e5b-980d-40dc-b54f-d1263755f703/controller/0.log" Dec 10 16:52:51 crc kubenswrapper[4755]: I1210 16:52:51.372098 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bddzj_68449e5b-980d-40dc-b54f-d1263755f703/cp-metrics/0.log" Dec 10 16:52:51 crc kubenswrapper[4755]: I1210 16:52:51.380432 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bddzj_68449e5b-980d-40dc-b54f-d1263755f703/cp-reloader/0.log" Dec 10 16:52:51 crc kubenswrapper[4755]: I1210 16:52:51.557918 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bddzj_68449e5b-980d-40dc-b54f-d1263755f703/frr-metrics/0.log" Dec 10 16:52:51 crc kubenswrapper[4755]: I1210 16:52:51.566885 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bddzj_68449e5b-980d-40dc-b54f-d1263755f703/kube-rbac-proxy-frr/0.log" Dec 10 16:52:51 crc kubenswrapper[4755]: I1210 16:52:51.570595 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bddzj_68449e5b-980d-40dc-b54f-d1263755f703/kube-rbac-proxy/0.log" Dec 10 16:52:51 crc kubenswrapper[4755]: I1210 16:52:51.757312 4755 scope.go:117] "RemoveContainer" containerID="c02a5ae0f2b694ce1165db44430b56590e65e71f05780d61256f730ed3b1326e" Dec 10 16:52:51 crc kubenswrapper[4755]: I1210 16:52:51.787592 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bddzj_68449e5b-980d-40dc-b54f-d1263755f703/reloader/0.log" Dec 10 16:52:51 crc kubenswrapper[4755]: I1210 16:52:51.808663 
4755 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7fcb986d4-xvxb9_d04b8edb-ca78-4d5d-9de7-11935b847af1/frr-k8s-webhook-server/0.log" Dec 10 16:52:52 crc kubenswrapper[4755]: I1210 16:52:52.022252 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-74cdb46bb8-59fxs_0f530a80-69ef-4c16-abb6-befe3285a8fd/manager/0.log" Dec 10 16:52:52 crc kubenswrapper[4755]: I1210 16:52:52.175210 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-69bc58cc7-kjrfc_9497c2dd-9f7a-4cf0-ab2b-5447fde7fc2a/webhook-server/0.log" Dec 10 16:52:52 crc kubenswrapper[4755]: I1210 16:52:52.281751 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-86dvw_72472af7-45ad-4637-b6ea-7c39bd98cfbf/kube-rbac-proxy/0.log" Dec 10 16:52:52 crc kubenswrapper[4755]: I1210 16:52:52.465716 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" event={"ID":"b132a8b9-1c99-414d-8773-229bf36b305d","Type":"ContainerStarted","Data":"89057655675f6d124f1d2ccf4bdb67361d7785d60846c227de326814a7f4d3a9"} Dec 10 16:52:53 crc kubenswrapper[4755]: I1210 16:52:53.133515 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-86dvw_72472af7-45ad-4637-b6ea-7c39bd98cfbf/speaker/0.log" Dec 10 16:52:53 crc kubenswrapper[4755]: I1210 16:52:53.377546 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bddzj_68449e5b-980d-40dc-b54f-d1263755f703/frr/0.log" Dec 10 16:53:08 crc kubenswrapper[4755]: I1210 16:53:08.283668 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773r7kx7_0713bf5f-e7a0-40b6-b1a9-001d4fb9d972/util/0.log" Dec 10 16:53:08 crc kubenswrapper[4755]: I1210 16:53:08.451139 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773r7kx7_0713bf5f-e7a0-40b6-b1a9-001d4fb9d972/util/0.log" Dec 10 16:53:08 crc kubenswrapper[4755]: I1210 16:53:08.462784 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773r7kx7_0713bf5f-e7a0-40b6-b1a9-001d4fb9d972/pull/0.log" Dec 10 16:53:08 crc kubenswrapper[4755]: I1210 16:53:08.533277 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773r7kx7_0713bf5f-e7a0-40b6-b1a9-001d4fb9d972/pull/0.log" Dec 10 16:53:08 crc kubenswrapper[4755]: I1210 16:53:08.660433 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773r7kx7_0713bf5f-e7a0-40b6-b1a9-001d4fb9d972/util/0.log" Dec 10 16:53:08 crc kubenswrapper[4755]: I1210 16:53:08.674500 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773r7kx7_0713bf5f-e7a0-40b6-b1a9-001d4fb9d972/extract/0.log" Dec 10 16:53:08 crc kubenswrapper[4755]: I1210 16:53:08.677682 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773r7kx7_0713bf5f-e7a0-40b6-b1a9-001d4fb9d972/pull/0.log" Dec 10 16:53:08 crc kubenswrapper[4755]: I1210 16:53:08.908422 4755 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8rvt9_1be44708-dd96-4718-b835-4b4a8b9e5b9f/util/0.log" Dec 10 16:53:09 crc kubenswrapper[4755]: I1210 16:53:09.035615 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8rvt9_1be44708-dd96-4718-b835-4b4a8b9e5b9f/pull/0.log" Dec 10 16:53:09 crc kubenswrapper[4755]: I1210 16:53:09.044945 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8rvt9_1be44708-dd96-4718-b835-4b4a8b9e5b9f/util/0.log" Dec 10 16:53:09 crc kubenswrapper[4755]: I1210 16:53:09.122708 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8rvt9_1be44708-dd96-4718-b835-4b4a8b9e5b9f/pull/0.log" Dec 10 16:53:09 crc kubenswrapper[4755]: I1210 16:53:09.238790 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8rvt9_1be44708-dd96-4718-b835-4b4a8b9e5b9f/extract/0.log" Dec 10 16:53:09 crc kubenswrapper[4755]: I1210 16:53:09.240037 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8rvt9_1be44708-dd96-4718-b835-4b4a8b9e5b9f/pull/0.log" Dec 10 16:53:09 crc kubenswrapper[4755]: I1210 16:53:09.247628 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f8rvt9_1be44708-dd96-4718-b835-4b4a8b9e5b9f/util/0.log" Dec 10 16:53:10 crc kubenswrapper[4755]: I1210 16:53:10.107822 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dn67g_995e5079-efb3-40a6-b2bd-4fa4e6f040c1/util/0.log" Dec 10 16:53:10 crc kubenswrapper[4755]: I1210 16:53:10.305788 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dn67g_995e5079-efb3-40a6-b2bd-4fa4e6f040c1/pull/0.log" Dec 10 16:53:10 crc kubenswrapper[4755]: I1210 16:53:10.346092 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dn67g_995e5079-efb3-40a6-b2bd-4fa4e6f040c1/util/0.log" Dec 10 16:53:10 crc kubenswrapper[4755]: I1210 16:53:10.433502 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dn67g_995e5079-efb3-40a6-b2bd-4fa4e6f040c1/pull/0.log" Dec 10 16:53:10 crc kubenswrapper[4755]: I1210 16:53:10.770881 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dn67g_995e5079-efb3-40a6-b2bd-4fa4e6f040c1/extract/0.log" Dec 10 16:53:10 crc kubenswrapper[4755]: I1210 16:53:10.781231 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dn67g_995e5079-efb3-40a6-b2bd-4fa4e6f040c1/pull/0.log" Dec 10 16:53:10 crc kubenswrapper[4755]: I1210 16:53:10.783887 4755 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dn67g_995e5079-efb3-40a6-b2bd-4fa4e6f040c1/util/0.log" Dec 10 16:53:10 crc kubenswrapper[4755]: I1210 16:53:10.952506 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c1t8zr8_54eaef18-b1aa-4151-99aa-9e758934bd5c/util/0.log" Dec 10 16:53:11 crc kubenswrapper[4755]: I1210 16:53:11.201852 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c1t8zr8_54eaef18-b1aa-4151-99aa-9e758934bd5c/pull/0.log" Dec 10 16:53:11 crc kubenswrapper[4755]: I1210 16:53:11.202055 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c1t8zr8_54eaef18-b1aa-4151-99aa-9e758934bd5c/pull/0.log" Dec 10 16:53:11 crc kubenswrapper[4755]: I1210 16:53:11.246840 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c1t8zr8_54eaef18-b1aa-4151-99aa-9e758934bd5c/util/0.log" Dec 10 16:53:11 crc kubenswrapper[4755]: I1210 16:53:11.395167 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c1t8zr8_54eaef18-b1aa-4151-99aa-9e758934bd5c/pull/0.log" Dec 10 16:53:11 crc kubenswrapper[4755]: I1210 16:53:11.412916 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c1t8zr8_54eaef18-b1aa-4151-99aa-9e758934bd5c/util/0.log" Dec 10 16:53:11 crc kubenswrapper[4755]: I1210 16:53:11.461498 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c1t8zr8_54eaef18-b1aa-4151-99aa-9e758934bd5c/extract/0.log" Dec 10 16:53:11 crc kubenswrapper[4755]: I1210 16:53:11.495318 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83q5l8d_e96c66c9-c2da-4ab5-95eb-3f8bbc6e8917/util/0.log" Dec 10 16:53:11 crc kubenswrapper[4755]: I1210 16:53:11.686416 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83q5l8d_e96c66c9-c2da-4ab5-95eb-3f8bbc6e8917/pull/0.log" Dec 10 16:53:11 crc kubenswrapper[4755]: I1210 16:53:11.698778 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83q5l8d_e96c66c9-c2da-4ab5-95eb-3f8bbc6e8917/pull/0.log" Dec 10 16:53:11 crc kubenswrapper[4755]: I1210 16:53:11.752696 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83q5l8d_e96c66c9-c2da-4ab5-95eb-3f8bbc6e8917/util/0.log" Dec 10 16:53:11 crc kubenswrapper[4755]: I1210 16:53:11.840856 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83q5l8d_e96c66c9-c2da-4ab5-95eb-3f8bbc6e8917/util/0.log" Dec 10 16:53:11 crc kubenswrapper[4755]: I1210 16:53:11.878809 4755 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83q5l8d_e96c66c9-c2da-4ab5-95eb-3f8bbc6e8917/extract/0.log" Dec 10 16:53:11 crc kubenswrapper[4755]: I1210 16:53:11.894688 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83q5l8d_e96c66c9-c2da-4ab5-95eb-3f8bbc6e8917/pull/0.log" Dec 10 16:53:11 crc kubenswrapper[4755]: I1210 16:53:11.965426 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-ltxsm_bf0eab0b-0b57-4c95-8edd-0b84d5f8e8f6/extract-utilities/0.log" Dec 10 16:53:12 crc kubenswrapper[4755]: I1210 16:53:12.158035 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-ltxsm_bf0eab0b-0b57-4c95-8edd-0b84d5f8e8f6/extract-content/0.log" Dec 10 16:53:12 crc kubenswrapper[4755]: I1210 16:53:12.162453 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-ltxsm_bf0eab0b-0b57-4c95-8edd-0b84d5f8e8f6/extract-content/0.log" Dec 10 16:53:12 crc kubenswrapper[4755]: I1210 16:53:12.170109 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-ltxsm_bf0eab0b-0b57-4c95-8edd-0b84d5f8e8f6/extract-utilities/0.log" Dec 10 16:53:12 crc kubenswrapper[4755]: I1210 16:53:12.390287 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-ltxsm_bf0eab0b-0b57-4c95-8edd-0b84d5f8e8f6/extract-utilities/0.log" Dec 10 16:53:12 crc kubenswrapper[4755]: I1210 16:53:12.393645 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-ltxsm_bf0eab0b-0b57-4c95-8edd-0b84d5f8e8f6/extract-content/0.log" Dec 10 16:53:12 crc kubenswrapper[4755]: I1210 16:53:12.457308 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-sx998_4bc52194-661c-4b4c-9642-b6b1706e2fd0/extract-utilities/0.log" Dec 10 16:53:12 crc kubenswrapper[4755]: I1210 16:53:12.694411 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-sx998_4bc52194-661c-4b4c-9642-b6b1706e2fd0/extract-utilities/0.log" Dec 10 16:53:12 crc kubenswrapper[4755]: I1210 16:53:12.712521 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-sx998_4bc52194-661c-4b4c-9642-b6b1706e2fd0/extract-content/0.log" Dec 10 16:53:12 crc kubenswrapper[4755]: I1210 16:53:12.743579 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-sx998_4bc52194-661c-4b4c-9642-b6b1706e2fd0/extract-content/0.log" Dec 10 16:53:12 crc kubenswrapper[4755]: I1210 16:53:12.941276 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-sx998_4bc52194-661c-4b4c-9642-b6b1706e2fd0/extract-utilities/0.log" Dec 10 16:53:12 crc kubenswrapper[4755]: I1210 16:53:12.985597 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-sx998_4bc52194-661c-4b4c-9642-b6b1706e2fd0/extract-content/0.log" Dec 10 16:53:13 crc kubenswrapper[4755]: I1210 16:53:13.158897 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-zlc5s_62b9cf5c-ad14-40aa-a245-027d775331d7/marketplace-operator/0.log" Dec 10 16:53:13 crc kubenswrapper[4755]: I1210 
16:53:13.293082 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-ltxsm_bf0eab0b-0b57-4c95-8edd-0b84d5f8e8f6/registry-server/0.log" Dec 10 16:53:13 crc kubenswrapper[4755]: I1210 16:53:13.394608 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pwfnv_c87a9c35-e8c9-42d7-9715-a1467d4f134e/extract-utilities/0.log" Dec 10 16:53:13 crc kubenswrapper[4755]: I1210 16:53:13.559311 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pwfnv_c87a9c35-e8c9-42d7-9715-a1467d4f134e/extract-content/0.log" Dec 10 16:53:13 crc kubenswrapper[4755]: I1210 16:53:13.562667 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pwfnv_c87a9c35-e8c9-42d7-9715-a1467d4f134e/extract-utilities/0.log" Dec 10 16:53:13 crc kubenswrapper[4755]: I1210 16:53:13.666431 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pwfnv_c87a9c35-e8c9-42d7-9715-a1467d4f134e/extract-content/0.log" Dec 10 16:53:13 crc kubenswrapper[4755]: I1210 16:53:13.828862 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-sx998_4bc52194-661c-4b4c-9642-b6b1706e2fd0/registry-server/0.log" Dec 10 16:53:13 crc kubenswrapper[4755]: I1210 16:53:13.900156 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pwfnv_c87a9c35-e8c9-42d7-9715-a1467d4f134e/extract-content/0.log" Dec 10 16:53:13 crc kubenswrapper[4755]: I1210 16:53:13.937223 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pwfnv_c87a9c35-e8c9-42d7-9715-a1467d4f134e/extract-utilities/0.log" Dec 10 16:53:14 crc kubenswrapper[4755]: I1210 16:53:14.023285 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lkzzd_3df44112-9453-4958-8a64-ce354428a949/extract-utilities/0.log" Dec 10 16:53:14 crc kubenswrapper[4755]: I1210 16:53:14.042094 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pwfnv_c87a9c35-e8c9-42d7-9715-a1467d4f134e/registry-server/0.log" Dec 10 16:53:14 crc kubenswrapper[4755]: I1210 16:53:14.254588 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lkzzd_3df44112-9453-4958-8a64-ce354428a949/extract-utilities/0.log" Dec 10 16:53:14 crc kubenswrapper[4755]: I1210 16:53:14.313010 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lkzzd_3df44112-9453-4958-8a64-ce354428a949/extract-content/0.log" Dec 10 16:53:14 crc kubenswrapper[4755]: I1210 16:53:14.316372 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lkzzd_3df44112-9453-4958-8a64-ce354428a949/extract-content/0.log" Dec 10 16:53:14 crc kubenswrapper[4755]: I1210 16:53:14.685144 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lkzzd_3df44112-9453-4958-8a64-ce354428a949/extract-utilities/0.log" Dec 10 16:53:14 crc kubenswrapper[4755]: I1210 16:53:14.731458 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lkzzd_3df44112-9453-4958-8a64-ce354428a949/extract-content/0.log" Dec 10 16:53:14 crc kubenswrapper[4755]: I1210 16:53:14.822267 4755 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lkzzd_3df44112-9453-4958-8a64-ce354428a949/registry-server/0.log" Dec 10 16:53:30 crc kubenswrapper[4755]: I1210 16:53:30.827954 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-668cf9dfbb-2tlgr_a03bc2c9-3ec6-4ca5-9181-88e9cb9fe2be/prometheus-operator/0.log" Dec 10 16:53:30 crc kubenswrapper[4755]: I1210 16:53:30.837227 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-f66fc9f65-4kjbh_95d667d8-b323-4d59-84e6-ffaa553526c7/prometheus-operator-admission-webhook/0.log" Dec 10 16:53:30 crc kubenswrapper[4755]: I1210 16:53:30.986331 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-f66fc9f65-5j7mt_2ace45ac-e8fa-4b58-b40c-c32fbc5a1c6e/prometheus-operator-admission-webhook/0.log" Dec 10 16:53:31 crc kubenswrapper[4755]: I1210 16:53:31.116771 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-d8bb48f5d-57crm_aed5adbf-d512-4557-bb9a-474301586611/operator/0.log" Dec 10 16:53:31 crc kubenswrapper[4755]: I1210 16:53:31.208053 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5446b9c989-rwkrm_5dad8e56-f3d6-4d95-bd98-96b2f7ae1d6e/perses-operator/0.log" Dec 10 16:53:44 crc kubenswrapper[4755]: I1210 16:53:44.649325 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-558c4df967-tdf8t_57ef8333-3c3c-4e02-ad27-24ccac555a55/manager/0.log" Dec 10 16:53:44 crc kubenswrapper[4755]: I1210 16:53:44.688549 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-558c4df967-tdf8t_57ef8333-3c3c-4e02-ad27-24ccac555a55/kube-rbac-proxy/0.log" Dec 10 16:55:10 crc kubenswrapper[4755]: I1210 16:55:10.359574 4755 patch_prober.go:28] interesting pod/machine-config-daemon-ggt8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 16:55:10 crc kubenswrapper[4755]: I1210 16:55:10.360304 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 16:55:14 crc kubenswrapper[4755]: I1210 16:55:14.952881 4755 generic.go:334] "Generic (PLEG): container finished" podID="a429ed81-4dbb-4907-91dd-9987257da152" containerID="aec6da169d3c8d1b56af57192a4764725497636eed9f3402b2489c8596e88395" exitCode=0 Dec 10 16:55:14 crc kubenswrapper[4755]: I1210 16:55:14.953021 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-swgtc/must-gather-hksrt" event={"ID":"a429ed81-4dbb-4907-91dd-9987257da152","Type":"ContainerDied","Data":"aec6da169d3c8d1b56af57192a4764725497636eed9f3402b2489c8596e88395"} Dec 10 16:55:14 crc kubenswrapper[4755]: I1210 16:55:14.953911 4755 scope.go:117] "RemoveContainer" containerID="aec6da169d3c8d1b56af57192a4764725497636eed9f3402b2489c8596e88395" Dec 10 16:55:15 crc kubenswrapper[4755]: I1210 16:55:15.981626 4755 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-must-gather-swgtc_must-gather-hksrt_a429ed81-4dbb-4907-91dd-9987257da152/gather/0.log" Dec 10 16:55:23 crc kubenswrapper[4755]: I1210 16:55:23.980523 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-swgtc/must-gather-hksrt"] Dec 10 16:55:23 crc kubenswrapper[4755]: I1210 16:55:23.981413 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-swgtc/must-gather-hksrt" podUID="a429ed81-4dbb-4907-91dd-9987257da152" containerName="copy" containerID="cri-o://5e2e28a34e8f781fd28b00b5f78403e5ab1933b63d2afa7f1b75a75cabf716c6" gracePeriod=2 Dec 10 16:55:23 crc kubenswrapper[4755]: I1210 16:55:23.995797 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-swgtc/must-gather-hksrt"] Dec 10 16:55:24 crc kubenswrapper[4755]: I1210 16:55:24.513837 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-swgtc_must-gather-hksrt_a429ed81-4dbb-4907-91dd-9987257da152/copy/0.log" Dec 10 16:55:24 crc kubenswrapper[4755]: I1210 16:55:24.514564 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-swgtc/must-gather-hksrt" Dec 10 16:55:24 crc kubenswrapper[4755]: I1210 16:55:24.717261 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7cn9\" (UniqueName: \"kubernetes.io/projected/a429ed81-4dbb-4907-91dd-9987257da152-kube-api-access-j7cn9\") pod \"a429ed81-4dbb-4907-91dd-9987257da152\" (UID: \"a429ed81-4dbb-4907-91dd-9987257da152\") " Dec 10 16:55:24 crc kubenswrapper[4755]: I1210 16:55:24.717595 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a429ed81-4dbb-4907-91dd-9987257da152-must-gather-output\") pod \"a429ed81-4dbb-4907-91dd-9987257da152\" (UID: \"a429ed81-4dbb-4907-91dd-9987257da152\") " Dec 10 16:55:24 crc kubenswrapper[4755]: I1210 16:55:24.757788 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a429ed81-4dbb-4907-91dd-9987257da152-kube-api-access-j7cn9" (OuterVolumeSpecName: "kube-api-access-j7cn9") pod "a429ed81-4dbb-4907-91dd-9987257da152" (UID: "a429ed81-4dbb-4907-91dd-9987257da152"). InnerVolumeSpecName "kube-api-access-j7cn9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 16:55:24 crc kubenswrapper[4755]: I1210 16:55:24.822284 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7cn9\" (UniqueName: \"kubernetes.io/projected/a429ed81-4dbb-4907-91dd-9987257da152-kube-api-access-j7cn9\") on node \"crc\" DevicePath \"\"" Dec 10 16:55:24 crc kubenswrapper[4755]: I1210 16:55:24.913407 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a429ed81-4dbb-4907-91dd-9987257da152-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "a429ed81-4dbb-4907-91dd-9987257da152" (UID: "a429ed81-4dbb-4907-91dd-9987257da152"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 16:55:24 crc kubenswrapper[4755]: I1210 16:55:24.923444 4755 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a429ed81-4dbb-4907-91dd-9987257da152-must-gather-output\") on node \"crc\" DevicePath \"\"" Dec 10 16:55:25 crc kubenswrapper[4755]: I1210 16:55:25.068646 4755 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-swgtc_must-gather-hksrt_a429ed81-4dbb-4907-91dd-9987257da152/copy/0.log" Dec 10 16:55:25 crc kubenswrapper[4755]: I1210 16:55:25.069036 4755 generic.go:334] "Generic (PLEG): container finished" podID="a429ed81-4dbb-4907-91dd-9987257da152" containerID="5e2e28a34e8f781fd28b00b5f78403e5ab1933b63d2afa7f1b75a75cabf716c6" exitCode=143 Dec 10 16:55:25 crc kubenswrapper[4755]: I1210 16:55:25.069085 4755 scope.go:117] "RemoveContainer" containerID="5e2e28a34e8f781fd28b00b5f78403e5ab1933b63d2afa7f1b75a75cabf716c6" Dec 10 16:55:25 crc kubenswrapper[4755]: I1210 16:55:25.069112 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-swgtc/must-gather-hksrt" Dec 10 16:55:25 crc kubenswrapper[4755]: I1210 16:55:25.109304 4755 scope.go:117] "RemoveContainer" containerID="aec6da169d3c8d1b56af57192a4764725497636eed9f3402b2489c8596e88395" Dec 10 16:55:25 crc kubenswrapper[4755]: I1210 16:55:25.171101 4755 scope.go:117] "RemoveContainer" containerID="5e2e28a34e8f781fd28b00b5f78403e5ab1933b63d2afa7f1b75a75cabf716c6" Dec 10 16:55:25 crc kubenswrapper[4755]: E1210 16:55:25.171907 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e2e28a34e8f781fd28b00b5f78403e5ab1933b63d2afa7f1b75a75cabf716c6\": container with ID starting with 5e2e28a34e8f781fd28b00b5f78403e5ab1933b63d2afa7f1b75a75cabf716c6 not found: ID does not exist" containerID="5e2e28a34e8f781fd28b00b5f78403e5ab1933b63d2afa7f1b75a75cabf716c6" Dec 10 16:55:25 crc kubenswrapper[4755]: I1210 16:55:25.171937 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e2e28a34e8f781fd28b00b5f78403e5ab1933b63d2afa7f1b75a75cabf716c6"} err="failed to get container status \"5e2e28a34e8f781fd28b00b5f78403e5ab1933b63d2afa7f1b75a75cabf716c6\": rpc error: code = NotFound desc = could not find container \"5e2e28a34e8f781fd28b00b5f78403e5ab1933b63d2afa7f1b75a75cabf716c6\": container with ID starting with 5e2e28a34e8f781fd28b00b5f78403e5ab1933b63d2afa7f1b75a75cabf716c6 not found: ID does not exist" Dec 10 16:55:25 crc kubenswrapper[4755]: I1210 16:55:25.171959 4755 scope.go:117] "RemoveContainer" containerID="aec6da169d3c8d1b56af57192a4764725497636eed9f3402b2489c8596e88395" Dec 10 16:55:25 crc kubenswrapper[4755]: E1210 16:55:25.172139 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aec6da169d3c8d1b56af57192a4764725497636eed9f3402b2489c8596e88395\": container with ID starting with aec6da169d3c8d1b56af57192a4764725497636eed9f3402b2489c8596e88395 not found: ID does not exist" containerID="aec6da169d3c8d1b56af57192a4764725497636eed9f3402b2489c8596e88395" Dec 10 16:55:25 crc kubenswrapper[4755]: I1210 16:55:25.172156 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aec6da169d3c8d1b56af57192a4764725497636eed9f3402b2489c8596e88395"} err="failed to get container status 
\"aec6da169d3c8d1b56af57192a4764725497636eed9f3402b2489c8596e88395\": rpc error: code = NotFound desc = could not find container \"aec6da169d3c8d1b56af57192a4764725497636eed9f3402b2489c8596e88395\": container with ID starting with aec6da169d3c8d1b56af57192a4764725497636eed9f3402b2489c8596e88395 not found: ID does not exist" Dec 10 16:55:25 crc kubenswrapper[4755]: I1210 16:55:25.778894 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a429ed81-4dbb-4907-91dd-9987257da152" path="/var/lib/kubelet/pods/a429ed81-4dbb-4907-91dd-9987257da152/volumes" Dec 10 16:55:40 crc kubenswrapper[4755]: I1210 16:55:40.359697 4755 patch_prober.go:28] interesting pod/machine-config-daemon-ggt8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 16:55:40 crc kubenswrapper[4755]: I1210 16:55:40.360227 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 16:56:10 crc kubenswrapper[4755]: I1210 16:56:10.358787 4755 patch_prober.go:28] interesting pod/machine-config-daemon-ggt8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 16:56:10 crc kubenswrapper[4755]: I1210 16:56:10.359299 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 16:56:10 crc kubenswrapper[4755]: I1210 16:56:10.359354 4755 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" Dec 10 16:56:10 crc kubenswrapper[4755]: I1210 16:56:10.361012 4755 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"89057655675f6d124f1d2ccf4bdb67361d7785d60846c227de326814a7f4d3a9"} pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 10 16:56:10 crc kubenswrapper[4755]: I1210 16:56:10.361137 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" containerName="machine-config-daemon" containerID="cri-o://89057655675f6d124f1d2ccf4bdb67361d7785d60846c227de326814a7f4d3a9" gracePeriod=600 Dec 10 16:56:10 crc kubenswrapper[4755]: I1210 16:56:10.547525 4755 generic.go:334] "Generic (PLEG): container finished" podID="b132a8b9-1c99-414d-8773-229bf36b305d" containerID="89057655675f6d124f1d2ccf4bdb67361d7785d60846c227de326814a7f4d3a9" exitCode=0 Dec 10 16:56:10 crc kubenswrapper[4755]: I1210 16:56:10.547606 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" 
event={"ID":"b132a8b9-1c99-414d-8773-229bf36b305d","Type":"ContainerDied","Data":"89057655675f6d124f1d2ccf4bdb67361d7785d60846c227de326814a7f4d3a9"} Dec 10 16:56:10 crc kubenswrapper[4755]: I1210 16:56:10.547897 4755 scope.go:117] "RemoveContainer" containerID="c02a5ae0f2b694ce1165db44430b56590e65e71f05780d61256f730ed3b1326e" Dec 10 16:56:11 crc kubenswrapper[4755]: I1210 16:56:11.558812 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" event={"ID":"b132a8b9-1c99-414d-8773-229bf36b305d","Type":"ContainerStarted","Data":"c6023c772f9cc48b73b2c2cb21153ab38fb3d0f36b9615d776f62c1c12e7d3a8"} Dec 10 16:57:24 crc kubenswrapper[4755]: I1210 16:57:24.303312 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-trtz6"] Dec 10 16:57:24 crc kubenswrapper[4755]: E1210 16:57:24.304533 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f4366b7-c8c9-46af-a5fa-0448ac3aadf4" containerName="extract-utilities" Dec 10 16:57:24 crc kubenswrapper[4755]: I1210 16:57:24.304555 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f4366b7-c8c9-46af-a5fa-0448ac3aadf4" containerName="extract-utilities" Dec 10 16:57:24 crc kubenswrapper[4755]: E1210 16:57:24.304598 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f4366b7-c8c9-46af-a5fa-0448ac3aadf4" containerName="extract-content" Dec 10 16:57:24 crc kubenswrapper[4755]: I1210 16:57:24.304608 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f4366b7-c8c9-46af-a5fa-0448ac3aadf4" containerName="extract-content" Dec 10 16:57:24 crc kubenswrapper[4755]: E1210 16:57:24.304627 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a429ed81-4dbb-4907-91dd-9987257da152" containerName="gather" Dec 10 16:57:24 crc kubenswrapper[4755]: I1210 16:57:24.304635 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="a429ed81-4dbb-4907-91dd-9987257da152" containerName="gather" Dec 10 16:57:24 crc kubenswrapper[4755]: E1210 16:57:24.304658 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a429ed81-4dbb-4907-91dd-9987257da152" containerName="copy" Dec 10 16:57:24 crc kubenswrapper[4755]: I1210 16:57:24.304666 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="a429ed81-4dbb-4907-91dd-9987257da152" containerName="copy" Dec 10 16:57:24 crc kubenswrapper[4755]: E1210 16:57:24.304687 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f4366b7-c8c9-46af-a5fa-0448ac3aadf4" containerName="registry-server" Dec 10 16:57:24 crc kubenswrapper[4755]: I1210 16:57:24.304695 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f4366b7-c8c9-46af-a5fa-0448ac3aadf4" containerName="registry-server" Dec 10 16:57:24 crc kubenswrapper[4755]: I1210 16:57:24.304934 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f4366b7-c8c9-46af-a5fa-0448ac3aadf4" containerName="registry-server" Dec 10 16:57:24 crc kubenswrapper[4755]: I1210 16:57:24.304967 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="a429ed81-4dbb-4907-91dd-9987257da152" containerName="gather" Dec 10 16:57:24 crc kubenswrapper[4755]: I1210 16:57:24.304979 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="a429ed81-4dbb-4907-91dd-9987257da152" containerName="copy" Dec 10 16:57:24 crc kubenswrapper[4755]: I1210 16:57:24.307101 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-trtz6" Dec 10 16:57:24 crc kubenswrapper[4755]: I1210 16:57:24.348343 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-trtz6"] Dec 10 16:57:24 crc kubenswrapper[4755]: I1210 16:57:24.395866 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/015ccb61-36a8-4353-9ac1-f5fc815d9976-utilities\") pod \"certified-operators-trtz6\" (UID: \"015ccb61-36a8-4353-9ac1-f5fc815d9976\") " pod="openshift-marketplace/certified-operators-trtz6" Dec 10 16:57:24 crc kubenswrapper[4755]: I1210 16:57:24.396233 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/015ccb61-36a8-4353-9ac1-f5fc815d9976-catalog-content\") pod \"certified-operators-trtz6\" (UID: \"015ccb61-36a8-4353-9ac1-f5fc815d9976\") " pod="openshift-marketplace/certified-operators-trtz6" Dec 10 16:57:24 crc kubenswrapper[4755]: I1210 16:57:24.396444 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pmxv\" (UniqueName: \"kubernetes.io/projected/015ccb61-36a8-4353-9ac1-f5fc815d9976-kube-api-access-9pmxv\") pod \"certified-operators-trtz6\" (UID: \"015ccb61-36a8-4353-9ac1-f5fc815d9976\") " pod="openshift-marketplace/certified-operators-trtz6" Dec 10 16:57:24 crc kubenswrapper[4755]: I1210 16:57:24.461878 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9ltpr"] Dec 10 16:57:24 crc kubenswrapper[4755]: I1210 16:57:24.465508 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9ltpr" Dec 10 16:57:24 crc kubenswrapper[4755]: I1210 16:57:24.475835 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9ltpr"] Dec 10 16:57:24 crc kubenswrapper[4755]: I1210 16:57:24.499358 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/015ccb61-36a8-4353-9ac1-f5fc815d9976-utilities\") pod \"certified-operators-trtz6\" (UID: \"015ccb61-36a8-4353-9ac1-f5fc815d9976\") " pod="openshift-marketplace/certified-operators-trtz6" Dec 10 16:57:24 crc kubenswrapper[4755]: I1210 16:57:24.499422 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/015ccb61-36a8-4353-9ac1-f5fc815d9976-catalog-content\") pod \"certified-operators-trtz6\" (UID: \"015ccb61-36a8-4353-9ac1-f5fc815d9976\") " pod="openshift-marketplace/certified-operators-trtz6" Dec 10 16:57:24 crc kubenswrapper[4755]: I1210 16:57:24.499553 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pmxv\" (UniqueName: \"kubernetes.io/projected/015ccb61-36a8-4353-9ac1-f5fc815d9976-kube-api-access-9pmxv\") pod \"certified-operators-trtz6\" (UID: \"015ccb61-36a8-4353-9ac1-f5fc815d9976\") " pod="openshift-marketplace/certified-operators-trtz6" Dec 10 16:57:24 crc kubenswrapper[4755]: I1210 16:57:24.500356 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/015ccb61-36a8-4353-9ac1-f5fc815d9976-utilities\") pod \"certified-operators-trtz6\" (UID: \"015ccb61-36a8-4353-9ac1-f5fc815d9976\") " pod="openshift-marketplace/certified-operators-trtz6" Dec 10 16:57:24 crc kubenswrapper[4755]: I1210 16:57:24.500781 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/015ccb61-36a8-4353-9ac1-f5fc815d9976-catalog-content\") pod \"certified-operators-trtz6\" (UID: \"015ccb61-36a8-4353-9ac1-f5fc815d9976\") " pod="openshift-marketplace/certified-operators-trtz6" Dec 10 16:57:24 crc kubenswrapper[4755]: I1210 16:57:24.521447 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pmxv\" (UniqueName: \"kubernetes.io/projected/015ccb61-36a8-4353-9ac1-f5fc815d9976-kube-api-access-9pmxv\") pod \"certified-operators-trtz6\" (UID: \"015ccb61-36a8-4353-9ac1-f5fc815d9976\") " pod="openshift-marketplace/certified-operators-trtz6" Dec 10 16:57:24 crc kubenswrapper[4755]: I1210 16:57:24.602453 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjxxz\" (UniqueName: \"kubernetes.io/projected/f8899904-239f-4c48-85c0-583d0323fc5e-kube-api-access-tjxxz\") pod \"redhat-marketplace-9ltpr\" (UID: \"f8899904-239f-4c48-85c0-583d0323fc5e\") " pod="openshift-marketplace/redhat-marketplace-9ltpr" Dec 10 16:57:24 crc kubenswrapper[4755]: I1210 16:57:24.602557 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8899904-239f-4c48-85c0-583d0323fc5e-utilities\") pod \"redhat-marketplace-9ltpr\" (UID: \"f8899904-239f-4c48-85c0-583d0323fc5e\") " pod="openshift-marketplace/redhat-marketplace-9ltpr" Dec 10 16:57:24 crc kubenswrapper[4755]: I1210 16:57:24.602753 4755 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8899904-239f-4c48-85c0-583d0323fc5e-catalog-content\") pod \"redhat-marketplace-9ltpr\" (UID: \"f8899904-239f-4c48-85c0-583d0323fc5e\") " pod="openshift-marketplace/redhat-marketplace-9ltpr" Dec 10 16:57:24 crc kubenswrapper[4755]: I1210 16:57:24.631173 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-trtz6" Dec 10 16:57:24 crc kubenswrapper[4755]: I1210 16:57:24.715956 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8899904-239f-4c48-85c0-583d0323fc5e-catalog-content\") pod \"redhat-marketplace-9ltpr\" (UID: \"f8899904-239f-4c48-85c0-583d0323fc5e\") " pod="openshift-marketplace/redhat-marketplace-9ltpr" Dec 10 16:57:24 crc kubenswrapper[4755]: I1210 16:57:24.716118 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjxxz\" (UniqueName: \"kubernetes.io/projected/f8899904-239f-4c48-85c0-583d0323fc5e-kube-api-access-tjxxz\") pod \"redhat-marketplace-9ltpr\" (UID: \"f8899904-239f-4c48-85c0-583d0323fc5e\") " pod="openshift-marketplace/redhat-marketplace-9ltpr" Dec 10 16:57:24 crc kubenswrapper[4755]: I1210 16:57:24.716137 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8899904-239f-4c48-85c0-583d0323fc5e-utilities\") pod \"redhat-marketplace-9ltpr\" (UID: \"f8899904-239f-4c48-85c0-583d0323fc5e\") " pod="openshift-marketplace/redhat-marketplace-9ltpr" Dec 10 16:57:24 crc kubenswrapper[4755]: I1210 16:57:24.716559 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8899904-239f-4c48-85c0-583d0323fc5e-utilities\") pod \"redhat-marketplace-9ltpr\" (UID: \"f8899904-239f-4c48-85c0-583d0323fc5e\") " pod="openshift-marketplace/redhat-marketplace-9ltpr" Dec 10 16:57:24 crc kubenswrapper[4755]: I1210 16:57:24.716666 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8899904-239f-4c48-85c0-583d0323fc5e-catalog-content\") pod \"redhat-marketplace-9ltpr\" (UID: \"f8899904-239f-4c48-85c0-583d0323fc5e\") " pod="openshift-marketplace/redhat-marketplace-9ltpr" Dec 10 16:57:24 crc kubenswrapper[4755]: I1210 16:57:24.733849 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjxxz\" (UniqueName: \"kubernetes.io/projected/f8899904-239f-4c48-85c0-583d0323fc5e-kube-api-access-tjxxz\") pod \"redhat-marketplace-9ltpr\" (UID: \"f8899904-239f-4c48-85c0-583d0323fc5e\") " pod="openshift-marketplace/redhat-marketplace-9ltpr" Dec 10 16:57:24 crc kubenswrapper[4755]: I1210 16:57:24.788877 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9ltpr" Dec 10 16:57:25 crc kubenswrapper[4755]: I1210 16:57:25.209246 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-trtz6"] Dec 10 16:57:25 crc kubenswrapper[4755]: I1210 16:57:25.374342 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9ltpr"] Dec 10 16:57:25 crc kubenswrapper[4755]: W1210 16:57:25.376245 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf8899904_239f_4c48_85c0_583d0323fc5e.slice/crio-139340662a91218aa654990f96d3fe7259828c61420c6534e45086bf001c36e3 WatchSource:0}: Error finding container 139340662a91218aa654990f96d3fe7259828c61420c6534e45086bf001c36e3: Status 404 returned error can't find the container with id 139340662a91218aa654990f96d3fe7259828c61420c6534e45086bf001c36e3 Dec 10 16:57:25 crc kubenswrapper[4755]: I1210 16:57:25.622042 4755 generic.go:334] "Generic (PLEG): container finished" podID="f8899904-239f-4c48-85c0-583d0323fc5e" containerID="89a2bfcfc50ccbf87bcf4f412a1e9d8a32d9cc5b5c4ad8cab76acd3d8478107a" exitCode=0 Dec 10 16:57:25 crc kubenswrapper[4755]: I1210 16:57:25.622120 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9ltpr" event={"ID":"f8899904-239f-4c48-85c0-583d0323fc5e","Type":"ContainerDied","Data":"89a2bfcfc50ccbf87bcf4f412a1e9d8a32d9cc5b5c4ad8cab76acd3d8478107a"} Dec 10 16:57:25 crc kubenswrapper[4755]: I1210 16:57:25.622146 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9ltpr" event={"ID":"f8899904-239f-4c48-85c0-583d0323fc5e","Type":"ContainerStarted","Data":"139340662a91218aa654990f96d3fe7259828c61420c6534e45086bf001c36e3"} Dec 10 16:57:25 crc kubenswrapper[4755]: I1210 16:57:25.623971 4755 generic.go:334] "Generic (PLEG): container finished" podID="015ccb61-36a8-4353-9ac1-f5fc815d9976" containerID="ce53e16b6006afc03441fbb5bd15196f998583fc72ea5382fdbf758790f1c570" exitCode=0 Dec 10 16:57:25 crc kubenswrapper[4755]: I1210 16:57:25.624011 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-trtz6" event={"ID":"015ccb61-36a8-4353-9ac1-f5fc815d9976","Type":"ContainerDied","Data":"ce53e16b6006afc03441fbb5bd15196f998583fc72ea5382fdbf758790f1c570"} Dec 10 16:57:25 crc kubenswrapper[4755]: I1210 16:57:25.624063 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-trtz6" event={"ID":"015ccb61-36a8-4353-9ac1-f5fc815d9976","Type":"ContainerStarted","Data":"cff374b8cf5394aaf8d803c519c3e8dfc116c887e02dae8059afdf613176c49b"} Dec 10 16:57:25 crc kubenswrapper[4755]: I1210 16:57:25.624025 4755 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 10 16:57:26 crc kubenswrapper[4755]: I1210 16:57:26.640115 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-trtz6" event={"ID":"015ccb61-36a8-4353-9ac1-f5fc815d9976","Type":"ContainerStarted","Data":"8f5ff51c8e9d24b3b53a4e140c80fd8d8da75d4b3346cd34ac7ebae0eb616edf"} Dec 10 16:57:26 crc kubenswrapper[4755]: I1210 16:57:26.870157 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7wcx7"] Dec 10 16:57:26 crc kubenswrapper[4755]: I1210 16:57:26.873169 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7wcx7" Dec 10 16:57:26 crc kubenswrapper[4755]: I1210 16:57:26.881580 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7wcx7"] Dec 10 16:57:26 crc kubenswrapper[4755]: I1210 16:57:26.974654 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bd183b2-bef7-416b-9dd7-138098b6481b-catalog-content\") pod \"redhat-operators-7wcx7\" (UID: \"4bd183b2-bef7-416b-9dd7-138098b6481b\") " pod="openshift-marketplace/redhat-operators-7wcx7" Dec 10 16:57:26 crc kubenswrapper[4755]: I1210 16:57:26.974718 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bd183b2-bef7-416b-9dd7-138098b6481b-utilities\") pod \"redhat-operators-7wcx7\" (UID: \"4bd183b2-bef7-416b-9dd7-138098b6481b\") " pod="openshift-marketplace/redhat-operators-7wcx7" Dec 10 16:57:26 crc kubenswrapper[4755]: I1210 16:57:26.974889 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpnk8\" (UniqueName: \"kubernetes.io/projected/4bd183b2-bef7-416b-9dd7-138098b6481b-kube-api-access-fpnk8\") pod \"redhat-operators-7wcx7\" (UID: \"4bd183b2-bef7-416b-9dd7-138098b6481b\") " pod="openshift-marketplace/redhat-operators-7wcx7" Dec 10 16:57:27 crc kubenswrapper[4755]: I1210 16:57:27.076841 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpnk8\" (UniqueName: \"kubernetes.io/projected/4bd183b2-bef7-416b-9dd7-138098b6481b-kube-api-access-fpnk8\") pod \"redhat-operators-7wcx7\" (UID: \"4bd183b2-bef7-416b-9dd7-138098b6481b\") " pod="openshift-marketplace/redhat-operators-7wcx7" Dec 10 16:57:27 crc kubenswrapper[4755]: I1210 16:57:27.077147 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bd183b2-bef7-416b-9dd7-138098b6481b-catalog-content\") pod \"redhat-operators-7wcx7\" (UID: \"4bd183b2-bef7-416b-9dd7-138098b6481b\") " pod="openshift-marketplace/redhat-operators-7wcx7" Dec 10 16:57:27 crc kubenswrapper[4755]: I1210 16:57:27.077188 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bd183b2-bef7-416b-9dd7-138098b6481b-utilities\") pod \"redhat-operators-7wcx7\" (UID: \"4bd183b2-bef7-416b-9dd7-138098b6481b\") " pod="openshift-marketplace/redhat-operators-7wcx7" Dec 10 16:57:27 crc kubenswrapper[4755]: I1210 16:57:27.077877 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bd183b2-bef7-416b-9dd7-138098b6481b-catalog-content\") pod \"redhat-operators-7wcx7\" (UID: \"4bd183b2-bef7-416b-9dd7-138098b6481b\") " pod="openshift-marketplace/redhat-operators-7wcx7" Dec 10 16:57:27 crc kubenswrapper[4755]: I1210 16:57:27.077908 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bd183b2-bef7-416b-9dd7-138098b6481b-utilities\") pod \"redhat-operators-7wcx7\" (UID: \"4bd183b2-bef7-416b-9dd7-138098b6481b\") " pod="openshift-marketplace/redhat-operators-7wcx7" Dec 10 16:57:27 crc kubenswrapper[4755]: I1210 16:57:27.098388 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-fpnk8\" (UniqueName: \"kubernetes.io/projected/4bd183b2-bef7-416b-9dd7-138098b6481b-kube-api-access-fpnk8\") pod \"redhat-operators-7wcx7\" (UID: \"4bd183b2-bef7-416b-9dd7-138098b6481b\") " pod="openshift-marketplace/redhat-operators-7wcx7" Dec 10 16:57:27 crc kubenswrapper[4755]: I1210 16:57:27.208368 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7wcx7" Dec 10 16:57:27 crc kubenswrapper[4755]: I1210 16:57:27.652242 4755 generic.go:334] "Generic (PLEG): container finished" podID="015ccb61-36a8-4353-9ac1-f5fc815d9976" containerID="8f5ff51c8e9d24b3b53a4e140c80fd8d8da75d4b3346cd34ac7ebae0eb616edf" exitCode=0 Dec 10 16:57:27 crc kubenswrapper[4755]: I1210 16:57:27.652383 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-trtz6" event={"ID":"015ccb61-36a8-4353-9ac1-f5fc815d9976","Type":"ContainerDied","Data":"8f5ff51c8e9d24b3b53a4e140c80fd8d8da75d4b3346cd34ac7ebae0eb616edf"} Dec 10 16:57:27 crc kubenswrapper[4755]: I1210 16:57:27.657066 4755 generic.go:334] "Generic (PLEG): container finished" podID="f8899904-239f-4c48-85c0-583d0323fc5e" containerID="567b7560e745de51b2bd76f942d444f9db3b474542e090cb6e58ca4e10695abf" exitCode=0 Dec 10 16:57:27 crc kubenswrapper[4755]: I1210 16:57:27.657095 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9ltpr" event={"ID":"f8899904-239f-4c48-85c0-583d0323fc5e","Type":"ContainerDied","Data":"567b7560e745de51b2bd76f942d444f9db3b474542e090cb6e58ca4e10695abf"} Dec 10 16:57:27 crc kubenswrapper[4755]: I1210 16:57:27.701371 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7wcx7"] Dec 10 16:57:28 crc kubenswrapper[4755]: I1210 16:57:28.669329 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-trtz6" event={"ID":"015ccb61-36a8-4353-9ac1-f5fc815d9976","Type":"ContainerStarted","Data":"34bb6286a8cfb40616257a93baefe47eaed29065dd0445bdce0d8eaa6a1c9c4b"} Dec 10 16:57:28 crc kubenswrapper[4755]: I1210 16:57:28.673380 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9ltpr" event={"ID":"f8899904-239f-4c48-85c0-583d0323fc5e","Type":"ContainerStarted","Data":"6050a0c4858531d4b0e4852f2dc9fcf7b95a022f3351a53e0c65aa901b416ebd"} Dec 10 16:57:28 crc kubenswrapper[4755]: I1210 16:57:28.675208 4755 generic.go:334] "Generic (PLEG): container finished" podID="4bd183b2-bef7-416b-9dd7-138098b6481b" containerID="6b0746f20bec1ec77480be34f56fcdd6bdf09157254795398fb8cf22b7fcfcbe" exitCode=0 Dec 10 16:57:28 crc kubenswrapper[4755]: I1210 16:57:28.675246 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7wcx7" event={"ID":"4bd183b2-bef7-416b-9dd7-138098b6481b","Type":"ContainerDied","Data":"6b0746f20bec1ec77480be34f56fcdd6bdf09157254795398fb8cf22b7fcfcbe"} Dec 10 16:57:28 crc kubenswrapper[4755]: I1210 16:57:28.675268 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7wcx7" event={"ID":"4bd183b2-bef7-416b-9dd7-138098b6481b","Type":"ContainerStarted","Data":"a93dd0d88f737ca1f7f0e50e698e8a61e921131e6c7f148887060578d9baa649"} Dec 10 16:57:28 crc kubenswrapper[4755]: I1210 16:57:28.690045 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-trtz6" podStartSLOduration=2.123843673 
podStartE2EDuration="4.690027012s" podCreationTimestamp="2025-12-10 16:57:24 +0000 UTC" firstStartedPulling="2025-12-10 16:57:25.626073809 +0000 UTC m=+5642.226957451" lastFinishedPulling="2025-12-10 16:57:28.192257158 +0000 UTC m=+5644.793140790" observedRunningTime="2025-12-10 16:57:28.687094132 +0000 UTC m=+5645.287977774" watchObservedRunningTime="2025-12-10 16:57:28.690027012 +0000 UTC m=+5645.290910644" Dec 10 16:57:28 crc kubenswrapper[4755]: I1210 16:57:28.710906 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9ltpr" podStartSLOduration=2.197234524 podStartE2EDuration="4.710883491s" podCreationTimestamp="2025-12-10 16:57:24 +0000 UTC" firstStartedPulling="2025-12-10 16:57:25.623768027 +0000 UTC m=+5642.224651659" lastFinishedPulling="2025-12-10 16:57:28.137416994 +0000 UTC m=+5644.738300626" observedRunningTime="2025-12-10 16:57:28.705803342 +0000 UTC m=+5645.306686974" watchObservedRunningTime="2025-12-10 16:57:28.710883491 +0000 UTC m=+5645.311767133" Dec 10 16:57:29 crc kubenswrapper[4755]: I1210 16:57:29.686737 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7wcx7" event={"ID":"4bd183b2-bef7-416b-9dd7-138098b6481b","Type":"ContainerStarted","Data":"a345df7889e3a98afdcb47521957993dc0878fdc0d1acdd2f0f40d6df60c95a6"} Dec 10 16:57:34 crc kubenswrapper[4755]: I1210 16:57:34.632273 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-trtz6" Dec 10 16:57:34 crc kubenswrapper[4755]: I1210 16:57:34.632942 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-trtz6" Dec 10 16:57:34 crc kubenswrapper[4755]: I1210 16:57:34.708844 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-trtz6" Dec 10 16:57:34 crc kubenswrapper[4755]: I1210 16:57:34.757988 4755 generic.go:334] "Generic (PLEG): container finished" podID="4bd183b2-bef7-416b-9dd7-138098b6481b" containerID="a345df7889e3a98afdcb47521957993dc0878fdc0d1acdd2f0f40d6df60c95a6" exitCode=0 Dec 10 16:57:34 crc kubenswrapper[4755]: I1210 16:57:34.758030 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7wcx7" event={"ID":"4bd183b2-bef7-416b-9dd7-138098b6481b","Type":"ContainerDied","Data":"a345df7889e3a98afdcb47521957993dc0878fdc0d1acdd2f0f40d6df60c95a6"} Dec 10 16:57:34 crc kubenswrapper[4755]: I1210 16:57:34.790759 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9ltpr" Dec 10 16:57:34 crc kubenswrapper[4755]: I1210 16:57:34.790835 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9ltpr" Dec 10 16:57:34 crc kubenswrapper[4755]: I1210 16:57:34.832308 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-trtz6" Dec 10 16:57:34 crc kubenswrapper[4755]: I1210 16:57:34.846654 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9ltpr" Dec 10 16:57:35 crc kubenswrapper[4755]: I1210 16:57:35.837013 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9ltpr" Dec 10 16:57:37 crc kubenswrapper[4755]: I1210 16:57:37.253217 4755 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openshift-marketplace/certified-operators-trtz6"] Dec 10 16:57:37 crc kubenswrapper[4755]: I1210 16:57:37.253989 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-trtz6" podUID="015ccb61-36a8-4353-9ac1-f5fc815d9976" containerName="registry-server" containerID="cri-o://34bb6286a8cfb40616257a93baefe47eaed29065dd0445bdce0d8eaa6a1c9c4b" gracePeriod=2 Dec 10 16:57:37 crc kubenswrapper[4755]: I1210 16:57:37.464538 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9ltpr"] Dec 10 16:57:37 crc kubenswrapper[4755]: I1210 16:57:37.793958 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7wcx7" event={"ID":"4bd183b2-bef7-416b-9dd7-138098b6481b","Type":"ContainerStarted","Data":"b420e3f5c6159b3f073a82992b1690dccbe4211c33873fe8697462b499362b95"} Dec 10 16:57:37 crc kubenswrapper[4755]: I1210 16:57:37.801820 4755 generic.go:334] "Generic (PLEG): container finished" podID="015ccb61-36a8-4353-9ac1-f5fc815d9976" containerID="34bb6286a8cfb40616257a93baefe47eaed29065dd0445bdce0d8eaa6a1c9c4b" exitCode=0 Dec 10 16:57:37 crc kubenswrapper[4755]: I1210 16:57:37.801902 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-trtz6" event={"ID":"015ccb61-36a8-4353-9ac1-f5fc815d9976","Type":"ContainerDied","Data":"34bb6286a8cfb40616257a93baefe47eaed29065dd0445bdce0d8eaa6a1c9c4b"} Dec 10 16:57:37 crc kubenswrapper[4755]: I1210 16:57:37.801956 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-trtz6" event={"ID":"015ccb61-36a8-4353-9ac1-f5fc815d9976","Type":"ContainerDied","Data":"cff374b8cf5394aaf8d803c519c3e8dfc116c887e02dae8059afdf613176c49b"} Dec 10 16:57:37 crc kubenswrapper[4755]: I1210 16:57:37.801971 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cff374b8cf5394aaf8d803c519c3e8dfc116c887e02dae8059afdf613176c49b" Dec 10 16:57:37 crc kubenswrapper[4755]: I1210 16:57:37.802141 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9ltpr" podUID="f8899904-239f-4c48-85c0-583d0323fc5e" containerName="registry-server" containerID="cri-o://6050a0c4858531d4b0e4852f2dc9fcf7b95a022f3351a53e0c65aa901b416ebd" gracePeriod=2 Dec 10 16:57:37 crc kubenswrapper[4755]: I1210 16:57:37.810643 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7wcx7" podStartSLOduration=3.633861282 podStartE2EDuration="11.81062741s" podCreationTimestamp="2025-12-10 16:57:26 +0000 UTC" firstStartedPulling="2025-12-10 16:57:28.676782311 +0000 UTC m=+5645.277665943" lastFinishedPulling="2025-12-10 16:57:36.853548439 +0000 UTC m=+5653.454432071" observedRunningTime="2025-12-10 16:57:37.808049949 +0000 UTC m=+5654.408933591" watchObservedRunningTime="2025-12-10 16:57:37.81062741 +0000 UTC m=+5654.411511042" Dec 10 16:57:37 crc kubenswrapper[4755]: I1210 16:57:37.858987 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-trtz6" Dec 10 16:57:37 crc kubenswrapper[4755]: I1210 16:57:37.888900 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/015ccb61-36a8-4353-9ac1-f5fc815d9976-utilities\") pod \"015ccb61-36a8-4353-9ac1-f5fc815d9976\" (UID: \"015ccb61-36a8-4353-9ac1-f5fc815d9976\") " Dec 10 16:57:37 crc kubenswrapper[4755]: I1210 16:57:37.889112 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9pmxv\" (UniqueName: \"kubernetes.io/projected/015ccb61-36a8-4353-9ac1-f5fc815d9976-kube-api-access-9pmxv\") pod \"015ccb61-36a8-4353-9ac1-f5fc815d9976\" (UID: \"015ccb61-36a8-4353-9ac1-f5fc815d9976\") " Dec 10 16:57:37 crc kubenswrapper[4755]: I1210 16:57:37.889167 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/015ccb61-36a8-4353-9ac1-f5fc815d9976-catalog-content\") pod \"015ccb61-36a8-4353-9ac1-f5fc815d9976\" (UID: \"015ccb61-36a8-4353-9ac1-f5fc815d9976\") " Dec 10 16:57:37 crc kubenswrapper[4755]: I1210 16:57:37.891073 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/015ccb61-36a8-4353-9ac1-f5fc815d9976-utilities" (OuterVolumeSpecName: "utilities") pod "015ccb61-36a8-4353-9ac1-f5fc815d9976" (UID: "015ccb61-36a8-4353-9ac1-f5fc815d9976"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 16:57:37 crc kubenswrapper[4755]: I1210 16:57:37.896742 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/015ccb61-36a8-4353-9ac1-f5fc815d9976-kube-api-access-9pmxv" (OuterVolumeSpecName: "kube-api-access-9pmxv") pod "015ccb61-36a8-4353-9ac1-f5fc815d9976" (UID: "015ccb61-36a8-4353-9ac1-f5fc815d9976"). InnerVolumeSpecName "kube-api-access-9pmxv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 16:57:37 crc kubenswrapper[4755]: I1210 16:57:37.945804 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/015ccb61-36a8-4353-9ac1-f5fc815d9976-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "015ccb61-36a8-4353-9ac1-f5fc815d9976" (UID: "015ccb61-36a8-4353-9ac1-f5fc815d9976"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 16:57:37 crc kubenswrapper[4755]: I1210 16:57:37.990561 4755 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/015ccb61-36a8-4353-9ac1-f5fc815d9976-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 16:57:37 crc kubenswrapper[4755]: I1210 16:57:37.990606 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9pmxv\" (UniqueName: \"kubernetes.io/projected/015ccb61-36a8-4353-9ac1-f5fc815d9976-kube-api-access-9pmxv\") on node \"crc\" DevicePath \"\"" Dec 10 16:57:37 crc kubenswrapper[4755]: I1210 16:57:37.990717 4755 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/015ccb61-36a8-4353-9ac1-f5fc815d9976-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 16:57:38 crc kubenswrapper[4755]: I1210 16:57:38.314292 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9ltpr" Dec 10 16:57:38 crc kubenswrapper[4755]: I1210 16:57:38.402192 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8899904-239f-4c48-85c0-583d0323fc5e-utilities\") pod \"f8899904-239f-4c48-85c0-583d0323fc5e\" (UID: \"f8899904-239f-4c48-85c0-583d0323fc5e\") " Dec 10 16:57:38 crc kubenswrapper[4755]: I1210 16:57:38.402313 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjxxz\" (UniqueName: \"kubernetes.io/projected/f8899904-239f-4c48-85c0-583d0323fc5e-kube-api-access-tjxxz\") pod \"f8899904-239f-4c48-85c0-583d0323fc5e\" (UID: \"f8899904-239f-4c48-85c0-583d0323fc5e\") " Dec 10 16:57:38 crc kubenswrapper[4755]: I1210 16:57:38.402398 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8899904-239f-4c48-85c0-583d0323fc5e-catalog-content\") pod \"f8899904-239f-4c48-85c0-583d0323fc5e\" (UID: \"f8899904-239f-4c48-85c0-583d0323fc5e\") " Dec 10 16:57:38 crc kubenswrapper[4755]: I1210 16:57:38.403050 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8899904-239f-4c48-85c0-583d0323fc5e-utilities" (OuterVolumeSpecName: "utilities") pod "f8899904-239f-4c48-85c0-583d0323fc5e" (UID: "f8899904-239f-4c48-85c0-583d0323fc5e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 16:57:38 crc kubenswrapper[4755]: I1210 16:57:38.403726 4755 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8899904-239f-4c48-85c0-583d0323fc5e-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 16:57:38 crc kubenswrapper[4755]: I1210 16:57:38.408202 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8899904-239f-4c48-85c0-583d0323fc5e-kube-api-access-tjxxz" (OuterVolumeSpecName: "kube-api-access-tjxxz") pod "f8899904-239f-4c48-85c0-583d0323fc5e" (UID: "f8899904-239f-4c48-85c0-583d0323fc5e"). InnerVolumeSpecName "kube-api-access-tjxxz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 16:57:38 crc kubenswrapper[4755]: I1210 16:57:38.421036 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8899904-239f-4c48-85c0-583d0323fc5e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f8899904-239f-4c48-85c0-583d0323fc5e" (UID: "f8899904-239f-4c48-85c0-583d0323fc5e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 16:57:38 crc kubenswrapper[4755]: I1210 16:57:38.506065 4755 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8899904-239f-4c48-85c0-583d0323fc5e-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 16:57:38 crc kubenswrapper[4755]: I1210 16:57:38.506141 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjxxz\" (UniqueName: \"kubernetes.io/projected/f8899904-239f-4c48-85c0-583d0323fc5e-kube-api-access-tjxxz\") on node \"crc\" DevicePath \"\"" Dec 10 16:57:38 crc kubenswrapper[4755]: I1210 16:57:38.836769 4755 generic.go:334] "Generic (PLEG): container finished" podID="f8899904-239f-4c48-85c0-583d0323fc5e" containerID="6050a0c4858531d4b0e4852f2dc9fcf7b95a022f3351a53e0c65aa901b416ebd" exitCode=0 Dec 10 16:57:38 crc kubenswrapper[4755]: I1210 16:57:38.837194 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-trtz6" Dec 10 16:57:38 crc kubenswrapper[4755]: I1210 16:57:38.838616 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9ltpr" Dec 10 16:57:38 crc kubenswrapper[4755]: I1210 16:57:38.838783 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9ltpr" event={"ID":"f8899904-239f-4c48-85c0-583d0323fc5e","Type":"ContainerDied","Data":"6050a0c4858531d4b0e4852f2dc9fcf7b95a022f3351a53e0c65aa901b416ebd"} Dec 10 16:57:38 crc kubenswrapper[4755]: I1210 16:57:38.838960 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9ltpr" event={"ID":"f8899904-239f-4c48-85c0-583d0323fc5e","Type":"ContainerDied","Data":"139340662a91218aa654990f96d3fe7259828c61420c6534e45086bf001c36e3"} Dec 10 16:57:38 crc kubenswrapper[4755]: I1210 16:57:38.839094 4755 scope.go:117] "RemoveContainer" containerID="6050a0c4858531d4b0e4852f2dc9fcf7b95a022f3351a53e0c65aa901b416ebd" Dec 10 16:57:38 crc kubenswrapper[4755]: I1210 16:57:38.888947 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-trtz6"] Dec 10 16:57:38 crc kubenswrapper[4755]: I1210 16:57:38.902018 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-trtz6"] Dec 10 16:57:38 crc kubenswrapper[4755]: I1210 16:57:38.902432 4755 scope.go:117] "RemoveContainer" containerID="567b7560e745de51b2bd76f942d444f9db3b474542e090cb6e58ca4e10695abf" Dec 10 16:57:38 crc kubenswrapper[4755]: I1210 16:57:38.911715 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9ltpr"] Dec 10 16:57:38 crc kubenswrapper[4755]: I1210 16:57:38.922175 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9ltpr"] Dec 10 16:57:38 crc kubenswrapper[4755]: I1210 16:57:38.931887 4755 scope.go:117] "RemoveContainer" containerID="89a2bfcfc50ccbf87bcf4f412a1e9d8a32d9cc5b5c4ad8cab76acd3d8478107a" Dec 10 16:57:38 crc kubenswrapper[4755]: I1210 16:57:38.987003 4755 scope.go:117] "RemoveContainer" containerID="6050a0c4858531d4b0e4852f2dc9fcf7b95a022f3351a53e0c65aa901b416ebd" Dec 10 16:57:38 crc kubenswrapper[4755]: E1210 16:57:38.987449 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"6050a0c4858531d4b0e4852f2dc9fcf7b95a022f3351a53e0c65aa901b416ebd\": container with ID starting with 6050a0c4858531d4b0e4852f2dc9fcf7b95a022f3351a53e0c65aa901b416ebd not found: ID does not exist" containerID="6050a0c4858531d4b0e4852f2dc9fcf7b95a022f3351a53e0c65aa901b416ebd" Dec 10 16:57:38 crc kubenswrapper[4755]: I1210 16:57:38.987503 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6050a0c4858531d4b0e4852f2dc9fcf7b95a022f3351a53e0c65aa901b416ebd"} err="failed to get container status \"6050a0c4858531d4b0e4852f2dc9fcf7b95a022f3351a53e0c65aa901b416ebd\": rpc error: code = NotFound desc = could not find container \"6050a0c4858531d4b0e4852f2dc9fcf7b95a022f3351a53e0c65aa901b416ebd\": container with ID starting with 6050a0c4858531d4b0e4852f2dc9fcf7b95a022f3351a53e0c65aa901b416ebd not found: ID does not exist" Dec 10 16:57:38 crc kubenswrapper[4755]: I1210 16:57:38.987522 4755 scope.go:117] "RemoveContainer" containerID="567b7560e745de51b2bd76f942d444f9db3b474542e090cb6e58ca4e10695abf" Dec 10 16:57:38 crc kubenswrapper[4755]: E1210 16:57:38.987838 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"567b7560e745de51b2bd76f942d444f9db3b474542e090cb6e58ca4e10695abf\": container with ID starting with 567b7560e745de51b2bd76f942d444f9db3b474542e090cb6e58ca4e10695abf not found: ID does not exist" containerID="567b7560e745de51b2bd76f942d444f9db3b474542e090cb6e58ca4e10695abf" Dec 10 16:57:38 crc kubenswrapper[4755]: I1210 16:57:38.987858 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"567b7560e745de51b2bd76f942d444f9db3b474542e090cb6e58ca4e10695abf"} err="failed to get container status \"567b7560e745de51b2bd76f942d444f9db3b474542e090cb6e58ca4e10695abf\": rpc error: code = NotFound desc = could not find container \"567b7560e745de51b2bd76f942d444f9db3b474542e090cb6e58ca4e10695abf\": container with ID starting with 567b7560e745de51b2bd76f942d444f9db3b474542e090cb6e58ca4e10695abf not found: ID does not exist" Dec 10 16:57:38 crc kubenswrapper[4755]: I1210 16:57:38.987871 4755 scope.go:117] "RemoveContainer" containerID="89a2bfcfc50ccbf87bcf4f412a1e9d8a32d9cc5b5c4ad8cab76acd3d8478107a" Dec 10 16:57:38 crc kubenswrapper[4755]: E1210 16:57:38.988184 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89a2bfcfc50ccbf87bcf4f412a1e9d8a32d9cc5b5c4ad8cab76acd3d8478107a\": container with ID starting with 89a2bfcfc50ccbf87bcf4f412a1e9d8a32d9cc5b5c4ad8cab76acd3d8478107a not found: ID does not exist" containerID="89a2bfcfc50ccbf87bcf4f412a1e9d8a32d9cc5b5c4ad8cab76acd3d8478107a" Dec 10 16:57:38 crc kubenswrapper[4755]: I1210 16:57:38.988202 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89a2bfcfc50ccbf87bcf4f412a1e9d8a32d9cc5b5c4ad8cab76acd3d8478107a"} err="failed to get container status \"89a2bfcfc50ccbf87bcf4f412a1e9d8a32d9cc5b5c4ad8cab76acd3d8478107a\": rpc error: code = NotFound desc = could not find container \"89a2bfcfc50ccbf87bcf4f412a1e9d8a32d9cc5b5c4ad8cab76acd3d8478107a\": container with ID starting with 89a2bfcfc50ccbf87bcf4f412a1e9d8a32d9cc5b5c4ad8cab76acd3d8478107a not found: ID does not exist" Dec 10 16:57:39 crc kubenswrapper[4755]: I1210 16:57:39.772752 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="015ccb61-36a8-4353-9ac1-f5fc815d9976" 
path="/var/lib/kubelet/pods/015ccb61-36a8-4353-9ac1-f5fc815d9976/volumes" Dec 10 16:57:39 crc kubenswrapper[4755]: I1210 16:57:39.773419 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8899904-239f-4c48-85c0-583d0323fc5e" path="/var/lib/kubelet/pods/f8899904-239f-4c48-85c0-583d0323fc5e/volumes" Dec 10 16:57:47 crc kubenswrapper[4755]: I1210 16:57:47.209735 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7wcx7" Dec 10 16:57:47 crc kubenswrapper[4755]: I1210 16:57:47.210292 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7wcx7" Dec 10 16:57:47 crc kubenswrapper[4755]: I1210 16:57:47.266887 4755 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7wcx7" Dec 10 16:57:48 crc kubenswrapper[4755]: I1210 16:57:48.007603 4755 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7wcx7" Dec 10 16:57:48 crc kubenswrapper[4755]: I1210 16:57:48.069499 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7wcx7"] Dec 10 16:57:49 crc kubenswrapper[4755]: I1210 16:57:49.958659 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7wcx7" podUID="4bd183b2-bef7-416b-9dd7-138098b6481b" containerName="registry-server" containerID="cri-o://b420e3f5c6159b3f073a82992b1690dccbe4211c33873fe8697462b499362b95" gracePeriod=2 Dec 10 16:57:50 crc kubenswrapper[4755]: I1210 16:57:50.483136 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7wcx7" Dec 10 16:57:50 crc kubenswrapper[4755]: I1210 16:57:50.616819 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpnk8\" (UniqueName: \"kubernetes.io/projected/4bd183b2-bef7-416b-9dd7-138098b6481b-kube-api-access-fpnk8\") pod \"4bd183b2-bef7-416b-9dd7-138098b6481b\" (UID: \"4bd183b2-bef7-416b-9dd7-138098b6481b\") " Dec 10 16:57:50 crc kubenswrapper[4755]: I1210 16:57:50.616952 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bd183b2-bef7-416b-9dd7-138098b6481b-utilities\") pod \"4bd183b2-bef7-416b-9dd7-138098b6481b\" (UID: \"4bd183b2-bef7-416b-9dd7-138098b6481b\") " Dec 10 16:57:50 crc kubenswrapper[4755]: I1210 16:57:50.617046 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bd183b2-bef7-416b-9dd7-138098b6481b-catalog-content\") pod \"4bd183b2-bef7-416b-9dd7-138098b6481b\" (UID: \"4bd183b2-bef7-416b-9dd7-138098b6481b\") " Dec 10 16:57:50 crc kubenswrapper[4755]: I1210 16:57:50.618053 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bd183b2-bef7-416b-9dd7-138098b6481b-utilities" (OuterVolumeSpecName: "utilities") pod "4bd183b2-bef7-416b-9dd7-138098b6481b" (UID: "4bd183b2-bef7-416b-9dd7-138098b6481b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 16:57:50 crc kubenswrapper[4755]: I1210 16:57:50.719165 4755 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bd183b2-bef7-416b-9dd7-138098b6481b-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 16:57:50 crc kubenswrapper[4755]: I1210 16:57:50.763252 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bd183b2-bef7-416b-9dd7-138098b6481b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4bd183b2-bef7-416b-9dd7-138098b6481b" (UID: "4bd183b2-bef7-416b-9dd7-138098b6481b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 16:57:50 crc kubenswrapper[4755]: I1210 16:57:50.811653 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bd183b2-bef7-416b-9dd7-138098b6481b-kube-api-access-fpnk8" (OuterVolumeSpecName: "kube-api-access-fpnk8") pod "4bd183b2-bef7-416b-9dd7-138098b6481b" (UID: "4bd183b2-bef7-416b-9dd7-138098b6481b"). InnerVolumeSpecName "kube-api-access-fpnk8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 16:57:50 crc kubenswrapper[4755]: I1210 16:57:50.828790 4755 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bd183b2-bef7-416b-9dd7-138098b6481b-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 16:57:50 crc kubenswrapper[4755]: I1210 16:57:50.829038 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fpnk8\" (UniqueName: \"kubernetes.io/projected/4bd183b2-bef7-416b-9dd7-138098b6481b-kube-api-access-fpnk8\") on node \"crc\" DevicePath \"\"" Dec 10 16:57:50 crc kubenswrapper[4755]: I1210 16:57:50.971059 4755 generic.go:334] "Generic (PLEG): container finished" podID="4bd183b2-bef7-416b-9dd7-138098b6481b" containerID="b420e3f5c6159b3f073a82992b1690dccbe4211c33873fe8697462b499362b95" exitCode=0 Dec 10 16:57:50 crc kubenswrapper[4755]: I1210 16:57:50.971104 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7wcx7" event={"ID":"4bd183b2-bef7-416b-9dd7-138098b6481b","Type":"ContainerDied","Data":"b420e3f5c6159b3f073a82992b1690dccbe4211c33873fe8697462b499362b95"} Dec 10 16:57:50 crc kubenswrapper[4755]: I1210 16:57:50.971134 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7wcx7" event={"ID":"4bd183b2-bef7-416b-9dd7-138098b6481b","Type":"ContainerDied","Data":"a93dd0d88f737ca1f7f0e50e698e8a61e921131e6c7f148887060578d9baa649"} Dec 10 16:57:50 crc kubenswrapper[4755]: I1210 16:57:50.971133 4755 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7wcx7" Dec 10 16:57:50 crc kubenswrapper[4755]: I1210 16:57:50.971156 4755 scope.go:117] "RemoveContainer" containerID="b420e3f5c6159b3f073a82992b1690dccbe4211c33873fe8697462b499362b95" Dec 10 16:57:50 crc kubenswrapper[4755]: I1210 16:57:50.995459 4755 scope.go:117] "RemoveContainer" containerID="a345df7889e3a98afdcb47521957993dc0878fdc0d1acdd2f0f40d6df60c95a6" Dec 10 16:57:51 crc kubenswrapper[4755]: I1210 16:57:51.012072 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7wcx7"] Dec 10 16:57:51 crc kubenswrapper[4755]: I1210 16:57:51.026960 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7wcx7"] Dec 10 16:57:51 crc kubenswrapper[4755]: I1210 16:57:51.029593 4755 scope.go:117] "RemoveContainer" containerID="6b0746f20bec1ec77480be34f56fcdd6bdf09157254795398fb8cf22b7fcfcbe" Dec 10 16:57:51 crc kubenswrapper[4755]: I1210 16:57:51.082786 4755 scope.go:117] "RemoveContainer" containerID="b420e3f5c6159b3f073a82992b1690dccbe4211c33873fe8697462b499362b95" Dec 10 16:57:51 crc kubenswrapper[4755]: E1210 16:57:51.083347 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b420e3f5c6159b3f073a82992b1690dccbe4211c33873fe8697462b499362b95\": container with ID starting with b420e3f5c6159b3f073a82992b1690dccbe4211c33873fe8697462b499362b95 not found: ID does not exist" containerID="b420e3f5c6159b3f073a82992b1690dccbe4211c33873fe8697462b499362b95" Dec 10 16:57:51 crc kubenswrapper[4755]: I1210 16:57:51.083402 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b420e3f5c6159b3f073a82992b1690dccbe4211c33873fe8697462b499362b95"} err="failed to get container status \"b420e3f5c6159b3f073a82992b1690dccbe4211c33873fe8697462b499362b95\": rpc error: code = NotFound desc = could not find container \"b420e3f5c6159b3f073a82992b1690dccbe4211c33873fe8697462b499362b95\": container with ID starting with b420e3f5c6159b3f073a82992b1690dccbe4211c33873fe8697462b499362b95 not found: ID does not exist" Dec 10 16:57:51 crc kubenswrapper[4755]: I1210 16:57:51.083438 4755 scope.go:117] "RemoveContainer" containerID="a345df7889e3a98afdcb47521957993dc0878fdc0d1acdd2f0f40d6df60c95a6" Dec 10 16:57:51 crc kubenswrapper[4755]: E1210 16:57:51.083769 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a345df7889e3a98afdcb47521957993dc0878fdc0d1acdd2f0f40d6df60c95a6\": container with ID starting with a345df7889e3a98afdcb47521957993dc0878fdc0d1acdd2f0f40d6df60c95a6 not found: ID does not exist" containerID="a345df7889e3a98afdcb47521957993dc0878fdc0d1acdd2f0f40d6df60c95a6" Dec 10 16:57:51 crc kubenswrapper[4755]: I1210 16:57:51.083813 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a345df7889e3a98afdcb47521957993dc0878fdc0d1acdd2f0f40d6df60c95a6"} err="failed to get container status \"a345df7889e3a98afdcb47521957993dc0878fdc0d1acdd2f0f40d6df60c95a6\": rpc error: code = NotFound desc = could not find container \"a345df7889e3a98afdcb47521957993dc0878fdc0d1acdd2f0f40d6df60c95a6\": container with ID starting with a345df7889e3a98afdcb47521957993dc0878fdc0d1acdd2f0f40d6df60c95a6 not found: ID does not exist" Dec 10 16:57:51 crc kubenswrapper[4755]: I1210 16:57:51.083844 4755 scope.go:117] "RemoveContainer" 
containerID="6b0746f20bec1ec77480be34f56fcdd6bdf09157254795398fb8cf22b7fcfcbe" Dec 10 16:57:51 crc kubenswrapper[4755]: E1210 16:57:51.084191 4755 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b0746f20bec1ec77480be34f56fcdd6bdf09157254795398fb8cf22b7fcfcbe\": container with ID starting with 6b0746f20bec1ec77480be34f56fcdd6bdf09157254795398fb8cf22b7fcfcbe not found: ID does not exist" containerID="6b0746f20bec1ec77480be34f56fcdd6bdf09157254795398fb8cf22b7fcfcbe" Dec 10 16:57:51 crc kubenswrapper[4755]: I1210 16:57:51.084229 4755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b0746f20bec1ec77480be34f56fcdd6bdf09157254795398fb8cf22b7fcfcbe"} err="failed to get container status \"6b0746f20bec1ec77480be34f56fcdd6bdf09157254795398fb8cf22b7fcfcbe\": rpc error: code = NotFound desc = could not find container \"6b0746f20bec1ec77480be34f56fcdd6bdf09157254795398fb8cf22b7fcfcbe\": container with ID starting with 6b0746f20bec1ec77480be34f56fcdd6bdf09157254795398fb8cf22b7fcfcbe not found: ID does not exist" Dec 10 16:57:51 crc kubenswrapper[4755]: I1210 16:57:51.793648 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bd183b2-bef7-416b-9dd7-138098b6481b" path="/var/lib/kubelet/pods/4bd183b2-bef7-416b-9dd7-138098b6481b/volumes" Dec 10 16:58:10 crc kubenswrapper[4755]: I1210 16:58:10.359177 4755 patch_prober.go:28] interesting pod/machine-config-daemon-ggt8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 16:58:10 crc kubenswrapper[4755]: I1210 16:58:10.359824 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 16:58:40 crc kubenswrapper[4755]: I1210 16:58:40.359527 4755 patch_prober.go:28] interesting pod/machine-config-daemon-ggt8v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 16:58:40 crc kubenswrapper[4755]: I1210 16:58:40.359996 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 16:59:09 crc kubenswrapper[4755]: I1210 16:59:09.289683 4755 scope.go:117] "RemoveContainer" containerID="70b783833605e770f28f5b7978f9c033a7830d1f592fc19ca8cf79a27a597e8f" Dec 10 16:59:09 crc kubenswrapper[4755]: I1210 16:59:09.327863 4755 scope.go:117] "RemoveContainer" containerID="c251a6c3eed3e5c9e81d709c320fb94511f09ae3446bf49163e2f30eea6f741a" Dec 10 16:59:09 crc kubenswrapper[4755]: I1210 16:59:09.376533 4755 scope.go:117] "RemoveContainer" containerID="aae9831892a6095efe81c610f88ce117b7fd84726cd182f179b22718fd5dcd81" Dec 10 16:59:10 crc kubenswrapper[4755]: I1210 16:59:10.359806 4755 patch_prober.go:28] interesting pod/machine-config-daemon-ggt8v 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 16:59:10 crc kubenswrapper[4755]: I1210 16:59:10.360181 4755 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 16:59:10 crc kubenswrapper[4755]: I1210 16:59:10.360276 4755 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" Dec 10 16:59:10 crc kubenswrapper[4755]: I1210 16:59:10.361074 4755 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c6023c772f9cc48b73b2c2cb21153ab38fb3d0f36b9615d776f62c1c12e7d3a8"} pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 10 16:59:10 crc kubenswrapper[4755]: I1210 16:59:10.361155 4755 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" containerName="machine-config-daemon" containerID="cri-o://c6023c772f9cc48b73b2c2cb21153ab38fb3d0f36b9615d776f62c1c12e7d3a8" gracePeriod=600 Dec 10 16:59:10 crc kubenswrapper[4755]: E1210 16:59:10.510605 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:59:10 crc kubenswrapper[4755]: I1210 16:59:10.798079 4755 generic.go:334] "Generic (PLEG): container finished" podID="b132a8b9-1c99-414d-8773-229bf36b305d" containerID="c6023c772f9cc48b73b2c2cb21153ab38fb3d0f36b9615d776f62c1c12e7d3a8" exitCode=0 Dec 10 16:59:10 crc kubenswrapper[4755]: I1210 16:59:10.798126 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" event={"ID":"b132a8b9-1c99-414d-8773-229bf36b305d","Type":"ContainerDied","Data":"c6023c772f9cc48b73b2c2cb21153ab38fb3d0f36b9615d776f62c1c12e7d3a8"} Dec 10 16:59:10 crc kubenswrapper[4755]: I1210 16:59:10.798172 4755 scope.go:117] "RemoveContainer" containerID="89057655675f6d124f1d2ccf4bdb67361d7785d60846c227de326814a7f4d3a9" Dec 10 16:59:10 crc kubenswrapper[4755]: I1210 16:59:10.798886 4755 scope.go:117] "RemoveContainer" containerID="c6023c772f9cc48b73b2c2cb21153ab38fb3d0f36b9615d776f62c1c12e7d3a8" Dec 10 16:59:10 crc kubenswrapper[4755]: E1210 16:59:10.799240 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" 
podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:59:25 crc kubenswrapper[4755]: I1210 16:59:25.758071 4755 scope.go:117] "RemoveContainer" containerID="c6023c772f9cc48b73b2c2cb21153ab38fb3d0f36b9615d776f62c1c12e7d3a8" Dec 10 16:59:25 crc kubenswrapper[4755]: E1210 16:59:25.758901 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:59:38 crc kubenswrapper[4755]: I1210 16:59:38.758019 4755 scope.go:117] "RemoveContainer" containerID="c6023c772f9cc48b73b2c2cb21153ab38fb3d0f36b9615d776f62c1c12e7d3a8" Dec 10 16:59:38 crc kubenswrapper[4755]: E1210 16:59:38.759242 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 16:59:50 crc kubenswrapper[4755]: I1210 16:59:50.757204 4755 scope.go:117] "RemoveContainer" containerID="c6023c772f9cc48b73b2c2cb21153ab38fb3d0f36b9615d776f62c1c12e7d3a8" Dec 10 16:59:50 crc kubenswrapper[4755]: E1210 16:59:50.757930 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 17:00:00 crc kubenswrapper[4755]: I1210 17:00:00.157899 4755 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29423100-5kvfw"] Dec 10 17:00:00 crc kubenswrapper[4755]: E1210 17:00:00.159015 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8899904-239f-4c48-85c0-583d0323fc5e" containerName="extract-utilities" Dec 10 17:00:00 crc kubenswrapper[4755]: I1210 17:00:00.159036 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8899904-239f-4c48-85c0-583d0323fc5e" containerName="extract-utilities" Dec 10 17:00:00 crc kubenswrapper[4755]: E1210 17:00:00.159056 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8899904-239f-4c48-85c0-583d0323fc5e" containerName="extract-content" Dec 10 17:00:00 crc kubenswrapper[4755]: I1210 17:00:00.159065 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8899904-239f-4c48-85c0-583d0323fc5e" containerName="extract-content" Dec 10 17:00:00 crc kubenswrapper[4755]: E1210 17:00:00.159085 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="015ccb61-36a8-4353-9ac1-f5fc815d9976" containerName="registry-server" Dec 10 17:00:00 crc kubenswrapper[4755]: I1210 17:00:00.159093 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="015ccb61-36a8-4353-9ac1-f5fc815d9976" containerName="registry-server" Dec 10 17:00:00 crc kubenswrapper[4755]: E1210 17:00:00.159103 4755 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="4bd183b2-bef7-416b-9dd7-138098b6481b" containerName="extract-content" Dec 10 17:00:00 crc kubenswrapper[4755]: I1210 17:00:00.159109 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bd183b2-bef7-416b-9dd7-138098b6481b" containerName="extract-content" Dec 10 17:00:00 crc kubenswrapper[4755]: E1210 17:00:00.159134 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="015ccb61-36a8-4353-9ac1-f5fc815d9976" containerName="extract-content" Dec 10 17:00:00 crc kubenswrapper[4755]: I1210 17:00:00.159141 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="015ccb61-36a8-4353-9ac1-f5fc815d9976" containerName="extract-content" Dec 10 17:00:00 crc kubenswrapper[4755]: E1210 17:00:00.159149 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="015ccb61-36a8-4353-9ac1-f5fc815d9976" containerName="extract-utilities" Dec 10 17:00:00 crc kubenswrapper[4755]: I1210 17:00:00.159156 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="015ccb61-36a8-4353-9ac1-f5fc815d9976" containerName="extract-utilities" Dec 10 17:00:00 crc kubenswrapper[4755]: E1210 17:00:00.159170 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bd183b2-bef7-416b-9dd7-138098b6481b" containerName="extract-utilities" Dec 10 17:00:00 crc kubenswrapper[4755]: I1210 17:00:00.159180 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bd183b2-bef7-416b-9dd7-138098b6481b" containerName="extract-utilities" Dec 10 17:00:00 crc kubenswrapper[4755]: E1210 17:00:00.159200 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8899904-239f-4c48-85c0-583d0323fc5e" containerName="registry-server" Dec 10 17:00:00 crc kubenswrapper[4755]: I1210 17:00:00.159208 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8899904-239f-4c48-85c0-583d0323fc5e" containerName="registry-server" Dec 10 17:00:00 crc kubenswrapper[4755]: E1210 17:00:00.159216 4755 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bd183b2-bef7-416b-9dd7-138098b6481b" containerName="registry-server" Dec 10 17:00:00 crc kubenswrapper[4755]: I1210 17:00:00.159223 4755 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bd183b2-bef7-416b-9dd7-138098b6481b" containerName="registry-server" Dec 10 17:00:00 crc kubenswrapper[4755]: I1210 17:00:00.159969 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="015ccb61-36a8-4353-9ac1-f5fc815d9976" containerName="registry-server" Dec 10 17:00:00 crc kubenswrapper[4755]: I1210 17:00:00.160013 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8899904-239f-4c48-85c0-583d0323fc5e" containerName="registry-server" Dec 10 17:00:00 crc kubenswrapper[4755]: I1210 17:00:00.160031 4755 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bd183b2-bef7-416b-9dd7-138098b6481b" containerName="registry-server" Dec 10 17:00:00 crc kubenswrapper[4755]: I1210 17:00:00.161083 4755 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29423100-5kvfw" Dec 10 17:00:00 crc kubenswrapper[4755]: I1210 17:00:00.164709 4755 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 10 17:00:00 crc kubenswrapper[4755]: I1210 17:00:00.164780 4755 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 10 17:00:00 crc kubenswrapper[4755]: I1210 17:00:00.170366 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29423100-5kvfw"] Dec 10 17:00:00 crc kubenswrapper[4755]: I1210 17:00:00.250949 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c0cd0fc3-d50c-42df-a3e7-9dd641f1f0d7-config-volume\") pod \"collect-profiles-29423100-5kvfw\" (UID: \"c0cd0fc3-d50c-42df-a3e7-9dd641f1f0d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423100-5kvfw" Dec 10 17:00:00 crc kubenswrapper[4755]: I1210 17:00:00.251216 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wv46\" (UniqueName: \"kubernetes.io/projected/c0cd0fc3-d50c-42df-a3e7-9dd641f1f0d7-kube-api-access-2wv46\") pod \"collect-profiles-29423100-5kvfw\" (UID: \"c0cd0fc3-d50c-42df-a3e7-9dd641f1f0d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423100-5kvfw" Dec 10 17:00:00 crc kubenswrapper[4755]: I1210 17:00:00.251271 4755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c0cd0fc3-d50c-42df-a3e7-9dd641f1f0d7-secret-volume\") pod \"collect-profiles-29423100-5kvfw\" (UID: \"c0cd0fc3-d50c-42df-a3e7-9dd641f1f0d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423100-5kvfw" Dec 10 17:00:00 crc kubenswrapper[4755]: I1210 17:00:00.352925 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wv46\" (UniqueName: \"kubernetes.io/projected/c0cd0fc3-d50c-42df-a3e7-9dd641f1f0d7-kube-api-access-2wv46\") pod \"collect-profiles-29423100-5kvfw\" (UID: \"c0cd0fc3-d50c-42df-a3e7-9dd641f1f0d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423100-5kvfw" Dec 10 17:00:00 crc kubenswrapper[4755]: I1210 17:00:00.353005 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c0cd0fc3-d50c-42df-a3e7-9dd641f1f0d7-secret-volume\") pod \"collect-profiles-29423100-5kvfw\" (UID: \"c0cd0fc3-d50c-42df-a3e7-9dd641f1f0d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423100-5kvfw" Dec 10 17:00:00 crc kubenswrapper[4755]: I1210 17:00:00.353163 4755 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c0cd0fc3-d50c-42df-a3e7-9dd641f1f0d7-config-volume\") pod \"collect-profiles-29423100-5kvfw\" (UID: \"c0cd0fc3-d50c-42df-a3e7-9dd641f1f0d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423100-5kvfw" Dec 10 17:00:00 crc kubenswrapper[4755]: I1210 17:00:00.354367 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c0cd0fc3-d50c-42df-a3e7-9dd641f1f0d7-config-volume\") pod 
\"collect-profiles-29423100-5kvfw\" (UID: \"c0cd0fc3-d50c-42df-a3e7-9dd641f1f0d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423100-5kvfw" Dec 10 17:00:00 crc kubenswrapper[4755]: I1210 17:00:00.363329 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c0cd0fc3-d50c-42df-a3e7-9dd641f1f0d7-secret-volume\") pod \"collect-profiles-29423100-5kvfw\" (UID: \"c0cd0fc3-d50c-42df-a3e7-9dd641f1f0d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423100-5kvfw" Dec 10 17:00:00 crc kubenswrapper[4755]: I1210 17:00:00.375670 4755 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wv46\" (UniqueName: \"kubernetes.io/projected/c0cd0fc3-d50c-42df-a3e7-9dd641f1f0d7-kube-api-access-2wv46\") pod \"collect-profiles-29423100-5kvfw\" (UID: \"c0cd0fc3-d50c-42df-a3e7-9dd641f1f0d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423100-5kvfw" Dec 10 17:00:00 crc kubenswrapper[4755]: I1210 17:00:00.486862 4755 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29423100-5kvfw" Dec 10 17:00:01 crc kubenswrapper[4755]: I1210 17:00:01.013948 4755 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29423100-5kvfw"] Dec 10 17:00:01 crc kubenswrapper[4755]: W1210 17:00:01.018168 4755 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc0cd0fc3_d50c_42df_a3e7_9dd641f1f0d7.slice/crio-33a79f943038185bb0028c9d143ab1f356b07ed44bb82bed1bc8732e3877beef WatchSource:0}: Error finding container 33a79f943038185bb0028c9d143ab1f356b07ed44bb82bed1bc8732e3877beef: Status 404 returned error can't find the container with id 33a79f943038185bb0028c9d143ab1f356b07ed44bb82bed1bc8732e3877beef Dec 10 17:00:01 crc kubenswrapper[4755]: I1210 17:00:01.708428 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29423100-5kvfw" event={"ID":"c0cd0fc3-d50c-42df-a3e7-9dd641f1f0d7","Type":"ContainerStarted","Data":"33a79f943038185bb0028c9d143ab1f356b07ed44bb82bed1bc8732e3877beef"} Dec 10 17:00:02 crc kubenswrapper[4755]: I1210 17:00:02.719561 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29423100-5kvfw" event={"ID":"c0cd0fc3-d50c-42df-a3e7-9dd641f1f0d7","Type":"ContainerStarted","Data":"a4eceae5cded338a8c38b9d697176b1d49778b9f8b2a67e8f547773aee96aecc"} Dec 10 17:00:02 crc kubenswrapper[4755]: I1210 17:00:02.742420 4755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29423100-5kvfw" podStartSLOduration=2.742397019 podStartE2EDuration="2.742397019s" podCreationTimestamp="2025-12-10 17:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 17:00:02.740125567 +0000 UTC m=+5799.341009199" watchObservedRunningTime="2025-12-10 17:00:02.742397019 +0000 UTC m=+5799.343280651" Dec 10 17:00:03 crc kubenswrapper[4755]: I1210 17:00:03.750753 4755 generic.go:334] "Generic (PLEG): container finished" podID="c0cd0fc3-d50c-42df-a3e7-9dd641f1f0d7" containerID="a4eceae5cded338a8c38b9d697176b1d49778b9f8b2a67e8f547773aee96aecc" exitCode=0 Dec 10 17:00:03 crc kubenswrapper[4755]: I1210 17:00:03.750798 
4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29423100-5kvfw" event={"ID":"c0cd0fc3-d50c-42df-a3e7-9dd641f1f0d7","Type":"ContainerDied","Data":"a4eceae5cded338a8c38b9d697176b1d49778b9f8b2a67e8f547773aee96aecc"} Dec 10 17:00:03 crc kubenswrapper[4755]: I1210 17:00:03.775684 4755 scope.go:117] "RemoveContainer" containerID="c6023c772f9cc48b73b2c2cb21153ab38fb3d0f36b9615d776f62c1c12e7d3a8" Dec 10 17:00:03 crc kubenswrapper[4755]: E1210 17:00:03.776365 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 17:00:05 crc kubenswrapper[4755]: I1210 17:00:05.182852 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29423100-5kvfw" Dec 10 17:00:05 crc kubenswrapper[4755]: I1210 17:00:05.282433 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c0cd0fc3-d50c-42df-a3e7-9dd641f1f0d7-secret-volume\") pod \"c0cd0fc3-d50c-42df-a3e7-9dd641f1f0d7\" (UID: \"c0cd0fc3-d50c-42df-a3e7-9dd641f1f0d7\") " Dec 10 17:00:05 crc kubenswrapper[4755]: I1210 17:00:05.282744 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wv46\" (UniqueName: \"kubernetes.io/projected/c0cd0fc3-d50c-42df-a3e7-9dd641f1f0d7-kube-api-access-2wv46\") pod \"c0cd0fc3-d50c-42df-a3e7-9dd641f1f0d7\" (UID: \"c0cd0fc3-d50c-42df-a3e7-9dd641f1f0d7\") " Dec 10 17:00:05 crc kubenswrapper[4755]: I1210 17:00:05.282870 4755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c0cd0fc3-d50c-42df-a3e7-9dd641f1f0d7-config-volume\") pod \"c0cd0fc3-d50c-42df-a3e7-9dd641f1f0d7\" (UID: \"c0cd0fc3-d50c-42df-a3e7-9dd641f1f0d7\") " Dec 10 17:00:05 crc kubenswrapper[4755]: I1210 17:00:05.283458 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0cd0fc3-d50c-42df-a3e7-9dd641f1f0d7-config-volume" (OuterVolumeSpecName: "config-volume") pod "c0cd0fc3-d50c-42df-a3e7-9dd641f1f0d7" (UID: "c0cd0fc3-d50c-42df-a3e7-9dd641f1f0d7"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 17:00:05 crc kubenswrapper[4755]: I1210 17:00:05.284061 4755 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c0cd0fc3-d50c-42df-a3e7-9dd641f1f0d7-config-volume\") on node \"crc\" DevicePath \"\"" Dec 10 17:00:05 crc kubenswrapper[4755]: I1210 17:00:05.292771 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0cd0fc3-d50c-42df-a3e7-9dd641f1f0d7-kube-api-access-2wv46" (OuterVolumeSpecName: "kube-api-access-2wv46") pod "c0cd0fc3-d50c-42df-a3e7-9dd641f1f0d7" (UID: "c0cd0fc3-d50c-42df-a3e7-9dd641f1f0d7"). InnerVolumeSpecName "kube-api-access-2wv46". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 17:00:05 crc kubenswrapper[4755]: I1210 17:00:05.300098 4755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0cd0fc3-d50c-42df-a3e7-9dd641f1f0d7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c0cd0fc3-d50c-42df-a3e7-9dd641f1f0d7" (UID: "c0cd0fc3-d50c-42df-a3e7-9dd641f1f0d7"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 17:00:05 crc kubenswrapper[4755]: I1210 17:00:05.386098 4755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2wv46\" (UniqueName: \"kubernetes.io/projected/c0cd0fc3-d50c-42df-a3e7-9dd641f1f0d7-kube-api-access-2wv46\") on node \"crc\" DevicePath \"\"" Dec 10 17:00:05 crc kubenswrapper[4755]: I1210 17:00:05.386127 4755 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c0cd0fc3-d50c-42df-a3e7-9dd641f1f0d7-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 10 17:00:05 crc kubenswrapper[4755]: I1210 17:00:05.772263 4755 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29423100-5kvfw" Dec 10 17:00:05 crc kubenswrapper[4755]: I1210 17:00:05.781737 4755 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29423100-5kvfw" event={"ID":"c0cd0fc3-d50c-42df-a3e7-9dd641f1f0d7","Type":"ContainerDied","Data":"33a79f943038185bb0028c9d143ab1f356b07ed44bb82bed1bc8732e3877beef"} Dec 10 17:00:05 crc kubenswrapper[4755]: I1210 17:00:05.781797 4755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33a79f943038185bb0028c9d143ab1f356b07ed44bb82bed1bc8732e3877beef" Dec 10 17:00:05 crc kubenswrapper[4755]: I1210 17:00:05.831076 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29423055-8lsjm"] Dec 10 17:00:05 crc kubenswrapper[4755]: I1210 17:00:05.839975 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29423055-8lsjm"] Dec 10 17:00:07 crc kubenswrapper[4755]: I1210 17:00:07.776819 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b3dcf6a-eaa0-4c77-a572-6d8f670c5b26" path="/var/lib/kubelet/pods/4b3dcf6a-eaa0-4c77-a572-6d8f670c5b26/volumes" Dec 10 17:00:09 crc kubenswrapper[4755]: I1210 17:00:09.433095 4755 scope.go:117] "RemoveContainer" containerID="d44fd080980ef2ccf7576f8f0202605b930b156d7ad022932e00681c7dbf6994" Dec 10 17:00:18 crc kubenswrapper[4755]: I1210 17:00:18.757817 4755 scope.go:117] "RemoveContainer" containerID="c6023c772f9cc48b73b2c2cb21153ab38fb3d0f36b9615d776f62c1c12e7d3a8" Dec 10 17:00:18 crc kubenswrapper[4755]: E1210 17:00:18.758651 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 17:00:25 crc kubenswrapper[4755]: I1210 17:00:25.054209 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-db-sync-jfc28"] Dec 10 17:00:25 crc kubenswrapper[4755]: I1210 17:00:25.062143 4755 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-db-sync-jfc28"] Dec 10 17:00:25 crc kubenswrapper[4755]: I1210 17:00:25.772216 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="998863b6-4f48-4c8b-8011-a40377686b99" path="/var/lib/kubelet/pods/998863b6-4f48-4c8b-8011-a40377686b99/volumes" Dec 10 17:00:29 crc kubenswrapper[4755]: I1210 17:00:29.757939 4755 scope.go:117] "RemoveContainer" containerID="c6023c772f9cc48b73b2c2cb21153ab38fb3d0f36b9615d776f62c1c12e7d3a8" Dec 10 17:00:29 crc kubenswrapper[4755]: E1210 17:00:29.758726 4755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ggt8v_openshift-machine-config-operator(b132a8b9-1c99-414d-8773-229bf36b305d)\"" pod="openshift-machine-config-operator/machine-config-daemon-ggt8v" podUID="b132a8b9-1c99-414d-8773-229bf36b305d" Dec 10 17:00:31 crc kubenswrapper[4755]: I1210 17:00:31.059174 4755 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-storageinit-rvkrd"] Dec 10 17:00:31 crc kubenswrapper[4755]: I1210 17:00:31.077750 4755 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-storageinit-rvkrd"] Dec 10 17:00:31 crc kubenswrapper[4755]: I1210 17:00:31.772245 4755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="085de3b2-f23a-4359-a057-459a8a81d898" path="/var/lib/kubelet/pods/085de3b2-f23a-4359-a057-459a8a81d898/volumes"